
SEEKING WORK | Remote or Honolulu, HI | shiro@acm.org

SaaS back-end dev, database engine, compiler/interpreter/DSL.

Most productive with Common Lisp, Clojure, Scheme, Go, C, C++.

Past projects: https://practical-scheme.net/ShiroKawai.pdf

Github: https://github.com/shirok

Some may know me as the author of Gauche Scheme https://practical-scheme.net/gauche/, or as a Japanese translator of Paul Graham's "Hackers and Painters", Conrad Barski's "Land of Lisp", and Stuart Halloway's "Programming Clojure".


Unless you're dealing with contemporary pieces, atonal or with wild harmony, modifiers are largely the result of tonality, mode and chord progression, aren't they? I hardly need to make myself "remember" them...

But yeah, I'm terrible with rhythm. It's interesting that there are different types of brains when it comes to perceiving music.


Don't confuse lisp-as-an-executable-device-to-think-meta with lisp-as-one-of-many-programming-language-choices. Alan is talking about the former.

The idea---not just to think about programs per se, but to think about abstract properties and operations of a certain category of programs, and to be able to actually write them down---is what gives you exponential power. Lisp was the first programming language to provide such a tool, letting you break through the ceiling of first-order thinking.

Nowadays there are other languages that provide meta-level tools. And once you get the idea, the language doesn't really matter; you can apply it to any language, though you may have to build scaffolding around it to compensate for the lack of language support (cf. Greenspun's tenth rule). So if you already got the idea through other languages, good for you. You'll see Lisp as an arcane prototype of modern languages.

But note: since Lisp lets you write the meta-level description in the same language as the base language, you can climb this ladder indefinitely, meaning you can think of a program-generating program, then a program-generating-program-generating program, and so on, using a single set of tools. It is indeed powerful but somewhat crude, so modern advanced languages intentionally restrict that aspect and just try to cover the sweet spot. Even if you're familiar with metaprogramming, it may still be worth playing with Lisp to see what (meta)*programming is like, to learn its pros and cons, and to pick up ideas that might come in handy someday when single-level metaprogramming isn't enough.
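
To make that concrete, here is a minimal Common Lisp sketch (toy names of my own, not from any particular codebase): WITH-TIMING is an ordinary program-generating program, and DEFINE-TIMED-DEFINER climbs one rung higher by generating a new defining macro.

    ;; A program-generating program: wraps BODY with a crude timer.
    ;; (Variable capture of START is ignored for brevity.)
    (defmacro with-timing (&body body)
      `(let ((start (get-internal-real-time)))
         (prog1 (progn ,@body)
           (format t "~&elapsed: ~a ticks~%"
                   (- (get-internal-real-time) start)))))

    ;; One rung up, a program-generating-program-generating program:
    ;; it defines a DEFUN-like macro whose definitions are
    ;; automatically wrapped in WITH-TIMING.
    (defmacro define-timed-definer (definer-name)
      `(defmacro ,definer-name (name args &body body)
         `(defun ,name ,args (with-timing ,@body))))

    (define-timed-definer defun/timed)

    (defun/timed slow-sum (n)
      (loop for i below n sum i))

The same macro facility, in the same language, serves both levels; that is the crude-but-uniform power I mean.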


> Lisp let’s you build the language you want to use, and have it interoperate for the rest of the ecosystem.

I suspect this strength of Lisp may have undermined the improvement of Lisp compilers in the long term. That is, the fact that programmers can customize the language to run their applications efficiently puts less pressure on making the existing compiler better, compared to languages that don't give programmers such flexibility.

I've worked on performance-sensitive commercial Common Lisp applications. We employed heavy macrology so that optimal instructions were generated in the performance-critical regions. Effectively, it was reimplementing part of the compiler to deal with our domain-specific meta-information, such as parameterized types. It worked, but it came with a maintenance cost--as if we were maintaining another layer of the compiler. It was a burden. I'm sure similar effort has been spent in many other places and eventually abandoned.
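
As a hypothetical, much-simplified illustration of that kind of macrology (not the actual code from the project above): here the "meta information" is just the element type of a vector, passed at macroexpansion time, and the macro expands into a fully declared loop so that a compiler like SBCL can open-code unboxed arithmetic.

    ;; ELEMENT-TYPE is assumed to be a float type such as DOUBLE-FLOAT.
    (defmacro typed-vector-sum (vec element-type)
      `(let ((v ,vec))
         (declare (type (simple-array ,element-type (*)) v)
                  (optimize (speed 3) (safety 1)))
         (loop for x of-type ,element-type across v
               sum x of-type ,element-type)))

    ;; (typed-vector-sum data double-float) expands into a loop
    ;; specialized for (simple-array double-float (*)).

Multiply this by every performance-critical construct in the application and you get the "extra compiler layer" I'm describing.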

Meanwhile, languages that give less power to the programmer put a lot of effort into improving the compiler, and/or into evolving the language to work more effectively with the improved compiler, and over a few decades they have ended up with quite sophisticated compilers.

I still mostly use Lisp-family languages at work, but I sometimes wonder whether that power has an adverse effect in the long term.


This is an excellent reply and I think it goes to the crux of the problem you were facing.

Perhaps, then, Julia makes sense in those cases.

Or, it's time for an enhanced Common Lisp implementation targeted specifically at scientific computing. I agree that "extending the language" is one thing and "implementing things that the compiler should have given me" is another.


SBCL is already suitable for scientific computing. All that's needed are good engineers who can write good libraries.


'Scientific computing' is a wide area. One recent shot at it is CLASP, which especially aims to reuse scientific computing tools from the C++ world:

https://github.com/drmeister/clasp


It depends on the individual, but having watched how my wife breastfed my son, I wouldn't say "just offer her breast". Creating milk in a woman's breast isn't free. She must prepare for it, and she also has to take care of her breasts. My wife said it was very tiring, and she got mastitis a few times. The experience may vary, and I guess it'd generally be easier for young, healthy women, though.

We couldn't switch to formula because after the initial few weeks my son refused to take it. We tried various approaches, but nothing worked except breastfeeding. If it had worked, I wouldn't have minded preparing formula, considering how much work my wife had already done through pregnancy and delivery.


Not significantly, but on the Hiragana page there are some interesting differences from standard romanization.

The most notable one is the conflation of the vowels [イ] and [エ]. Often both are denoted with "i", and sometimes one of them is "yi". In modern American English the sound /i/ falls in the middle of Japanese イ and エ. I'm not sure whether that is the cause in this case, or whether the Ryukyu dialect had shifted vowels.

セ is denoted as "she" (usually "se"). This consonant variation appears in some Japanese dialects.

ヒ is denoted as "fi". It might stem from Old Japanese pronunciation.

Curiously, ヰ is denoted as "i" and ヱ is denoted as "yi/ye/e". Usually they are "wi" and "we", but those pronunciations have been lost in modern Japanese.


I guess it depends on each child. My son is diagnosed with ADHD, and even when I'm teaching him one-on-one he keeps moving some part of his body or changing his posture frequently. It bothered me before, but I learned that it actually makes it easier for him to work on the task. Restricting his movement seems to make some part of his brain hyperfocus, and within a short period of time his brain "shuts down".

In his school, the classroom is kind of free-style, and although he has an assigned desk he can choose other places to work on his tasks, which seems to help him a lot. (It's a Montessori school, so most of the time each kid works independently according to his or her own study plan.)


That's exactly what I mean. Children without ADHD are able to sit still and focus at the same time, building that muscle.

Managing ADHD seems to be (from my non-medical non-professional perspective) all about training and using that capacity to focus.

Even for ADHD children, sitting still trains the brain, it's just that they can't stand it for very long, and there seems to be no benefit in forcing them much beyond what they feel comfortable with.


I'm not sure what source you're drawing on for the particular importance of "sitting still". Do you mean not being distracted from the given task? Or that the physicality of sitting still has a benefit in itself? (The two can be distinguished easily---if the latter is the case, "sitting still without doing anything, just daydreaming wildly" would also have a benefit.) Or do you mean meditation? That's a different activity altogether.

In the ADHD case, it seems to be about the way stimuli are processed. They (or at least my kid) need a sort of synchronization stimulus (or outside disturbance, in the sense that the disturbance suppresses the divergence of a hypersensitive system) to keep the mind on the rails. If no such stream of stimuli is provided, he has to create one on his own. I try to help him find and build his own toolset to work with. (One of the activities, for example, is tapping along with a metronome while doing other tasks.)

[Edit] I see your comment in another thread that you're referring to Zen meditation. I've learned meditation and I agree on its benefits, but it doesn't apply to the current discussion of a school setting---"sitting still listening to lectures" and "sitting still meditating" are very different activities. The latter would certainly develop the ability to do the former, but I'm dubious about simply forcing the former.


Your third example isn't valid. It needs a bit of tweaking.

Tarou ga Noriko wo mita toshokan

The particle "ga" and "wa" both introduce a topic. But in a phrase to explain a noun, we use "ga" exclusively. Your main point still holds, in a sense that "Tarou ga Noriko wo mita" is a valid sentence. But to be precise, "mita" in those two sentences are different conjugated forms; it just happens that two conjugated forms are the same in the verb "miru" (to see).


> The particle "ga" and "wa" both introduce a topic

Forgive me for saying this, since you seem to be a native speaker, but don't you mean that they both introduce the subject, not topic (using 'topic' as a linguistic term)?

"Wa" would be the topicalising subject marker, denoting known information:

Tanaka wa nihon ni itta.

Tanaka went to Japan. -> As for Tanaka, he went to Japan. Tanaka = known information (i.e. Tanaka is familiar to the listener)

"Ga", while also a subject marker could denote/introduce new information:

Tanaka ga nihon ni itta.

Tanaka went to Japan. -> e.g. It was Tanaka who went to Japan.

Tanaka = new information (e.g. the listener did not know Tanaka was the one going to Japan.)

(Note: I realise there are other constructions for my interpretation of the ga-sentence)


My knowledge of Japanese grammar is in Japanese, so I'm not certain about the English term for 主語, to be honest. We use the same term to describe 'subject' in English grammar. I used 'topic' just because the original article used it.

Your explanation of 'ga'/'wa' is spot on as far as I can tell as a layman: a native speaker with a standard Japanese grammar education in Japan but no advanced linguistics degree.

I'd say that, because 'wa' emphasizes the introduced subject as the center of interest, it isn't used in a subordinate clause.

Tanaka ga nihon ni itta hi. (The day Tanaka went to Japan) ; ok - the interest is on 'hi'

Tanaka wa nihon ni itta hi. ; invalid


Thanks for your reply. I believe 主語 covers both subject and topic. Since an English sentence such as "John loves Mary." can be understood as e.g. "It is John (not James) who loves Mary." or "John loves Mary (not Lisa)", it might have several formal representations in Japanese via e.g. the use of wa/ga.

Also, see user gizmo686's excellent explanation for one approach below.


Not a native speaker, but I have studied Japanese linguistically (as well as as a second language).

Wa is a bit of a complicated topic. The prevailing thinking is (roughly) that it has two distinct meanings: topic marking and contrast. As a topic marker, wa does not introduce the subject (although in many cases there is a null anaphora referring to the topic).

In any case, the common linguistic explanation for shiro's correction is that the subject of subordinate clauses resists topicalization.


(Is this where people start flaunting their PhDs and professor titles? j/k ;-) academic here as well - I do not hold a PhD)

Well, I realise the topic + contrast bit, but is it really treated as a null anaphora, rather than acting as both topic and subject marker in my example...? My examples referred to information structure more than anything.

Yes, I realise you can have sentences like "Ashita wa Tanaka ga..."/"Zou wa hana ga nagai." - I've even seen a discussion of double topics (some old, theoretical text by Yasuo Kitahara IIRC, probably better known for 'Mondai-na Nihongo'). I also realise that in some contexts where it seems to denote a subject, its noun is only a topic ("watashi wa unagi desu").

Logically, it would indeed be quite difficult for a subordinate clause to contain the/a topic.

Anyway, I'm curious if you happen to have further explanations (or articles)!

(Unrelated note: why is it that Japanese, of all things, makes us crawl out from under our rocks...? :-))


No PhD here either, just undergrad followed by some hobbyist reading (of scholarly sources) on Japanese linguistics.

To be clear, the comment about null anaphora was more of a throwaway remark anticipating the objection that sometimes the topicalizing wa does mark the subject. While I have seen this explanation presented, and it is my preferred explanation, I would not necessarily call it pervasive. Now, for the explanation itself (unfortunately, I am on vacation, so I cannot check any of my references).

Japanese is a clear example of a pro-drop language, so using pronoun dropping (aka, null anaphora) as an explanation requires less justification than it would in English, where we only see it in specific contexts. Additionally, we see the topicalizing "wa" in various contexts, not all of which can be understood as subjects, so a unified explanation that can account for all of them would be preferred.

For example, consider the sentence

1) Mary-ga ringo-o tabeta

We can topicalize Mary with the following derivation:

2) Mary-wa Mary-ga ringo-o tabeta

3) Mary-wa anohito-ga ringo-o tabeta (Pro-form substitution)

4) Mary-wa ringo-o tabeta (Deletion)

Similarly, we can topicalize ringo with

2) Ringo-wa Mary-ga ringo-o tabeta

3) Ringo-wa Mary-ga are-o tabeta (Pro-form substitution)

4) Ringo-wa Mary-ga tabeta (Deletion)

We also have the following sentence (kudamono = fruit)

Kudamono-wa Mary-ga ringo-o tabeta

Admittedly, I struggle to think of a context where the speaker would not drop Mary due to context, but that should not be relevant here, and I am sure better examples exist.

Notice that, under the null anaphora explanation, all three of these examples could be explained in the same way. If we were to explain the first example as wa being a subject marker, then we would need to explain the second example as wa being an object marker, and the third example as wa being just a topic marker.

I have seen an alternative explanation that describes topicalization in Japanese as a transformation rule. I have mostly seen this from researchers who view Japanese non-configurationally, who argue that a rule such as [ga/o] -> [wa] in a non-configurational language is directly analogous to a movement rule in a configurational analysis. Even under this approach, you still need to account for sentences where the topic has no co-referential place in the rest of the sentence.

Further, even under this alternative explanation, I would still not call wa a topic marker. Rather, I would say that when the listener reconstructs the deep structure, he uses pragmatics to infer what syntactic role the topic plays. Indeed, if you consider sentences such as "ringo wa tabeta" and "Mary wa tabeta", you can see that there is no syntactic way to identify where the topic falls in the deep structure.


Thank you so much for this writeup - it does ring a bell!

Don't worry about references. This is more than enough to get me re-started, dig through my old books/articles and find new ones.

> Further, even under this alternative explanation, I would still not call wa a topic marker.

-> "subject marker"? ;-)

Enjoy your vacation (and maybe pursue a PhD)!


I took it that the author didn't assume that experience (which the article seemed to assume is positively correlated with the length of one's career) was the only factor---lack of proper training, including knowing the "classics", was also an issue. I tend to agree with the latter premise, although I'm not sure the situation is as bad as the author describes. (The author says "I can’t think of a single developer I’ve met professionally who belong to the ACM or to IEEE". Is that really the case?)

IMHO, it's ok to lift answers from SO as long as 1) you know it's a shortcut, and 2) when necessary you can trace the history back to the origin, so that you can learn the original frame in which the technique was invented and how it was modified along the way. A good solution in a certain context might not be optimal for your context, even though it has been improved by many.

The second point requires a certain level of skill---being able to search and read CS papers and implement them yourself, or incorporate the ideas into your domain. Without proper training, it's difficult to acquire that kind of skill solely from skimming SO and the like. But I'd like to assume that graduate-level CS courses do impart such skill, and that you can hone it with experience.

I do note that some answers on SO are pretty decent, with references to the original papers. When I answer online questions I try to do the same as much as possible, within my ability and knowledge.


Yup. Shifting to streaming and then to content creation does make sense, yet the fact that they've successfully steered such a shift at this scale is amazing.

