
Exponential curves are theoretical constructs: all actual phenomena are S-shaped.

The only question is when the "exponential regime" flattens out, and the answer is often fairly obvious if you don't begin from the "time = magic" premise.

There's an entire industry of public pseudo-intellectuals whose schtick is to draw logistic phenomena as exponential curves and then cry, "the sky is falling!".
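A minimal sketch of the point (the parameters are illustrative, not fitted to anything real): a logistic curve is numerically indistinguishable from an exponential until it nears its carrying capacity, which is exactly what makes the early data so easy to misread.

    import numpy as np

    # Logistic growth with carrying capacity K looks exponential while
    # x << K, then flattens; K, r, x0 here are made-up illustrations.
    K, r, x0 = 1000.0, 0.5, 1.0

    t = np.linspace(0, 25, 26)
    exponential = x0 * np.exp(r * t)
    logistic = K / (1 + ((K - x0) / x0) * np.exp(-r * t))

    for ti, e, s in zip(t, exponential, logistic):
        print(f"t={ti:4.1f}  exp={e:14.1f}  logistic={s:7.1f}")

    # The two columns track each other closely until x approaches K/2
    # (the inflection point); after that the logistic saturates at K
    # while the exponential keeps compounding without bound.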



>Exponential curves are theoretical constructs: all actual phenomena are S-shaped.

This. I've been thinking much the same recently, though I hadn't put it so succinctly. Nothing in nature is ever exponential (for very long).


On the contrary, few experts expected this level of performance from an AI this soon, either.

If you can identify one or two aspects of human "general" intelligence that an AI cannot ever possess, even in principle, I think a lot of people would be grateful.


In animals, propositional knowledge is built from procedural knowledge, and it can't really be otherwise.

What AI does at the moment is approximate propositional knowledge with statistical associations, rather than take the procedural route. But this fails because P(A|B) doesn't say whether A causes B, B causes A, A is B, A and B are causally unrelated, etc.

What is the procedural route? To perform actions with your body so as to disambiguate the cases. Animals have unambiguous causal models of their own bodies, and their actions are intentional, goal-directed, and effectively "express hypotheses" about the nature of the world. In doing so, they can build actual knowledge of it.
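Here's a toy sketch of that ambiguity (a made-up two-variable example, not a claim about any particular system): two opposite causal structures produce the same P(A|B), and only an intervention, the statistical analogue of acting on the world, tells them apart.

    import random

    random.seed(0)
    N = 100_000

    # Two opposite causal structures, both tuned to give P(A|B) ~ 0.9.
    def a_causes_b():
        a = random.random() < 0.5                  # A ~ Bernoulli(0.5)
        b = a if random.random() < 0.9 else not a  # B copies A 90% of the time
        return a, b

    def b_causes_a():
        b = random.random() < 0.5                  # B ~ Bernoulli(0.5)
        a = b if random.random() < 0.9 else not b  # A copies B 90% of the time
        return a, b

    def p_a_given_b(model):
        draws = [model() for _ in range(N)]
        with_b = [a for a, b in draws if b]
        return sum(with_b) / len(with_b)

    print(p_a_given_b(a_causes_b))  # ~0.9
    print(p_a_given_b(b_causes_a))  # ~0.9: observationally identical

    # Intervention: clamp A = True from outside and watch what B does.
    def b_under_do_a(structure):
        if structure == "a_causes_b":
            return random.random() < 0.9  # B still copies the clamped A
        else:
            return random.random() < 0.5  # B has its own cause; unmoved

    print(sum(b_under_do_a("a_causes_b") for _ in range(N)) / N)  # ~0.9
    print(sum(b_under_do_a("b_causes_a") for _ in range(N)) / N)  # ~0.5

Only the intervention separates the two models; passive observation alone, no matter how much data, cannot.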

There are at least some good reasons to suppose that "bodies which express hypotheses in their actions" require organic properties to do so: because you need adaptation running both bottom-up and top-down for "the mind" to really grow the body in the relevant ways.

In other words, an animal's actions aren't clockwork: in acting, its body and mind change. Every action is a top-down, bottom-up change to the whole animal.


This is a very interesting hypothesis that may well be true for living beings. What I disagree with is that an animal-like body is necessary for forming a world model. A simulation could be sufficient, and there is already work on that front. (Also, I would not characterize deep-learning-based AI as trying to form propositional knowledge. In fact, its strong performance partly stems from not dealing with propositional knowledge directly.)

If a body is in fact necessary, PaLM-E could be paving the way toward it as well. https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal...


You might be interested in this thread by a DeepMind research scientist. https://mobile.twitter.com/AndrewLampinen/status/16396602197...



