Most people who say "AGI" really mean either "ASI" or "recursive self-improvement".

AGI was already here the day ChatGPT was released. That's Peter Norvig's take too: https://www.noemamag.com/artificial-general-intelligence-is-...



The reason some people treat these as equivalent is that AI algorithm research is one of the things a well-educated adult human can do, so an AGI that commits to that task should be able to improve itself; and if it makes a substantial improvement, it would become, or be replaced by, an ASI.

To some people this is so self-evident that the terms are equivalent, but it does rest on some extra assumptions: that the AI would spend its time developing AI, that human intelligence isn't already the maximum reachable limit, and that the AGI really is an AGI, capable of novel research rather than parroting its training set.

I think those assumptions are pretty easy to grant, but to some people they're obviously true and to others they're obviously false. So depending on your views on those, AGI and ASI will or will not mean the same thing.
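A toy way to see how those assumptions interact (a hedged sketch with made-up numbers, not a model of any real system): treat each research cycle as multiplying capability by some factor. The outcome flips depending on whether that factor exceeds 1 and whether a ceiling exists.

    # Hypothetical sketch of the takeoff argument. Capability compounds only
    # if each self-improvement cycle yields a net gain (factor > 1) and no
    # ceiling is hit. All numbers are illustrative, not empirical.
    def takeoff(capability: float, gain_per_cycle: float,
                ceiling: float | None, cycles: int) -> float:
        for _ in range(cycles):
            if gain_per_cycle <= 1.0:        # no real research ability: stalls
                break
            capability *= gain_per_cycle
            if ceiling is not None and capability >= ceiling:
                return ceiling               # human level was the limit after all
        return capability

    print(takeoff(1.0, 1.1, None, 50))   # compounds to ~117x
    print(takeoff(1.0, 1.1, 2.0, 50))    # capped at 2.0
    print(takeoff(1.0, 0.9, None, 50))   # stays at 1.0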


Funny, but the eyebrow-raising phrase "recursive self-improvement" appears in TFA only in an example about "style adherence" that's completely unrelated to the concept. Pretty clearly a case of the authors trying to game search.

As a prerequisite for recursive self-improvement, and far short of ASI, any conception of AGI really needs to include some kind of self-model. This is conspicuously missing from TFA. Related basic questions: What's in the training set? What's the confidence on any given answer? How much of the network is actually required to answer a given question?
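The confidence question at least has crude proxies you can compute today. A minimal sketch, assuming the Hugging Face transformers API, with gpt2 purely as a stand-in model: score an answer by the mean log-probability the model assigns to its tokens. (This proxy is known to be poorly calibrated, which is sort of the point.)

    # Minimal sketch: the model's "confidence" in an answer, measured as the
    # mean log-probability it assigns to the answer tokens given the question.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def answer_confidence(question: str, answer: str) -> float:
        q_ids = tok(question, return_tensors="pt").input_ids
        ids = torch.cat([q_ids, tok(answer, return_tensors="pt").input_ids], dim=1)
        with torch.no_grad():
            logits = model(ids).logits
        # position p predicts the token at p + 1
        logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
        answer_positions = range(q_ids.shape[1] - 1, ids.shape[1] - 1)
        token_lp = [logprobs[p, ids[0, p + 1]] for p in answer_positions]
        return torch.stack(token_lp).mean().item()

    print(answer_confidence("The capital of France is", " Paris"))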

Partly this stuff is just hard, and mechanistic interpretability as a field is still trying to get traction in many ways; but the whole thing is also fundamentally misaligned with corporate/commercial interests. Still, anything you might want to call intelligent has a working self-model with some access to information about its internal state. Things mentioned in TFA (like working memory) may be involved and even necessary, but they don't seem sufficient.
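For "how much of the network is required", the usual mechanistic-interpretability starting point is ablation: knock out a component and see whether the behavior survives. A minimal sketch in the same spirit, again with gpt2 as a stand-in and skipping whole transformer blocks as the bluntest possible intervention:

    # Skip one transformer block at a time and watch the loss on a prompt.
    # Blocks whose removal barely moves the loss were, for this input,
    # largely unnecessary. Real interpretability work is far subtler.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    ids = tok("The capital of France is Paris", return_tensors="pt").input_ids

    def loss_skipping_block(skip: int | None) -> float:
        def identity_hook(module, inputs, output):
            # pass the block's input straight through, bypassing it
            return (inputs[0],) + output[1:]
        handle = None
        if skip is not None:
            handle = model.transformer.h[skip].register_forward_hook(identity_hook)
        with torch.no_grad():
            loss = model(ids, labels=ids).loss.item()
        if handle is not None:
            handle.remove()
        return loss

    baseline = loss_skipping_block(None)
    for i in range(len(model.transformer.h)):
        print(f"block {i:2d}: {loss_skipping_block(i):.3f} vs baseline {baseline:.3f}")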



