That’s like Peter Norvig’s definition of AGI [1], which is framed by analogy to general-purpose digital computers: the "general" in general intelligence refers to a foundation model that can be repurposed for many different contexts. I like that definition because it is clear.
Currently, AGI is defined in ways that make it truly indistinguishable from superintelligence. I don’t find that helpful.
I think "being able to do as well as a 50th percentile human who's had a little practice," on a wide range of tasks, is a pretty decent measure.
Yes, that's more versatile than most of us, because most of us are not at or above the median practiced person in a wide range of tasks. But it's not what I think of when I hear "superintelligence," because its performance on any given task is likely still inferior to that of the best humans.
That seems like a personal definition of superintelligence. I don't think I'm alone in assuming superintelligence needs to surpass all humans for it to be considered "super" vs. just "pretty good intelligence".
> > I think "being able to do as well as a 50th percentile human who's had a little practice," on a wide range of tasks, is a pretty decent measure.
> That seems like a personal definition of superintelligence.
I was giving a definition for artificial general intelligence as distinguished from super-intelligence, since the poster above said that most definitions of AGI were indistinguishable from super-intelligence.
To me, a computer doing as well as a practiced human at a wide swath of things is AGI. It's artificial; it's intelligence; and it's at least somewhat general.
AI is already better than a 50th percentile human on many/most intellectual tasks. Chess, writing business plans, literature reviews, emails, motion graphics, coding…
So, if we say “AI is not AGI” because (1) it can’t do physical tasks, or (2) it can’t yet replace intellectual human labor in most domains (for various reasons), or (3) <insert reason for not being AGI>, then it stands to reason that by the time we reach AGI, it will already be superintelligent (smarter than humans in most domains).
> then it stands to reason that by the time we reach AGI, it will already be superintelligent (smarter than humans in most domains)
> > Yes, that's more versatile than most of us, because most of us are not at or above the median practiced person in a wide range of tasks. But it's not what I think of when I hear "superintelligence," because its performance on any given task is likely still inferior to that of the best humans.
> AI is already better than a 50th percentile human on many/most intellectual tasks. Chess, writing business plans, literature reviews, emails, motion graphics, coding…
Note the caveat above of "with some practice." That's much less clear to me.
??? Most of us have a lot more than 100 hours of practice at things like this. Probably "some practice" means roughly the secondary-school background most people have, plus an undergraduate degree and a couple of years of industry work?
If we're thinking about what human activity it will or can displace, it's probably unfair to compare it to someone with a seasoned doctorate. But it's also probably unfair to compare it to a kid with a few months of music lessons.
[1] https://www.noemamag.com/artificial-general-intelligence-is-...