I doubt it. Human intelligence evolved from organisms much less intelligent than LLMs and no philosophy was needed. Just trial and error and competition.
We are trying to get there without a few hundred million years of trial and error. To do that we need to narrow the search space, and to do that we actually do need more guiding philosophy and a better understanding of intelligence.
If you look at the AI systems that have actually worked, like chess and Go programs and LLMs, they came from understanding the problems and from engineering, not really from philosophy.
Instead what they usually do is lower the fidelity and think they've done what you said. Which results in them getting eaten. Once eaten, they can't learn from mistakes no mo. Their problem.
Because if we don't mix up "intelligence", the phenomenon of increasingly complex self-organization in living systems, with "intelligence", our experience of being able to mentally model complex phenomena in order to interact with them, then it becomes easy to see that the search speed you speak of is already growing exponentially.
In fact, that's all it does. Culture goes faster than genetic selection. Printing goes faster than writing. Democracy is faster than theocracy. Radio is faster than post. A computer is faster than a brain. LLMs are faster than trained monkeys and complain less. All across the planet, systems bootstrap themselves into more advanced systems as soon as I look at 'em, and I presume even when I don't.
OTOH, all the metaphysics stuff about "sentience" and "sapience" that people who can't tell one from the other love to talk past each other about: all of that only comes into view if one asks what happens to the search space when the search speed is increasing at an ever-increasing rate.
Such as: whether the search space is finite, whether it's mutable, in what order to search it, whether it's ethical to operate from quantized representations of it; funky, sketchy, scary stuff, the lot of it. One's underlying assumptions about this process determine much of one's outlook on life, as well as on complex socially organized activities. One usually receives those assumptions through acculturation and may be unaware of what exactly they say.
Watch a coding agent adapt my software to changing requirements and you'll realise just how far spiders have to go.
Just kidding. Personally I don't think intelligence is a meaningful concept without context (or an environment in biology). Not much point comparing behaviours born in completely different contexts.
"Some tests can be cheesed by a statistical model" is much less sexy and clickable than "my computer is sentient", but it's what's actually going on lol
I'm in no way implying that it's impossible to replicate, just that LLMs have almost nothing to do with replicating intelligence. They aren't doing any of the things even simple life forms are doing.
But it would be more honest and productive imo if people would just say outright when they don’t think AGI is possible (or that AI can never be “real intelligence”) for religious reasons, rather than pretending there’s a rational basis.
AGI is not possible because we don't yet have a clear and commonly agreed definition of intelligence, and more importantly we don't have a definition of consciousness, nor can we clearly define the link (if there is one) between the two.
Until we have that, AGI is just a magic word.
When we have those two clear definitions, it will mean we have understood them, and then we can work toward AGI.
Plenty of things could theoretically exist that aren't possible and likely will never be possible.
Like, sure, a Dyson sphere would solve our energy needs. We can't build one now and we almost certainly never will lol
"AGI" is theoretically feasible, sure. Our brains are just matter. But they're also an insanely complex and complicated system that came out of a billion years of evolution.
A little rinky dink statistical model doesn't even scratch the surface of it, and I don't understand why people think it does.
Sorry you got triggered. I know it can be an emotional topic for some people. I'll try to explain in a simple way.
We clearly are replicating at least some significant aspects of human intelligence via LLMs, despite biological complexity. So we obviously don't need a 100% complete understanding of the corresponding biology to build things which achieve similar goals.
In other words, we can (conceivably) figure out how intelligence works and how to produce it independently of figuring out exactly how the human brain produces intelligence, just like we learned the laws of aerodynamics well enough to build airplanes independently of understanding everything about the biology of birds.
Whether we will achieve this or not to the point of AGI is a separate engineering question. I'm only pointing out how flawed these lines of argument are.
> We clearly are replicating at least some significant aspects of human intelligence via LLMs
That's a very load-bearing "clearly". I don't think it's clear at all lol.
Again, you are vastly underestimating the scale here.
Heavier-than-air flight is (relatively) straightforward and it's easy for humans to build models. Also, you know when you're flying.
Building a star is theoretically also relatively straightforward -- just collect a lot of gas and dust in one area and wait for gravity to do its thing.
Actually doing that is left as an exercise to the reader.
> That's a very load-bearing "clearly". I don't think it's clear at all lol.
Are you really disputing that LLMs can replicate some aspects of human intelligence? I mean, they're often passing the Turing test, writing non-trivial programs, and got a gold medal in the IMO.
Maybe you aren't well informed on their actual capabilities?
> Heavier-than-air flight is (relatively) straightforward and it's easy for humans to build models.
Actually, before the first airplanes were created, plenty of people were making arguments very similar to yours to dismiss human flight as impossible.
"Heavier-than-air flying machines are impossible." - Lord Kelvin, 1895
When you try to solve a problem, the goal, or the reason to reject the current solution, is often vague and hard to put into words. Irrational. For example, for many years the fifth postulate of Euclid was a source of mathematical discontent because of a vague feeling that it was far too complex compared to the other four. Such irrationality is a necessary step in human thought.
Yes, that’s fair. I’m not saying there’s no value to irrational hunches (or emotions, or spirituality). Just that you should be transparent when that’s the basis for your beliefs.
rationalism has become the new religion. Roko's basilisk is a ghost story and the quest for AGI is today's quest for the philosopher's stone. and people believe this shit because they can articulate a "rational basis"
Wouldn't it be nice if LLMs emulated the real world!
They predict next likely text token. That we can do so much with that is an absolute testament to the brilliance of researchers, engineers, and product builders.
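To make that concrete, here's a minimal sketch of what "predict the next likely text token" looks like in practice. The model choice ("gpt2") and the Hugging Face transformers library are just illustrative assumptions, and greedy argmax decoding is shown for simplicity:

```python
# Minimal sketch of next-token prediction (assumes the "gpt2" checkpoint
# and the Hugging Face transformers library; greedy pick for clarity).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # (batch, seq_len, vocab_size)

next_id = int(logits[0, -1].argmax())    # most likely next token id
print(tokenizer.decode(next_id))         # e.g. " Paris"
```

Everything built on top of an LLM, from chat to code generation, comes from looping this single step (usually with sampling rather than argmax) and engineering around it.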