Airbnb wanted access to my bank account transaction details (via Plaid) a while ago, "to verify my credit card". Hotels have never looked so appealing.
At some point booking.com decided it didn't want to accept my money because, apparently, I'm a fraud. So now I use it to search and then book directly with the hotel, and booking.com doesn't get its commission.
Does it really matter how the LLM got to a (correct) conclusion?
As long as the explanation is sound as well and I can follow it, I don't really care if the internal process looked quite different, provided it's not outright deceptive.
I think we're noticing that our goalposts for AGI were largely "we'll recognize it when we see it", and now as we are getting to some interesting places, it turns out that different people actually understood very different things by that.
I think many people are now learning that their definition of intelligence was actually not very precise.
From what I've seen, in response to that, goalposts are then often moved in the way that requires least updating of somebody's political, societal, metaphysical etc. worldview. (This also includes updates in favor of "this will definitely achieve AGI soon", fwiw.)
The goalposts were never set at the "Turing test".
It's not a real thing. You do not remember the goalposts ever being there.
Turing put forth a thought experiment in the early days of some discussions about "artificial" thinking machines, on a very philosophical level.
Add to that, nobody who claims to have "passed" the Turing test has ever actually run that thought experiment, which is about taking two respondents and finding out which is human. It is NOT talking to a single respondent and deciding whether they are an LLM or not.
It also has never been considered a valid "test" of "intelligence" as it was obvious from the very very beginning that tricking a person wasn't really meaningful, as most people can be tricked by even simple systems.
ELIZA was the end of any thought around "the Turing test", as it was able to "trick" tons of people and showed how useless the Turing thought experiment was. Anyone who claims ELIZA is intelligent would be very silly.
Arguably there isn't even a widely shared, coherent definition of intelligence: To some people, it might mean pure problem solving without in-task learning; others equate it with encyclopedic knowledge etc.
Given that, I consider it quite possible that we'll reach a point where even more people consider LLMs to have reached or surpassed AGI, while others still only consider them "sufficiently advanced autocomplete".
I'd believe this more if companies weren't continuing to use words like reason, understand, learn, and genius when talking about these systems.
I buy that there's disagreement on what intelligence means in the enthusiast space, but "thinks like people" is pretty clearly the general understanding of the word, and the one that tech companies are hoping to leverage.
The defining feature of true AGI, in my opinion, is that the software itself would decide what to do and do it, with no external prompting beyond environmental input.
Doubly so if the AGI writes software for itself to accomplish a task it decided to do.
Once someone has software like that, not a dog that is sicced on a task, but a bloodhound that seeks out novelty and accomplishment for its own personal curiosity or to test its capabilities, then you have a good chance of convincing me that AGI has been achieved.
6 alphanumeric, case-insensitive characters only allow for about 2 billion unique combinations. I’d have guessed there were more reservations made than that?
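A quick back-of-the-envelope check of that figure (a sketch assuming the usual 36-symbol set, a-z plus 0-9):

    # Distinct 6-character codes over a case-insensitive alphanumeric
    # alphabet: 26 letters + 10 digits = 36 symbols.
    ALPHABET_SIZE = 26 + 10

    codes = ALPHABET_SIZE ** 6
    print(f"{codes:,}")  # 2,176,782,336 -- about 2.18 billion

So the "about 2 billion" figure checks out: 36^6 is roughly 2.18 billion.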
That's only true in classical electrodynamics, as it happens. If you're in a very strong B-field like you might find near a compact object you'll get nonlinear QED effects.
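To put a number on "very strong": the usual yardstick is the Schwinger critical field, the scale at which nonlinear QED effects (vacuum birefringence, photon splitting, pair production) become significant. A minimal sketch of the calculation, with CODATA constants hard-coded so it runs standalone:

    # Schwinger critical fields: the scale where nonlinear QED kicks in.
    M_E = 9.1093837e-31     # electron mass, kg
    C = 2.99792458e8        # speed of light in vacuum, m/s
    Q_E = 1.602176634e-19   # elementary charge, C
    HBAR = 1.054571817e-34  # reduced Planck constant, J*s

    E_crit = M_E**2 * C**3 / (Q_E * HBAR)  # critical electric field, V/m
    B_crit = E_crit / C                    # critical magnetic field, T

    print(f"E_crit ~ {E_crit:.2e} V/m")  # ~1.3e18 V/m
    print(f"B_crit ~ {B_crit:.2e} T")    # ~4.4e9 T, i.e. ~4.4e13 gauss

Magnetar surface fields are estimated to reach roughly 1e14 to 1e15 gauss, well above B_crit, which is why compact objects are where these effects show up.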