There you go. Try it with 4. Still doesn't work? Let's revisit when 5 comes out. You will lose the argument eventually, it's only a question of when.
> ... with deriving the geometric product (Clifford algebra) based on a set of definitions and axioms (e.g., the distributive property). Unfortunately, it failed, making numerous errors along the way.
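For reference, here's roughly the kind of derivation in question - just a sketch, and I'm assuming the usual axiom set (associativity, distributivity, and the contraction rule v^2 = <v, v>; the comment above only names distributivity explicitly):

    % Expand (u + v)^2 two ways:
    (u + v)^2 = u^2 + uv + vu + v^2                                      % distributivity
    (u + v)^2 = \langle u + v,\, u + v \rangle
              = \langle u, u \rangle + 2\langle u, v \rangle + \langle v, v \rangle   % contraction axiom
    % Comparing the two expansions (u^2 = \langle u, u \rangle, v^2 = \langle v, v \rangle):
    uv + vu = 2\langle u, v \rangle
    % So orthogonal vectors anticommute (uv = -vu), and in general the product
    % splits into symmetric (inner) and antisymmetric (outer) parts:
    uv = \tfrac{1}{2}(uv + vu) + \tfrac{1}{2}(uv - vu) = u \cdot v + u \wedge v

A few lines of algebra, in other words - a fair test of symbolic bookkeeping, but not exactly Ramanujan territory.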
Also maybe try asking it something that ordinary humans can be expected to accomplish, rather than complaining that nobody has invented Ramanujan-as-a-Service yet.
Also:
> Further, "AI" beating masters of chess also seems to be a product of "beat a human with probabilistic guesses given a large data set".
That was true for chess. Lee Se-Dol's defeat by AlphaGo was different. That wasn't supposed to happen. That was a combination of Monte Carlo tree search and... something else entirely.
That's really when the train of "AI can do X but not Y" left the tracks, IMO.
> Also maybe try asking it something that ordinary humans can be expected to accomplish, rather than complaining that nobody has invented Ramanujan-as-a-Service yet.
Hmm... is it too much to ask such an "AI" for formal (i.e., well-defined) stuff?
I would argue that mathematics, or any formal language, is if anything more precise and less ambiguous than natural language.
> You will lose the argument eventually, it's only a question of when.
Oh, I hope so, but I am skeptical, as I am still not convinced that an LLM qualifies as an "AI".
> You appear to assume that current "AI" is able to "understand" and "think". What makes you so sure?
"To understand" and "to think" are two very different things. Understand means more or less to encode and compress effectively from a perspective of such a system - there's quite a bit of evidence that they do that.
As for "thinking" that is impossible for LLMs as thinking is an action - and LLMs aren't agents that can plan and take action.
Actually, AlphaGo and AlphaZero were agents capable of thinking - just in an extremely simple world, namely the game of Go, Shogi, or Chess. But they had a world model (fully known, since these are simple games) and a way to plan the actions they would take, by evaluating what impact those actions would have on the world and how beneficial that would be for them.
It's just that extending such a system/agent to the real world is very hard.
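To make "world model + planning" concrete, here is a minimal sketch - not AlphaZero, obviously, just the shape of the loop, using a made-up toy game (race to 10: players alternately add 1, 2, or 3; whoever reaches or passes 10 first wins). The agent knows the model and plans by simulating each candidate action and scoring the state it leads to:

    # Minimal "know the world model, plan by simulating actions" sketch.
    # Toy game: players alternately add 1, 2, or 3 to a running total;
    # whoever reaches (or passes) 10 first wins. Not AlphaZero - just the
    # plan-by-lookahead skeleton it shares with exhaustive game-tree search.

    ACTIONS = (1, 2, 3)
    TARGET = 10

    def step(total, action):
        """World model: applying `action` in state `total` yields the next state."""
        return total + action

    def value(total):
        """Value of `total` for the player about to move: +1 if they can force
        a win, -1 if the opponent can (plain negamax over the known model)."""
        if total >= TARGET:
            return -1  # the previous mover already reached the target and won
        return max(-value(step(total, a)) for a in ACTIONS)

    def plan(total):
        """Evaluate the impact of each action on the world and pick the best one."""
        return max(ACTIONS, key=lambda a: -value(step(total, a)))

    if __name__ == "__main__":
        state = 0
        while state < TARGET:
            action = plan(state)
            state = step(state, action)
            print(f"agent adds {action}, total is now {state}")

AlphaGo/AlphaZero swap the exhaustive recursion for Monte Carlo tree search guided by learned policy/value networks, but the skeleton - simulate with a model, score, pick - is the same. Extending it to the real world mostly fails because nobody hands you step() and value() there.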
> You appear to assume that current "AI" is able to "understand" and "think". What makes you so sure?
A flippant way to answer the question might be to ask, "What makes you so sure you're not arguing with an AI right now?"
A less-flippant answer is simply that we would need to agree on what understanding and thinking mean before we could debate whether AI is doing it now or can potentially do it in the future. I doubt we'll come to agreeable terms there, but I'd start by suggesting a simplistic hierarchy of cognition:
0) Memory: the ability to store information. Trivial.
1) Knowledge: the ability to retrieve that information associatively.
By that I'd imply a requirement that knowledge must be retrievable by indexing at a conceptual level. Knowledge isn't "I have four bananas and the third is ripe"; that's memory. Knowledge is "Bananas are edible fruits." It's more or less unquestionable that this is satisfied by embedding memorized items in a high-dimensional vector space where conceptual similarity corresponds to distance measured by some norm or another. (A toy sketch of this kind of retrieval follows the list.)
2) Understanding: the ability to cast new knowledge in terms of existing knowledge. This is where the increasingly-popular compression-as-intelligence metaphors start to come into play. Anytime you use an embedding scheme to compress data, you are exhibiting understanding. When Stable Diffusion encodes information about entire works of art with one or two bits of actual information, which it can later use to synthesize new works, that's "understanding" IMO.
3) Reasoning: I'm fine with the usual definitions of inductive reasoning (inferring general patterns from specific cases) and deductive reasoning (deriving specific conclusions from general understanding). AI models perform induction whenever they recognize and encode patterns, and they perform deduction anytime they solve a specific problem using those patterns. At this point the burden of proof falls on anyone who claims these processes aren't happening. (A second sketch below illustrates the induction/deduction loop.)
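To make items 1 and 2 a bit more concrete, here's a toy sketch of that kind of associative retrieval. The sentences and 3-d vectors are invented for illustration (a real system would get high-dimensional embeddings from a trained encoder), but the mechanism is the one described above: nearest-neighbour lookup in a space where distance tracks conceptual similarity - and the embedding itself is a lossy compression of the sentence, which is item 2's point.

    # Toy associative retrieval (items 1-2 above). The vectors are made up;
    # a real system would use embeddings from a trained encoder.
    import numpy as np

    knowledge = {
        "Bananas are edible fruits.":         np.array([1.0, 0.0, 0.1]),
        "Granite is a hard igneous rock.":    np.array([0.0, 1.0, 0.1]),
        "Chess is a two-player board game.":  np.array([0.1, 0.1, 1.0]),
    }

    def embed(query):
        # Stand-in for a real encoder: maps a query into the same toy space.
        return {
            "What fruit can I eat?": np.array([0.9, 0.1, 0.2]),
            "Tell me about stone.":  np.array([0.1, 0.9, 0.2]),
        }[query]

    def recall(query):
        """Return the stored fact whose vector is closest by cosine similarity."""
        q = embed(query)
        cos = lambda v: float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v))
        return max(knowledge, key=lambda fact: cos(knowledge[fact]))

    print(recall("What fruit can I eat?"))  # -> Bananas are edible fruits.
    print(recall("Tell me about stone."))   # -> Granite is a hard igneous rock.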
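And for item 3, a deliberately tiny induction/deduction loop - the example (temperature conversion) is mine and the linear fit is trivial on purpose; what matters is the shape of the process: generalize a rule from specific cases, then derive a specific answer from the general rule.

    # Toy induction/deduction (item 3 above).
    import numpy as np

    # Specific observations: (fahrenheit, celsius) pairs.
    examples = [(32.0, 0.0), (212.0, 100.0), (98.6, 37.0), (50.0, 10.0)]
    f = np.array([x for x, _ in examples])
    c = np.array([y for _, y in examples])

    # Induction: infer a general rule, celsius ~= a * fahrenheit + b, from the cases.
    a, b = np.polyfit(f, c, deg=1)
    print(f"induced rule: C = {a:.3f} * F + {b:.2f}")   # ~0.556 and ~-17.78

    # Deduction: apply the general rule to a new specific case.
    print(f"68 F is about {a * 68.0 + b:.1f} C")        # ~20.0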
So that leaves "thinking", which is a topic in philosophy rather than neuroscience or compsci, and forms of reasoning that go beyond the simplistic inductive and deductive models. Hypothesis formation is interesting, for instance. That's one area where someone using a GPT 3.x model might walk away with their negative preconceptions regarding AI being fully validated, as you did, while someone using GPT 4 might walk away with a furrowed brow and a dazed look. Maybe, once the previous bases are covered, Occam's Razor is all you need to perform abductive reasoning. If not, what else?
These are all interesting points of debate that go much deeper than "LOL, it can't even construct proofs using Clifford algebra." If you're satisfied with that, then again, there's not much room for the conversation to progress.