Many people seem to be claiming that "LLMs do what humans do / humans also hallucinate", as if the way humans come to know things were identical to the purely semantic knowledge of LLMs.
No. Human beings have experiential, embodied, temporal knowledge of the world through our senses. That is why we can, say, empirically know something, which is vastly different from semantically or logically knowing something. Yes, human beings also have probabilistic ways of understanding the world and interacting with others. But we have many other forms of knowledge as well, and the LLM way of interpreting data is by no means the primary way in which we come to feel confident that something is true or false.
That said, I don't get up in arms about the term "hallucination", although I prefer the term "confabulation", per neuroscientist Anil Seth. Many clunky metaphors are now mainstream, and as long as the engineers and researchers who study these things are ok with the term, that's what matters most.
But what I think all the people who dismiss objections to the term as "arguing semantics" are missing is the fundamental point: LLMs have no intent, and they have no way of distinguishing which data is empirically true and which is not. This is why the framing, not just the semantics, of this piece is flawed. "Hallucination" is a feature of LLMs that exists at the conceptual level, not a design flaw of current models. They have pattern recognition, which gets us very far in terms of knowing things, but people who rely only on such methods of knowing are most often referred to as conspiracy theorists.
On the one hand, this is true. On the other hand, it does seem possible, in theory, that with enough post-training and other measures they could move a bit closer to human minds rather than remaining mere token guessers.
The human brain may, at its most fundamental level, operate on the principles of predictive processing (https://slatestarcodex.com/2017/09/05/book-review-surfing-un...). It might be that the many layers surrounding that raw predictive core are what develop us into epistemological beings. The LLMs we see today may be in the very early stages of a similar sort of (artificial) evolution.
The reflexiveness with which even top models like Opus 4.5 will sometimes seamlessly confabulate things does make it seem like a very deep problem, but I don't think it's necessarily unsolvable. I used to be among the vast majority of people who thought LLMs were not sufficient to get us to AGI/ASI, but I'm increasingly starting to feel that piling enough hacks atop LLMs might really be what gets us there before anything else.