Sure, but so do LLMs. They have a huge subconscious (the model weights themselves).
Recording every conversation a single person has ever had, every book or text or site they've ever read, everything they've ever seen, is not a huge amount of data. Microsoft attempted this with a wearable camera on a lanyard (the SenseCam project), but they were too early.
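For a rough sense of scale, here's a back-of-envelope estimate of a lifetime of text; all the daily rates are assumptions for illustration, not measurements:

```python
# Back-of-envelope estimate of one person's lifetime text data.
# All rates below are assumptions for illustration, not measurements.

WORDS_SPOKEN_PER_DAY = 16_000   # assumed average spoken words per day
WORDS_READ_PER_DAY = 30_000     # assumed average read words per day
BYTES_PER_WORD = 6              # ~5 chars plus a space, UTF-8 English text
LIFETIME_YEARS = 80

days = LIFETIME_YEARS * 365
total_words = (WORDS_SPOKEN_PER_DAY + WORDS_READ_PER_DAY) * days
total_bytes = total_words * BYTES_PER_WORD

print(f"{total_words:,} words ~= {total_bytes / 1e9:.1f} GB")
# -> roughly 1.3 billion words, about 8 GB of raw text
```

Even with generous assumptions, raw lifetime text lands in the single-digit gigabytes; video is what blows the budget up, not language.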
Yeah, but those models are all trained on explicit data. I'm saying humans have prior wiring that lets them extract and retain context that LLMs have no access to.
So the suggestion here is that RAG, tools, LLM memory, fine-tuning, context management, etc. are not enough to take advantage of all this context? Is there any evidence that these techniques aren't on a trajectory to be optimized enough to do the job?
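For what it's worth, the "take advantage of all this context" plumbing is mechanically simple. Here's a minimal sketch of the retrieval step in RAG, with a toy word-overlap scorer standing in for embeddings; the memory snippets and names are made up for illustration:

```python
# Minimal sketch of the retrieval step in RAG: given a question,
# pull the most relevant stored "memories" and prepend them to the prompt.
# Toy bag-of-words overlap scoring stands in for real embedding similarity.
from collections import Counter

memories = [
    "2019-06-03: Talked with Dana about moving to Lisbon.",
    "2021-11-12: Read 'The Mythical Man-Month', chapter on conceptual integrity.",
    "2023-02-27: Meeting notes: team agreed to adopt RAG for support tickets.",
]

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = tokenize(query)
    # Rank docs by word overlap with the query; a real system would use
    # embedding vectors and a vector index instead of this toy scorer.
    scored = sorted(docs, key=lambda d: sum((tokenize(d) & q).values()), reverse=True)
    return scored[:k]

question = "What did we decide about RAG for support tickets?"
context = "\n".join(retrieve(question, memories))
prompt = f"Context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The open question is whether retrieval plus a big context window substitutes for that human "prior wiring", not whether the plumbing can be built.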