
Once all the context that a typical human engineer has to "build software" is available to the LLM, I'm not so sure that this statement will hold true.




But it's becoming increasingly clear that LLMs based on the transformer model will never be able to scale their context much further than the current frontier, due mainly to context rot. Taking advantage of greater context will require architectural breakthroughs.

Will it though? The human mind can hold less context at any one time than even a mediocre LLM. The problem isn't architecture. It's capturing context. Most of it is in a bunch of people's heads and encoded in the physical world. Once it's digitized and accessible through search, RAG, or whatever, the LLM will be able to use it effectively.
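
To be concrete, this is roughly what "accessible through search or RAG" means. A minimal sketch in Python, with a toy bag-of-words stand-in for a real embedding model; the corpus, embed, and retrieve names are all made up for illustration:

    import math
    from collections import Counter

    def embed(text):
        # Toy bag-of-words "embedding" so the sketch runs standalone;
        # a real system would call an embedding model here.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # The "digitized context": facts that would otherwise live in
    # people's heads or be encoded in the physical world.
    corpus = [
        "The billing service retries failed charges three times.",
        "Deploys to prod require a green canary for 30 minutes.",
        "The legacy importer chokes on files larger than 2 GB.",
    ]
    index = [(doc, embed(doc)) for doc in corpus]

    def retrieve(query, k=2):
        qv = embed(query)
        ranked = sorted(index, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    # Retrieved chunks get prepended to the prompt, so the model never
    # has to hold the whole corpus in its context window at once.
    print(retrieve("how long does a prod deploy canary run"))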

Humans hold a lot of implicit context, I think far beyond any LLM. Context is not just what you are consciously thinking about in your head.

Sure, but so do LLMs. They have a huge subconscious: the model weights themselves.

Recording every conversation a single person has ever had, every book or text or site they have ever read, everything they have ever seen, is not a huge amount of data. Microsoft attempted this with a wearable camera on a lanyard (SenseCam), but they were too early.
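
For scale, a back-of-envelope sketch. Every constant here is a rough assumption (~150 spoken words a minute, 16 waking hours, ~6 bytes per word), and it only counts text, not audio or video:

    # Rough lifetime volume of *text* a person hears, says, and reads.
    WORDS_PER_MIN = 150   # conversational speech rate (assumed)
    WAKING_HOURS = 16
    BYTES_PER_WORD = 6    # ~5 characters plus a space
    YEARS = 70

    words_per_day = WORDS_PER_MIN * 60 * WAKING_HOURS  # ~144k words
    bytes_per_day = words_per_day * BYTES_PER_WORD     # ~0.9 MB
    lifetime_gb = bytes_per_day * 365 * YEARS / 1e9

    print(f"~{lifetime_gb:.0f} GB of text over {YEARS} years")  # ~22 GB

Even padding that by an order of magnitude, it fits on a commodity SSD.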


Yeah, but the models are all based on explicit data. I'm saying humans have prior wiring that allows them to extract and keep context that LLMs do not have access to.

So the suggestion here is that RAG, tools, LLM memory, fine-tuning, context management, etc. are not enough to take advantage of all this context? Is there any evidence that these things aren't on a trajectory to be optimized enough to do the job?
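
For what "context management" can mean in practice, here is a minimal sketch; the word-count tokenizer and the fit_to_budget function are stand-ins for illustration, not any real library's API:

    # One simple tactic: keep the system prompt plus the most recent
    # turns under a token budget, dropping (or, in a real system,
    # summarizing) older turns.

    def fit_to_budget(system_prompt, turns, budget_tokens=1000):
        def tokens(text):
            return len(text.split())  # crude stand-in for a real tokenizer

        kept, used = [], tokens(system_prompt)
        for turn in reversed(turns):  # walk newest-first
            cost = tokens(turn)
            if used + cost > budget_tokens:
                break
            kept.append(turn)
            used += cost
        return [system_prompt] + list(reversed(kept))

    history = [f"turn {i}: " + "word " * 50 for i in range(100)]
    print(len(fit_to_budget("You are a coding assistant.", history)))  # 20

Whether tactics like this, plus retrieval and memory, actually compose into "all the context a human engineer has" is exactly the open question.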


