
Insightful comment, but you are assuming that the capabilities of this type of technology will remain static. Maybe that was also a premise of the article.

But we know that these systems will not stop improving. Just in recent weeks, scientific papers have documented LLMs with improved capabilities, alongside clear reports of more capable models in the commercial pipeline, and papers describing major upgrades to the models, such as adding a visual modality to the data.

But beyond the last few weeks, there is an exponential trend in the capabilities of the hardware and systems, and a clear track record of new paradigms being created whenever a roadblock appears.

In the next several years we will see multimodal large models with much better world understanding grounded in spatial/visual information; cognitive architectures that can continuously work on a problem with much better context, connected to external systems that automatically debug or interact with user interfaces in a feedback loop; and entirely new compute platforms, such as memristors, that increase efficiency and model size by several orders of magnitude. All of this is well under way.


