Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I still see little conversation about the two fundamental limitations of LLMs right now: context size, and prompt injection.

* Computation does not scale linearly with context size, meaning the ‘memory’ of LLMs is limited and gets more expensive as it gets bigger.

* Prompt injection limits the usability of LLMs in the real world. How can you put an LLM in the driving seat if malicious actors can talk it into doing something it’s not supposed to.

Whenever I see a blog post by Anthropic or OpenAI I do a Ctrl+F for “prompt injection.” Never mentioned. They want people to forget this is a problem — because it’s a massive one.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: