Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

My assumption is that people absolutely did, and do, write like that all the time. Just not necessarily in places that you'd normally read. LLM drags up idioms from all over its training set and spews them back everywhere else, without contextual awareness. (That also means it averages across global cultures by default.)

But also, over the last three years people have been using AI to output their own slop, and that slop has made its way back into the training data for later iterations of the technology.

And then there's the recent revelation (https://www.anthropic.com/research/small-samples-poison , which I got from HN) that it might not actually take a whole lot of examples in the data for an LLM to latch onto some pattern hard.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: