I think it is a combination of issues (from most unlikely to most likely):

- They might not have been prepared for the growth;

- They were prepared, but decided to focus on the 95% of "regular people with easy questions" who treat it as a curiosity, rather than the fewer people with difficult questions. Most regular people have no idea what OpenAI is, what an LLM is, or how a GPT works, but they know "ChatGPT, the brand". Since it became a household name so quickly, it is far better for the AI to be a little underwhelming sometimes than for it to be unable to serve so many people.

- The model was trained on a staggering amount of content, including fringe and unmoderated material. Imagine you asked a question about WW2 and, having been trained on, let's say, 4chan, the model responded with a very charitable bias about the solid reasoning behind the Reich's actions at the time... That does not look good for investors or the company, and it attracts scrutiny. Even more innocuous topics are enough to invite all kinds of bad-faith debate, radical criticism, and whatnot... and the darling of the "AI revolution" certainly does not need (or want) coverage outside of its roadmap wishes.
