
Once the LLM has made one mistake, it's often best to start a new context.

Since its mechanism is to predict the next token of the conversation, once it has made one mistake it's quite likely to "predict" itself making more.
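
In practice that means discarding the polluted history rather than appending corrections to it. A minimal sketch of the idea, assuming a hypothetical `chat(messages)` wrapper around whatever completion API you're using (not any particular vendor's client):

    # Sketch: "start a new context after a mistake".
    # `chat(messages)` is a hypothetical stand-in for your LLM call;
    # swap in the client you actually use.

    def chat(messages: list[dict]) -> str:
        raise NotImplementedError("call your LLM provider here")

    SYSTEM = {"role": "system", "content": "You are a careful coding assistant."}

    def ask_fresh(task: str) -> str:
        # Fresh context: only the system prompt and the task. There are no
        # earlier turns for the model to condition on, so a previous bad
        # answer can't be "continued" into further bad answers.
        messages = [SYSTEM, {"role": "user", "content": task}]
        return chat(messages)

    def retry_after_mistake(task: str, feedback: str) -> str:
        # Instead of appending "that was wrong, try again" to the old
        # conversation, restate the task (plus any useful feedback) in a
        # brand-new context.
        fresh_task = f"{task}\n\nA previous attempt failed because: {feedback}"
        return ask_fresh(fresh_task)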

I'm not sure this is still the case with Codex. In this instance, restarting had no strong effect.


