
It's even more interesting if you consider that for Claude, making it more verbose and having it "think" about its answer improves the output. I imagine something similar happens with GPT, but I've never tested it.
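
To make that concrete, here's a rough sketch of the kind of prompt I mean, using the Anthropic Python SDK (the model name and prompt wording are just placeholders, not anything official):

    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

    # Ask the model to reason out loud before answering -- the "think" step.
    response = client.messages.create(
        model="claude-3-opus-20240229",  # example model name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Before you answer, think through the problem step by step, "
                "then give your final answer on its own line.\n\n"
                "Question: why does this recursive function overflow the stack?"
            ),
        }],
    )
    print(response.content[0].text)

The extra verbosity costs tokens, but in my experience the step-by-step reasoning is where the quality improvement comes from.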


I have been wondering whether, now that context windows are larger, letting it "think" more will produce higher-quality results.

The big problem I had earlier on, especially in code-related chats, was that it would print out all the source code in every message and almost instantly forget what the original topic was.



