
The problem with current LLMs is that they are so sycophantic that they don't tell you when you're asking the wrong questions. I've been down this path: I researched a topic with an LLM and came to a conclusion. When I later revisited the subject by reading a longer-form authoritative source instead, it became clear that "interrogating" the matter question by question had caused me to miss important information.


Which model have you been using, and through which interface? Try GPT-5 through the API, without a harness. Whatever model you use, the key phrase to append is: "Why or why not? If so, explain why. If not, explain why not."
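In case it's useful, here's a minimal sketch of what I mean, assuming the OpenAI Python SDK; the model id "gpt-5" and the sample question are placeholders, swap in whatever you actually have access to:

    # Minimal sketch: query the model directly over the API, no harness.
    # Assumes the OpenAI Python SDK; "gpt-5" is a placeholder model id.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "Is X the right approach for Y?"  # hypothetical question
    suffix = "Why or why not? If so, explain why. If not, explain why not."

    resp = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": f"{question} {suffix}"}],
    )
    print(resp.choices[0].message.content)

The point of the suffix is that it forces the model to commit to a justification either way, rather than just agreeing with the framing of your question.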

Recognizing when you're asking the wrong questions is also a skill that the curious, such as yourself, will develop in no time through experience. Are you saying this experience didn't give you a shred of intuition about how to do so in the future?



