
I am impressed with LLMs but I think their inability to produce an honest "I don't know" instead of hallucinating is an issue.


True, but the problem is that they never know. They don't interact with the real world and have no way of verifying their training data's accuracy. Perhaps they could assign a confidence level to each response? But then, if they attach high confidence to an incorrect answer, it compounds the failure: not only were they wrong, but they vouched for their wrong answer.
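
For example, one naive proxy (just a sketch; the log-probabilities and threshold below are made up, not real model output) would be to average the per-token log-probabilities the model assigned to its own answer and use the geometric mean as a confidence score:

    import math
    from typing import List

    def confidence_from_logprobs(token_logprobs: List[float]) -> float:
        """Geometric mean of per-token probabilities as a naive confidence proxy.
        token_logprobs are the natural-log probabilities the model assigned to
        each token it actually generated (many inference APIs can expose these).
        """
        if not token_logprobs:
            return 0.0
        avg_logprob = sum(token_logprobs) / len(token_logprobs)
        return math.exp(avg_logprob)  # value in (0, 1]

    DONT_KNOW_THRESHOLD = 0.6  # arbitrary cutoff for this sketch

    logprobs = [-0.05, -0.2, -1.3, -0.01]  # example values only
    score = confidence_from_logprobs(logprobs)
    if score < DONT_KNOW_THRESHOLD:
        print(f"I don't know (confidence {score:.2f})")
    else:
        print(f"Answering (confidence {score:.2f})")

But that kind of token-level certainty mostly measures fluency, not truth, so a confidently wrong answer would still score high, which is exactly the failure mode above.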



