Show HN: Can GPT Be Trained for Truth? Exploring Hallucination Reduction (chat.openai.com)
4 points by samuelaidoo45 on May 23, 2024 | 2 comments
I've been experimenting with ways to encourage ChatGPT to generate more factual outputs. While we know it excels at creative text formats, factual accuracy can sometimes be...well, imaginative.

My approach involved using specific prompts to steer the model toward factual responses; a minimal sketch of the idea is below.
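Concretely, the steering looked something like the following sketch, written with the official OpenAI Python SDK. The system prompt wording, model name, and temperature here are illustrative choices, not a definitive recipe:

    # Minimal sketch of prompt steering for factuality using the OpenAI SDK.
    # The prompt text and model are assumptions for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    FACTUAL_SYSTEM_PROMPT = (
        "You are a careful assistant. Answer only with facts you are "
        "confident about. If you are unsure or the answer is not well "
        "documented, say 'I don't know' instead of guessing."
    )

    def ask_factual(question: str) -> str:
        """Send a question with a factuality-oriented system prompt."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # assumed model; any chat model works
            temperature=0,         # low temperature to reduce variance
            messages=[
                {"role": "system", "content": FACTUAL_SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask_factual("Who founded Hacker News?"))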

I'm curious to hear from the HN community:

- Have you explored techniques for prompting factual responses in GPT models?

- Are there interesting applications for a "factually-focused" ChatGPT?

Let's discuss ways to push the factual accuracy of language models through creative prompting.



This does not work at all. I tried several questions and got many hallucinations. E.g., I asked who the founder of my company was and got wrong answers.

I have conducted similar experiments in the past and concluded that prompting does not reduce hallucinations.
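For anyone who wants to reproduce this, here's a toy version of the kind of comparison I mean: the same questions asked with and without a factuality prompt, scored against known answers. The question set, model name, and substring scoring are illustrative assumptions:

    # Toy A/B comparison: plain prompt vs. factuality prompt.
    # Question set, model, and substring scoring are assumptions.
    from openai import OpenAI

    client = OpenAI()

    QA = [
        ("Who founded Hacker News?", "Paul Graham"),
        ("What year was Y Combinator founded?", "2005"),
    ]

    def answer(question, system=None):
        messages = [{"role": "system", "content": system}] if system else []
        messages.append({"role": "user", "content": question})
        resp = client.chat.completions.create(
            model="gpt-4o-mini", temperature=0, messages=messages
        )
        return resp.choices[0].message.content

    for system in (None,
                   "Answer only if certain; otherwise say 'I don't know'."):
        correct = sum(expected.lower() in answer(q, system).lower()
                      for q, expected in QA)
        print(f"system={system!r}: {correct}/{len(QA)} correct")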


I asked who the founder of my company was, and it gave me an answer; I wasn't expecting it to be correct. I guess the hallucination problem chooses when to show up. I get where you're coming from: prompts alone often can't stop hallucinations. Improving training and data quality is key. Let's keep exploring other approaches.



