
> ChatGPT often makes up facts.

As opposed to... Google? Your doctor? My doctor?



Absolutely, as opposed to those things. With Google, if you use a reliable source like Mayo, NIH, or even WebMD, it is clearly more likely to have accurate information than something that proves even numbers are prime. Certainly all those things can be inaccurate, but where in the world do you think ChatGPT pattern-matches its information from?


Exactly. ChatGPT is clearly very impressive and useful, but nothing from its output should be treated as valid or factual to any degree.

Information generated by humans will include things like transpositional errors, logical errors, popular misconceptions, and misinterpretations of data. Mistakes happen, but human mistakes are at least tethered to real thoughts/information.

On the other hand, AI will happily spin up a complete fabrication with zero basis in reality, give you as much detail as you ask for, and dress it all up in competent and authoritative-sounding prose. It will have all the style of a textbook answer, while the substance will be pure nonsense.

Still a great tool, but only with the caveat that you approach it with the mindset that it is actively trying to catch you off guard and deceive you.


> AI will happily spin up a complete fabrication with zero basis in reality, give you as much detail as you ask for, and dress it all up in competent and authoritative-sounding prose.

Sure. What makes you think a human won't?


I didn't say a human wouldn't. I said a human wouldn't typically do it by mistake.


And how hard would it be for ChatGPT to be retrained on peer-reviewed medical journals? ChatMD-GPT, if you will.


The majority of articles in peer-reviewed medical journals are also false.

https://doi.org/10.1371/journal.pmed.1004085

You can't take such articles seriously unless they have been independently reproduced multiple times. So, your hypothetical "ChatMD-GPT" would also have to filter on that basis and perhaps calculate some sort of confidence level.
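
To make that concrete, here's a minimal sketch of what such replication-based filtering might look like. Everything in it is an assumption for illustration: the field names (doi, independent_replications, sample_size), the saturating score r/(r+2), and the 0.5 cutoff are made up, not a real dataset schema or a published method.

    # Hypothetical replication filter for a "ChatMD-GPT" training corpus.
    # Field names and scoring are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class Article:
        doi: str
        independent_replications: int
        sample_size: int

    def confidence(article: Article) -> float:
        # Crude confidence score: 0 for unreplicated findings,
        # approaching 1.0 as independent replications accumulate.
        r = article.independent_replications
        return r / (r + 2)  # 0 reps -> 0.0, 2 reps -> 0.5, 8 reps -> 0.8

    def training_corpus(articles, min_confidence=0.5):
        # Keep only findings replicated enough to cross the threshold.
        return [a for a in articles if confidence(a) >= min_confidence]

    articles = [
        Article("10.xxxx/a", independent_replications=0, sample_size=40),
        Article("10.xxxx/b", independent_replications=3, sample_size=500),
    ]
    print([a.doi for a in training_corpus(articles)])  # ['10.xxxx/b']

Even a toy filter like this shows the hard part: somebody has to curate the replication data, which is exactly the work the current training pipeline skips.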


And it has likely already been trained on correct information, and yet it produces bad results. It has certainly been trained on data that explains what prime numbers are, and yet it produces what it produces, whereas using Google and hitting a credible source directly is more accurate and efficient.



