
The problem is:

1. The machines won't "break"; at best you slightly increase how often they answer with incorrect information.

2. People are starting to rely on that information, so once it's "transformed", your harmless chemical is now potentially a poison.

Knowing this is possible, it (again, to me) becomes highly unethical.



The onus to produce correct information is on the LLM producer. Even if the information isn't poisoned, it may still be wrong. The fact that LLM producers are releasing a product that outputs unverified information is not a blogger's fault.
