Hacker News

In both of your examples I could see the model becoming the default, with humans double-checking, if even that. When things go wrong as you describe, humans get pulled back into the loop by the people who were wronged (unless those people give up first). There is already a company making money on automated auto insurance claims: https://tractable.ai/en/products


