
True. There’s a small bonus in that trying to explain the issue to the LLM can essentially be rubber ducking, and that can lead to insights. Even when the LLM gives erroneous output, it might still trigger thinking in a different direction, though sometimes I suspect I’m inclined to think it’s helping me more than it actually is.

