The irony is that the disclosure "I asked ChatGPT and it says…" is offered as a courtesy, to keep the reader informed. Given the increasing backlash against that disclosure, people will just stop disclosing, which is worse for everyone.
The only workaround is to just take text as-is and call it out when it's wrong/bad, AI-generated or otherwise, as we did before 2023.
I think it's fine not to disclose it. Like, don't you find the "Sent from my iPhone" signature that iPhones automatically add to emails annoying? Technicalities like that don't bring anything to the conversation.
I think that, typically, the reason people disclose their use of LLMs is that they want to offload responsibility. To me it's important to see them take responsibility for their words. You wouldn't blame Google for bad search results, would you? You can only blame the entity that you can actually influence.
That’s true. Unfortunately, the ideal takeaway from that sentiment would be “don’t reply with copy-pasted LLM answers”, but I know that what you’re saying will happen instead.
Exactly. It is still important and courteous to cite your sources and tools.
I find a good workaround is to just say "some very quick research of my own leads me to ...", and then summarize what ChatGPT said. Especially if you are using, e.g., an LLM with search enabled, this is borderline literally true, but it makes it clear you aren't stating something entirely on your own.
Of course, you should still actually verify the outputs. If you do, there is not much wrong with not mentioning the LLM, since you've done the most important thing anyway (not being lazy in your response). If you don't verify, you had better say so.
Except it isn't. It's a disclosure to say "If I'm wrong, it's not my fault".
Because if they'd actually read the output, then cross-checked it and developed some confidence in the opinion, they wouldn't put what they perceive as the most important part up front ("I used ChatGPT"); they'd put the conclusion.
It isn't that cut and dried. You can cross-check and verify but still have blind spots (or know that the tools have biases of their own), and so consider it important to mention the LLM use up front.
Or, if you preface a comment with "I am not an expert, but...", it is often not about seeking to avoid all blame, but simply about giving the reader reasonable context.
Of course, you are right, it is also sometimes just lazy dodging.