As with many other things (em dashes, emojis, bullet lists, it's-not-x-it's-y constructs, triple adjectives, etc.), seeing any one of them isn't a tell. Seeing all of them, or many of them in a single piece of content, is probably the tell.
When you use these tools you get a knack for what they do in "vanilla" situations. If you're doing a quick prompt with no guidance, no context, and no specifics, you'll get the type of answer that checks many of the "smells" above. Getting the same thing over and over gets you to a point where you can "spot" this pretty effectively.
The author did not do this. The author thought it was wonderful, read the entire thing, then, on a lark (something "twigged," in their words), checked out the edit history. They took the lack of one as instant confirmation ("So it’s definitely AI.")
The rest of the blog is just random subjective morality wank with implications of larger implications, constructed by borrowing the central points of a series of popular articles in their entirety and adding recently popular clichés ("why should I bother reading it if you couldn't bother to write it?")
No other explanation is offered of why this was a bad document, or of this particular event at all, but there's lots of self-debate about how we should detect, deal with, and feel about bad documents. All documents written by an LLM are assumed to be bad, and no discussion is attempted about degrees of LLM assistance.
If I used AI to write some long detailed plan, I'd end up going back and forth with it and having it remove, rewrite, rethink, and refactor multiple times. It would have an edit history, because I'd have to hold on to old drafts in case my suggested improvements turned out not to be improvements.
The weirdest thing about the article is that it's about the burden of "verification," but it thinks that what people should be verifying is that LLMs had no part in what they've received. The discussions I've had about "verification" when it comes to LLMs are about verifying that the content is not buggy garbage filled with inhuman mistakes. I don't care if it's LLM-created or assisted, other than that a lot of people aren't reading and debugging their LLM code, and LLMs are dumb. I'm not hunting for em-dashes.
-----
edit: my 2¢; if you use LLMs to write something, you basically found it. If you send it to me, I want to read your review of it, i.e. where you think it might have problems and why you think it would help me. I also want to hear about your process for determining those things.
People are confusing problems with low-effort contributors with problems with LLMs. The problem with low-effort contributors is that what they did with the LLM was low-effort and isn't saving you any work. You can also spend 5 minutes with the LLM. If you get some good LLM output that you think is worth showing to me, and you think it would take significant effort for me to get it myself, give me the prompts. That's the work you did, and there's nothing wrong with being proud of it.
You may be missing the point. The author’s feelings about the plan he was sent were predicated on an assumption that he thought was safe: that his co-worker had written the document he claimed to have “put together.”
If you order a meal at a restaurant and later discover that the chicken you ate was recycled from another diner’s table (waste not want not!) you would likely be outraged. It doesn’t matter if it tasted good.
As soon as you tell me you used AI to produce something, you force me to review it carefully, unless your reputation for excellent review of your own work is well established. Which it probably isn’t, because you are the kind of guy who uses AI to do his work.
Or the tell that the guy who usually writes fairly succinctly suddenly dumps five thousand words with all of the details that most people wouldn't bother to write down.
It would be interesting to see the history where the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document. Using AI isn't so much the problem as trusting it blindly.
Dumping the entire document into Google Docs and then applying edits and corrections top to bottom is exactly my normal workflow. I do my writing in vim, paste it into Google Docs, and then do a final editing pass while fixing the formatting.
> the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document
This also happens if one first writes in an editor without spellchecking, then pastes into the Google Doc (or HN text box) that does have spellchecking.
I have seen a number of write-ups where I think the only logical explanation is that they are not conveying what literally happened but spinning a narrative to express their point.
There was an article the other day where the writer said something along the lines of it suddenly occurring to them that others might read content they had access to. They described themselves as a security researcher. I couldn't imagine a security researcher having that suddenly occur to them; I would think it's a consideration continually present in how they think about data. I am not a security researcher, and it's certainly something I'm fairly constantly aware of.
Similarly, I'm not convinced the "shouldn't this plan be better" question is in good faith either. Perhaps it just betrays a fundamental misunderstanding of the operation being performed by a model, but my intuition is that they never expected it to be very good and are feigning surprise that it is not.
A world full of AI-generated content and a world where we trust what we see seem to be mutually exclusive. I expect the default for a number of people going forward (myself included) will be extreme suspicion when presented with new images/videos/documents.
It probably did, but they didn't feel the need to fully explain why they were confident it was AI generated, since that's not the point of the article.