This is not a standalone article but a section from Butterick's book, "Typography for Lawyers", which is hosted in full on the website. The book is an opinionated style manual, and many alternatives are described in nearby sections.
Once I realized that some people expect and are happy for you to jump in with unprompted thoughts or stories, it became easier for me to be intentional about doing so.
I think I'm a lot better now than when I was younger at adapting to a wide range of conversational styles, mostly just from paying more attention to that dynamic.
Do you feel like your conversational toolbox has evolved over time? :)
Ha, yes, a bit! Not interrupting or talking over people was drilled into me in childhood, but exposure to different family dynamics taught me that it's not a universal value, and that I can adapt my communication style for different groups and situations.
It's still a bit of a struggle to push myself to "speak out of turn" and make sure my voice is included in a discussion.
My objection to AI comments is not that they are AI per se, but that they are noise. If people are sneaky enough to start making valuable AI comments, well, that's great.
I think people are just presuming that others are regurgitating AI pablum regardless.
People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (a parallel to the equally boorish lmgtfy nonsense), not far behind is the endless "this sounds like AI" cynical jeering. People need to display how world-weary and jaded they are, expressing their discontent with the rise of AI.
And yes, I used an em dash above. I've always been a heavy user of that punctuation mark (being scatter-brained, with lots of parenthetical asides and little ability to self-edit), but suddenly it makes my comments look bot-like and AI-suspect.
I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much worse at sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into believing they've got a unique gift, when really they're no better than a roll of the dice.
Thing is, the comments that sound AI-generated but aren't have about as much value as the ones that really are.
Tbh, the comments in question shouldn't be completely banned. As someone else said, they have a place, for example when comparing LLM output or showing how different prompts produce different hallucinations.
But most of them are just reputation chasing: posting a summary of something that is usually below the level of HN discussion.
>the comments that sound "AI" generated but aren't have about as much value as the ones that really are
When "sounds AI-generated" is in the eye of the beholder, this is an utterly worthless distinction. It's actually rather ironic given that I just pointed out how hilariously bad people are at determining whether something is AI-generated; at this point, people making such declarations are usually announcing their own ignorance, or, alternatively, pathetically trying to prejudice other readers.
People now simply declare opinions they disagree with to be "AI", in the same way that people think those with contrary positions can't possibly be real and must be bots, NPCs, shills, and so on. It's all incredibly boring.
I mean verbose for no good reason, not contributing meaningfully to the discussion in any way.
Just like those Stack Overflow answers (before "AI") that arrived within 30 seconds on any question and regurgitated, in a "helpful"-sounding way, whatever tutorial the poster could find first that looked even remotely related to the question.
"Content" where the goal is to trick someone into an upvote instead of actually caring about the discussion.
Because the risk is lower. It will give you suspicious citations, and you can manually check those for false positives. Even if some false citations slip through, it's still a net gain.
But this isn't an ambiguous area of law. The statute is pretty clear here: the EB1-A criteria are necessary but not sufficient. That's what step 1 (necessary) and step 2 (sufficient) boil down to. You can litigate over what qualifies as necessary if the agency is doing something weird, but ultimately it is a subjective evaluation. The court isn't going to adjudicate on the merits; USCIS is.
But it is quite interesting, and learning about the security problems of the document() function in particular (described at 19:40-25:38) made me more convinced that removing XSLT is a good decision.
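For readers who haven't watched the talk: the core issue with document() is that it lets a stylesheet fetch arbitrary external resources at transform time. A minimal sketch of the shape of the problem (the file path and element name here are illustrative, not taken from the talk, and a real attack typically needs the target to parse as XML or relies on error-based tricks; even a failed parse can still act as an SSRF probe if the processor doesn't restrict access):

```
<!-- Illustrative stylesheet: document() can pull in file:// or http://
     URIs during the transform, so running untrusted XSLT can become a
     local-file-read or SSRF vector. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <!-- hypothetical target URI for illustration only -->
    <leaked>
      <xsl:copy-of select="document('file:///some/local/config.xml')"/>
    </leaked>
  </xsl:template>
</xsl:stylesheet>
```

This is why XSLT processors that are hardened for untrusted input (lxml's XSLTAccessControl, for example) offer switches to disable document() resolution entirely.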