
It’s incredibly annoying to read. So many super short sentences with the “not just X. Also Y” format. Little hooks like “The attack vector?”

“Not fancy security tools. Not expensive antivirus software. Just asking my coding assistant…”

I actually feel like AI articles are becoming easier to spot. Maybe we’re all just collectively noticing the patterns.



I'm regularly asked by coworkers why I don't run my writing through AI tools to clean it up and instead spend time iterating over it, re-reading, perhaps with a basic spell checker and maybe a grammar check.

That's because, from what I've seen to date, it'd take away my voice. And my voice -- the style in which I write -- is my value. It's the same as with art... Yes, AI tools can produce passable art, but it feels soulless and generic and bland. It lacks a voice.


It also slopifies your work in a way that's immediately obvious. I can tell with high confidence when someone at work runs their email through ChatGPT, and it makes me think less of the person now that I have to waste time reading through an overly verbose email with very little substance when they could have just sent the prompt and saved us all the time.


Use an AI tool to summarize it. /sarcasm, kind of


I manage an employee from another country who speaks English as a second language. The way they learned English gives them a distinct speaking style that I personally find convincing, precise, and engaging. I started noticing their writing losing that voice, so I asked if they were using an LLM, and they were. It was a tough conversation because as a native English speaker I have it easy, so I tried to frame my side of the conversation as purely my personal observation that I could see the change in tone and missed the old one. They've modified their use of LLMs to restore their previous style, but I still wonder if I was out of line socially for saying anything. English is tough, and as a manager I have a level of authority that is there even when I think it isn't. I don't know what my point is, except that I'm glad you're keeping your voice.


As a non-native English speaker living in AU, I can offer my opinion in case it's helpful.

Of course I can't speak to the person you mentioned, but if you said what you did with respect and courtesy then they probably would've appreciated it. I know I would have. To me, there's no problem speaking about and approaching these issues, even laughing about cultural differences, as long as it's done with respect.

I once had a manager who told me that a certain client found the way I speak scary. When I asked why, it turned out they weren't expecting the directness in my manner of speaking. Which is strange to me, since we were discussing implementation and requirements, where directness and precision are critical; when they're not, well, that's how projects fail, in my opinion. On the other hand, there were times when speaking to sales people left me dizzy from all the spin. Several sentences later and I still had no idea if they had actually answered the question. I guess that client was expecting more of the latter. Extra strange, since that would've made them spend more money than they have to.

Now running my own business, I have clients that thank me for my directness. Those are the ones that have had it with sales people who think doing sales means agreeing to everything the client says, promising delivery of it all, and then just walking away, leaving the client with a bigger problem than the one they started with.


This was a super helpful perspective, thanks a million.


I often ask AI to give only grammar and spelling corrections, and then only as a change set I apply manually. In other words, the same functionality as every word processor since…y2k?


Why not just use one of those word processors, then? It seems like you'd expend less effort (unless there's an advantage of your approach that I'm missing), since the proof-reading systems built into a word processor have a built-in queue UI with integrated accept / reject functionality that won't randomly tweak other parts of the paragraph behind your back.


Far better at catching some types of mistakes. Word only has so many hardcoded rules beyond basic grammar. LLMs operate on semantics, and pick up on errors like "the sentence is grammatically correct, but uses an obviously wrong term, given the context".


That's not the kind of thing I'd trust to a language model: I'd expect it to persuade me to change something correct to something incorrect more often than it catches a genuine error. But ymmv, I suppose.


I have definitely seen Grammarly make suggestions that are actually wrong, but I think it's generally pretty ok, and it does seem to make fewer mistakes than I normally do.

Sometimes I use incorrect grammar on purpose for rhetorical purposes, but usually I want the obvious mistakes to be cleaned up. I don't listen to it for any of its stylistic changes.


I've had good results doing something similar. My spelling and grammar have always been a challenge and, even when I put the effort into checking something, I get blind to things like repeated words or phrases when I try to restructure sentences.

I sometimes also ask for justification of why I should change something which I hope, longer term, rubs off and helps me improve on my own.


Every time you let AI speak for you, it gets better at sounding like you — and you get worse at it.

That’s the trade: convenience for originality.

The more you outsource your thoughts, your words, your tone — the easier it becomes to forget how to do it yourself.

AI doesn’t steal your voice.

It just trains you to stop using it.

/a


I consider myself to be an above average writer and a great editor. I will just throw my random thoughts about something that happened at work at ChatGPT, ask it to keep digging deeper into my question, give it my opinion of what I should do, ask it for the "devil's advocate" and the "steel man" opinion, and then ask it to write a blog post [1].

I then edit it for tone, get rid of some of the obvious AI tells. Make some edits for voice, etc.

Then I throw it into another session of ChatGPT and ask whether it sounds "AI written". It will usually call out some things and give me "advice". I take the edits that sound like me.

Then I put the text through Grok and Gemini and ask them the same thing. I make more edits and keep going around until I am happy with it. By the time I'm done, it sounds like something I would write.

You can make AI generated prose have a "voice" with careful prompting, and I give it some of my own writing as samples.

Why don’t I just write it myself if I’m going through all that? It helps me get over writers block and helps me clarify my thoughts. My editing skills are better than my writing skills.

As I do it more and give it more writing samples, going from bland AI to my "voice" becomes a faster process.

[1] my blog is really not for marketing. I don’t link to it anywhere and I don’t even have my name attached to it. It’s more like a public journal.


> By the time I'm done, it sounds like something I would write.

As a writer myself, this sounds incredibly depressing to me. The way I get to something sounding like something I would write is to write it, which in turn is what makes me a writer.

What you’re doing sounds very productive for producing a text but it’s not something you’ve actually written.


Maybe he just wants to summarize things. I write in Spanish. Of course I won't let AI write this very post, even in my bad English. But there are things in my Obsidian written in Spanish by AI. They sound like nothing, and sometimes you need something to sound that way: informative, aseptic. But it is good to hear from you anyway, when some people think, or pretend to think, AI can write, let's say, fiction.


I am torn, as someone who is learning Spanish and should be at a strong A1 [1] by the end of the year, I would be horrified to think about posting something in a public forum based on my Spanish speaking ability.

On the other hand, I've had enough conversations with Spanish speakers in Florida, like at my barbershop and a local bar in a tourist area, who speak limited English, and I would much rather have real conversations between my broken Spanish and their broken English than listen to or read AI slop.

[1] according to this scale, I’m past A1 into A2.1 category now. But I still feel like I’m A1

https://berlitz-istanbul.com/en/spanish-levels/


I write to communicate with myself or other people. Just like I use AI to go from "I need to do $x" based on my ideas and designs to "I did $x". It's not about "art" or "passion". It's about a paycheck.


I don’t think it needs to be about art or passion. I just don’t think someone who relies entirely on AI generated text can accurately call themselves “a writer.”


I don’t call myself a writer. I call myself an employee who needs to exchange labor for money to support my addictions to food and shelter. I was writing and developing long before AI.


If you're a great editor, why do you let multiple LLMs edit for you?


When I'm writing something for work where I know the end goal - I don't. When I'm streaming random thoughts without any coherent end goal, for my blog or for my internal notes as a retrospective on something that happened at work, I will use it.

Just to repeat myself, my blog isn't for marketing. I don't have any advertising on it, I don't post a link to it anywhere, and I have no idea if anyone besides me has ever read it since I don't have any analytics. I don't have my name or contact information on it.


That sounds miserable; surely it's faster to just write it.


I don't buy that it can tell if something sounds AI. Multiple times I have given it direct AI slop writing and it could not tell it was AI written. As a matter of fact, it would insist it wasn't.

This flow sounds like what an intern did in PR reviews, and it made me want to throw something out a window. Please just use your own words. They are good words and much better words than you may think.


https://chatgpt.com/share/68f065f4-f5ac-8010-81c1-faf4218e5c...

https://chatgpt.com/share/68f0666a-2bf0-8010-9d35-2ac4bdc870...

This article was dated as being written in 2020

https://chatgpt.com/share/68f06775-c570-8010-af7b-29531a22fd...

Original article

https://www.yourmembership.com/blog/tips-effective-board-mee...

I can't share links from Gemini or Grok. But they both immediately flagged the first one as AI generated and the second as most likely human.

I didn't actually do anything here except tell ChatGPT to rewrite it in the form of an article I found from an old PDF, "97 Things a software engineer should know" from 2010, then ask Grok if it sounded AI generated (it did), ask Grok to rewrite it to remove telltale signs (it still kept the em dashes), and then copy it back to ChatGPT.

https://chatgpt.com/share/68f06cec-3a20-8010-8178-a69695db16...

With some human editing to make it sound less douchey, or better prompting, do you think you could tell?


Could I tell if the last one is AI? Absolutely. Throwing a few "damns" in there didn't convince me. And all the reworking you've done, while it makes it a little more passable, has made it arguably worse in quality. The point of the final article is so muddy. It has no central point and sprawls on and on about random nonsense.


I agree, and I said as much:

> With some human editing to make it sound less douchey, or better prompting, do you think you could tell?

In other words - I did no human editing and didn't even play with the prompt.

For instance, I would have definitely reworded this “a solid meeting isn’t just about not screwing up the logistics. It’s a snapshot of how your team actually operate”

The "it isn't just $x. It is $y" construction is something that AI loves to do.

The larger point is that AI is really good at detecting its own slop. Gemini is really good at detecting first pass AI slop from another LLM, and out of curiosity I put in a few other articles I knew were written before 2022 to see if it gave false positives.


But it isn't. Why use more word when few do trick? AI is god awful at sounding authentic, and surprise surprise, authenticity is easy to suss out.

AI has a voice in writing which you'd need to rewrite almost every word to remove, at which point, why use AI?


I agree. I use Grammarly for finding outright mistakes (spelling and the like, or a misplaced comma or something), but I don't listen to any of the suggestions for writing.

I feel like when I run my writing through Grammarly, it comes out mechanical and really homogeneous. It's not "bad" exactly, but it sort of lacks anything interesting about it.

I dunno. I'm hardly some master writer, but I think I'm ok at writing things that are interesting to read, and I feel Grammarly takes that away.


Your voice? The style in which you write? That's gold - no one can take that away from you. And honestly? You're brave for admitting that.


The thing is, ask it something right away and it'll use its own voice. Give it lots of data from your own writing, through examples and extrapolations on your speech patterns, and it will impersonate your voice more. It's like how it can impersonate Trump: it has lots of examples to pull from. You? It doesn't know you. LLMs need a large amount of input to give a really good output.


Then why even do it? I already have a language model trained on the corpus of everything I've ever written. It sits between my two ears.


It does it faster…


I suppose that if you don't find writing enjoyable then that is a good thing.


Use it or lose it.


I said almost exactly that to a coworker a few hours ago. My writing is me, it’s who I am. But I know that is not true for everyone, and in particular non-native speakers.

I just detest that AI writing style, especially for business writing. It’s the kind of writing that leaves the reader less informed for the effort.


It's also exactly the type of writing you see on LinkedIn (yuck), so this article really goes full circle!


FTR I sometimes use AI to make my writing more "professional" because I rite narsty like

I've recently had to say "My CV has been cleaned up with AI, but there are no hallucinations/misrepresentations within it"


Hm, why do you have to say that? A CV is expected to be super polished and not necessarily consistent with the rest of your writing, right?


If I were asked a direct question, especially in a job interview, I would be truthful. That answer stops any sniping about using AI and lets me focus on my skills.


Ah, I misunderstood the parent comment as having that disclaimer on the CV itself.

I agree that if asked directly, it makes sense to talk about it candidly. Hopefully an employer would be happy about someone who understands their weak spots and knows how to correctly use the tools as an aid.


Asking about AI usage in a CV is pointless in my opinion. You are always responsible for what's written in there. If they don't like the writing style, then they don't.


Interviewers directly asking whatever bothers them is fine IMHO. The alternative is keeping a negative impression when there could have been an insightful exchange, and the candidate also gets to know what to expect from the company.


If you have access to Microsoft Word, I'd customize the grammar checker settings to flag more than what is enabled by default. They have a lot of helpful rules that many are oblivious to because it's all buried deep in the preferences. Then adopt the stance of taking the green lines under advisement but ignoring them if your original words suit your preference. That will get you polished up without submitting to AI editorial mundanity.


Honestly, the issue is that most people are poor writers. Even "good" professional writing, like the NY Times science section, can be so convoluted. AI writing is predictable now, but generally better than most human writing. Yet it can be irritating at the same time.


It reads like Linkedin slop, not AI slop.



