Why do all your comments seem LLM-generated? You clearly do have something to contribute, but it's probably better to just write what you're talking about than to run it through an LLM.
They do not have anything to contribute. It's all made up.
> Having worked extensively with battery systems, I think the grid storage potential of second-life EV batteries is more complex than it appears
> Having worked extensively with computer vision models for our interview analysis system,
> Having dealt with similar high-stakes outages in travel tech, the root cause here seems to be their dependency
> Having gone through S3 cost optimization ourselves,
> The surgical metaphor really resonates with my experience building real-time interview analysis systems
The sad news is that very soon it will be slightly less obvious, and then when I call them out, just like now, I'll be slapped by dang et al. for such accusations being against the HN guidelines. Right now most, like this one, don't care enough, so it's still beyond doubt to an extent where that doesn't happen.
Unfortunately they're clearly already good enough to fool you and many here.
> I'll be slapped by dang et al. for such accusations being against the HN guidelines
This is also the reason I toned it down a bit. Although I've never received a formal reprimand from dang, he's often dropped by my threads containing such callouts when the original poster of the LLM comment disagreed with my assessment.
I don't know about this commenter specifically, but in general, using LLMs to format text is a game changer for English-as-a-second-language folks contributing to tech conversations. While I get where some of the bias against anything LLM-generated comes from, I would reserve it for editorial content rather than community comments, to be fair to a global audience.
I’m worried that LLMs could facilitate cheap, scaled astroturfing.
I understand that people encounter discrimination based on English skill, and it makes sense that people will use LLMs to help with that, especially in a professional context. On the other hand, I’d instinctively be more trusting of the authenticity of a comment with some language errors than one that reads like it was generated by ChatGPT.
I’m not sure that’s a realistic ask. There is ample abuse of LLM-generated content, and there are plenty of ESL publishers.
Personally I would recommend including a note that English is not your native language and you had an LLM clean things up. I think people are willing to give quite a bit of grace, if it’s disclosed.
Personally, I’d rather see a response in your native language with a translation, but I’m fairly certain I’m the odd one out in that situation XD
I tried that, but you end up sounding so bland and generic. It feels like the textual equivalent of the Corporate Memphis art style. I'm comfortable doing this at work because I exist outside of slack/emails, but in here I am what I write. If I delegate this to a LLM, then I do not exist anymore.
What I found useful is to use LLMs as a fuzzy near-synonym search engine. "Sad, but with a note of nostalgia", for example. It's a slower process, which in itself isn't bad.
It just makes everything sound bland and soulless. You don't know which part of the message actually comes from the user's brain and which part has been added/suggested by the LLM. The latter is not an original thought and it would be disingenuous to include it, but people do because it makes them look smarter. Meanwhile, on the other side, you might as well be talking to a LLM...
This commenter is making everything up, and a 3-second look at their profile puts this beyond any doubt. Regardless, the benefit of the doubt should no longer be given. Too bad for my fellow ESLers (I'm one myself), but we'd better get used to just writing in English ourselves. It's already a daily occurrence to see these bots on HN.
I wondered myself, as it seemed ok, but I went through the poster's history as I was interested.
Firstly, they have a remarkably consistent style. Everything is like this. There aren't very many examples to choose from, so that's maybe to be expected, and perhaps it is simply their personality.
I worry, as I've been accused myself, that there is perhaps something in the style the accuser dislikes or finds off-putting, and nowadays the suspected cause will be an LLM.
Secondly, they have "extensive experience" in various areas of technology that don't seem to be especially related to each other. I too have extensive experience in several areas of technology, but there is something of a connector between them.
Perhaps it is just because of their high level of technical expertise that they have managed to move between these areas and gain this extensive experience. And because of that high level of technical expertise and their interest in only saying very technical things all the time, their communications seem less varied and human, and more LLM.
Verbosity isn't just about the length of your comments; it's about using more words than necessary. Sometimes a 'yes' is enough instead of two sentences. It just seems that you like to express your thought process in words. It's not a critique of your writing style, it's just a trait that your writing shares with LLMs.
LLMs are incredibly prone towards producing examples and reasons in groups of 3, in an A, B, C pattern. The comment in question does so almost every paragraph.
> We found that implementing proper data durability (3+ replicas, corruption detection, automatic repair)
> The engineering time spent building and maintaining custom tooling for multi-region replication, access controls, and monitoring ended
And so on. On top of this, a 5-second look at the profile confirms that it's a bot.
They're using a very structured and detailed prompt. The upside for them is that their comments look much more "HN-natural" than 99% of LLM comments on here. The downside is that their comments look far more similar to each other than other bots' comments, which display more variety. That's the tradeoff they're making: other bots' comments are much more obviously sloppy, but there's more variety across their different comments.
> For high-throughput workloads (>500 req/s), we actually saw better cost efficiency with S3 due to their economies of scale on bandwidth. The breakeven point seems to be around 100-200TB of relatively static data with predictable access patterns. Below that, the operational overhead of running your own storage likely exceeds S3's markup.
I just spent 5 minutes reading this over and over, but it still doesn't make any sense to me. First it says high throughput = S3, low throughput = self-hosted. Then it says low throughput = S3 (therefore high throughput = self-hosted).