Hacker News | brushfoot's comments

The edit history of the announcement is quite a ride:

> [2025-11-27T02:10:07Z] it’s abundantly clear that the talented folks who used to work on the product have moved on to bigger and better things, with the remaining losers eager to inflict some kind of bloated, buggy JavaScript framework on us in the name of progress [1]

> [2025-11-27T14:04:47Z] it’s abundantly clear that the talented folks who used to work on the product have moved on to bigger and better things, with the remaining rookies eager to inflict some kind of bloated, buggy JavaScript framework on us in the name of progress [2]

> [2025-11-28T09:21:12Z] it’s abundantly clear that the engineering excellence that created GitHub’s success is no longer driving it [3]

---

1: https://web.archive.org/web/20251127021007/https://ziglang.o...

2: https://web.archive.org/web/20251127140447/https://ziglang.o...

3: https://web.archive.org/web/20251128092112/https://ziglang.o...


On the previous HN article, I recall many comments saying they should change this and leave the politics/negative juju out because it was a bad look for the Zig community.

It would appear they listened to that feedback, swallowed their ego/pride, and did what was best for the Zig community with these edits. I commend them for doing what's best for the community at the cost of some personal mea culpa edits.


I often find we don't appreciate enough people accepting their failures and changing their mind. For some reason I see the opposite: people respecting those who "stick to their guns" or double down when something is clearly wrong. As you say, the context matters and these edits seem to be learning from the feedback rather than saving face since the sentiment stands, just in a less needlessly targeted way.

Never understood that either. If someone was wrong and bad, and now they're trying to do right and good, we need to celebrate that. Not just because that's awesome in itself, but also to give the opportunity and incentives for others in the future to do better.

If everyone is treated as bad regardless of whether they're trying to change, what incentive would they have to change at all? It doesn't make any sense.


The incentive is less about morals and very much about self-preservation.

With online mobs, when the target shows any sort of regret there is blood in the water and the sharks feast. It sometimes turns into a very public form of struggle session for the person under scrutiny. Besides avoiding the faux pas in the first place, one well-tested mitigation is to be absolutely unapologetic and wait for the storm to blow over.


For what it’s worth, I found the original announcement childish and unnecessarily negative towards the people working on the product (against their own CoC, which I found hilarious and hypocritical), and I find it refreshing that they updated the post to phrase their criticism much more professionally.

I think that real honesty works well as long as you have the character to stand up for yourself. An unflinchingly honest self-assessment which shows that you understand the error and rectified it is almost always the path to take.

Acknowledgement of mistakes does not invoke much of a mob reaction unless there is wavering, self-pity, or appeals for leniency. Self-preservation should be assumed and not set as a goal -- once you appear to be doing anything that can be thought of as covering up, minimizing, or blaming others, the mob will latch on to that and you get no consideration from then on.


The other part of the equation is not letting bad people get away with doing bad stuff if they do good stuff after that. The return on doing bad stuff, then good stuff has to be greater than the return on only doing bad stuff, but less than the return on only doing good stuff. It should increase over time the more you don't do bad stuff again.

I agree with the sentiment (people changing their minds), but the flipside to that is people pleasing. Someone who capitulates under even the slightest pressure is not much better than the person who is set in their ways.

The trouble there, of course, is that the motivation for changing (or not changing) one's mind is not always clear, and it's easy to score points from spinning it one way or another.


Engineers are not exactly famous for people-pleasing. Maybe management, but engineering? Maybe some fresh junior?

I'm not convinced that the existence of a low-probability event justifies normalizing the regular occurrence of a much more likely (and negative) event, like a belligerent engineer throwing a fit in a design meeting. I'd go as far as to say I'm open to more people-pleasers in engineering.

Also, fwiw, if you want to know why someone changed their mind, you can just ask them and see how you feel about the answer. If someone changes their mind at the drop of a hat, my guess is that their original position was not a strongly held one.


You and I obviously have different experiences because I encounter belligerent engineers much less frequently than ones who are enthusiastic to do what they can, and those who don't want to rock the boat when challenged.

I thought I made a fairly innocuous point, I don't even think I was talking about engineers specifically.


You can’t read people’s minds, so when in doubt, assume good intentions.

It’s not particularly relevant (to me, as a random non-Zig-affiliated HN reader) why they righted their wrongs; as long as they did, I find it positive (at least better than if they had left the monkey comments in the post).


Mind-reading tech is here - a reality. Look up radiomyography and EEG-deciphering neural networks. You shouldn't though, not without permission.

Well, it's not like it's a simple black-and-white situation, universally applicable to every debate in human history. Sometimes it is relatively better to be open-minded and able to change one's own opinion. Sometimes it is relatively better to keep pushing a point if it is rational and/or morally correct.

The reason the latter stance is often popularized and cheered is that it is often harder to do, especially in adverse conditions, when not changing your opinion has a direct cost in money, time, or sanity, or in rare cases even freedom. Usually it involves a small group or an individual standing against a faceless corporation, making it even harder. Of course we should respect people standing up to a corporation.

PS: this is not applicable if they are "clearly wrong" of course.


Consider the plight of a policy-maker who changes their stance on some issue. They may have changed their mind in light of new information, or evolved their position as a result of deeper reflection, personal experience, or maturation. Opponents will accuse them of "waffling" or "flip-flopping", indicating a lack of reliability or principles (if not straight-up bribery). Elected officials are responsible for expressing the will of the people they represent, so if they're elected largely by proponents of issue X, it is arguably a betrayal of sorts for them to be as dynamic as private citizens.

This is tangential to the original topic of insider trading, where the corruption is structural / systemic -- akin to how "conflict of interest" objectively describes a scenario, not an individual's behavior.


The demonization of "flip-flopping" is so stupid. Bro, I want my politicians to change their minds when new facts arise or when public sentiment changes. The last thing we need is more dogmatic my-way-or-the-highway politicians that refuse to change their minds about anything.

The underlying issues are:

1) People don't really vote based on logic and sound reasoning. They vote based on what sounds right to them. If they're unhappy with something, they vote for somebody who also claims to be unhappy about it, regardless of whether he has any actual solutions.

2) Even for the minority who wants to vote based on sound principles, it's very hard to push information back to them. If the politician changes his mind, he has to explain it to his voters. Are there really platforms which allow in-depth conversations in political debates?

Every university classroom has a whiteboard and a projector. Because you need to draw graphs, diagrams, etc. You need to explain the general structure and then focus on the details without losing track of the whole.

Is there a single country where politicians use either when talking to each other or voters?


While I agree with you, I find it hard to argue against the view that politicians are elected for the views they held during their campaign. They may change their mind after being elected, but their constituents that voted for them will not all change their mind simultaneously. To the ones that don't change their mind, it does appear to be a betrayal of their principles. A rational politician would not want to gain that kind of reputation out of pure self-interest.

I would be much more inclined to continue voting for a politician who could explain their policy position as it changes in an open and sensible way. Politicians putting on a speech that sounds truthful and honest and like a discussion is happening between adults is so rare - it seems that very few people want that. I do though.

Reminds me of Stephen Colbert's roast of George W. Bush at the 2006 White House Correspondents' Dinner:

> The greatest thing about this man is he's steady. You know where he stands. He believes the same thing Wednesday that he believed on Monday, no matter what happened Tuesday. Events can change; this man's beliefs never will.


It's a thing with (online) culture - no matter what you do, you're going to ruffle some feathers.

If no one hates what you're doing, chances are you're not really doing anything.


Well, it was comparing people to monkeys and calling them losers. It was a straightforward personal insult. Writing something in a blog online is like making a public announcement in a market square with hundreds listening. No one except someone who wants to inflame would use such words in the real world. People just forget that they are speaking in public. And in this case not only for himself but also for others.

I was more referring to the practice of "self-censoring" and editing what one wrote after publishing.

Of course you are right and it was distasteful but I'm sure they genuinely felt that way when they first wrote it.


There was no mind change, just a change in published words from a true expression of his mind into a more bland corporate speak

Some would say if you always stick to your guns and double down, you might wind up President.

For me it depends heavily on context.

> I often find we don't appreciate enough people accepting their failures and changing their mind.

Because this plays into a weird flaw in cognition that people have. When people become leaders because they are assholes and they are wrong, then after the wind blows the other way they see the light and do a mea culpa, there is always a certain segment that says that they're even more worthy to be a leader because they have the ability to change. They yell at the people who were always right that they are dogmatic and ask "why should people change their minds if they will be treated like this?"

If one can't see what's wrong with this toy scenario that I've strawmanned here, that's a problem. The only reason we ever cared about this person is because they were loud and wrong about everything. Now, we are expected to be proud of them because they are right, and make sure that they don't lose any status or position for admitting that. This becomes a new reason for the people who were previously attacking the people who were right to continue to attack the people who were right, who are also now officially dogmatic puritans whose problem is that they weren't being right correctly.

This is a social phenomenon, not a personality flaw in these leaders. People can be wrong and then right. People can not care either way and latch onto a trend for attention or profit, and follow it where it goes. I don't think either of these things are in and of themselves morally problematic. The problem is that there are people who are simply following individual personalities and repeating what they say, change their minds when that personality changes their mind, and whose primary aim is to attack anyone who is criticizing that personality. They don't really care about the issue in question (and usually don't know much about it), they're simply protecting that personality like a family member.

This, again, doesn't matter when the subject is trivial, like some aesthetic or consumer thing. He used to hate the new Batman movies but now he says he misunderstood them; who cares. But when the subject is a real life-or-death thing, or involves serious damage to people's lives and careers, it's poisonous when a vocal minority becomes dedicated to this personality worship.

It's so common that there now seems to be a pipeline of born-agains in front of everything, giving their opinion. Sir, you were a satanist until three years ago.


The flaw in your argument is referring to the people who are “always right.”

Those people don’t exist. Which is exactly why the ability to change your opinion when presented with new information is a critical quality in a good leader.


“People who were right all along about this issue” rather than “people who are always right about everything all the time”.

> The only reason we ever cared about this person is because they were loud and wrong about everything

Except we cared about Andrew Kelley because he was right about quite a lot of things (eg the zig design).


Came here to write that. Let us recognize that he accepted our feedback and improved. This is good.

I think it's because when people do a 180 due to public pressure, it's hard to know to what degree they changed their mind and to what degree they are just lying about what is on their mind.

Toning down aggressive phrasing is not "doing a 180", calling the change from "only losers left at GitHub" to "the engineering excellence has left" lying seems disingenuous.

I was responding to the general sentiment of:

> I often find we don't appreciate enough people accepting their failures and changing their mind. For some reason I see the opposite: people respecting those who "stick to their guns" or double down when something is clearly wrong.

Not this specific situation.


As I see it, someone who "listened to that feedback, swallowed their ego/pride" would include a note at the end of the post about the edits. Admitting you were wrong requires not erasing the evidence of what you said.

(He did post a kind of vague apology in https://ziggit.dev/t/migrating-from-github-to-codeberg-zig-p..., but it's ambiguous enough that anyone who was offended is free to read it as either retracting the offending accusation, or not. This is plausibly the best available alternative for survival in the current social-media landscape, because it's at best useless to apologize to a mob that's performatively offended on behalf of people they don't personally know, and usually counterproductive because it marks you as a vulnerable victim, but the best available alternative might still tend to weaken the kind of integrity we're talking about rather than strengthen it.)


> Admitting you were wrong requires not erasing the evidence of what you said.

I don't think there's really an obligation to announce to newcomers, "hey, an earlier version of this post was overly inflammatory." But you should be forthright about your mistake to people who confront you about it, which is what's happening in the forum thread you linked. I think this is all fine.


If those newcomers are following a link from someone who was commenting on the earlier version, I think there is.

Perhaps you should frame things differently if you speak for a company and provide criticism on a public platform, but mean tweets are often far less insulting than some of the business decisions customers and developers are subjected to.

I think the developers here are probably perfectly innocent of these changes. The product managers have to push for this integration or get replaced. This has been a theme at Microsoft for quite a while.


I don't see the need for a note in this case because what was there wasn't wrong; there's plenty of evidence that supports it. It's just that the tone they used was inadequate and very rude for no reason, so they edited it to be more polite. It doesn't seem like a correction or retraction.

No evidence was erased as the evidence exists.

You mean, on a third-party website that currently happens to have a capture of the page outside of the Zig team's control, one which can go down at any time?

The site is open source and the commits are still there. No need to be so dramatic.

https://github.com/ziglang/www.ziglang.org/commit/c8d046b288...


Oh, thanks, I thought watwut meant archive.org. Is this diff also linkable on codeberg?

The reality is he wasn't wrong, he just didn't care to deal with the tone policing concern trolls of HN and elsewhere.

That is absolutely a viable reading of what he wrote, yes.

There is utility in indicating how surprised / concerned you are about a certain process or event. We could flatten out all communication and boil everything down to an extremely neutral "up", "down", and "nailed it to exacting precision".

I find the fact that this painting has been hung crooked by 0.00001º: down

I find torture and mass murder: down

Clearly this is a ridiculous state of affairs. There's more gradations available than this.

Possibly coloured by my Dutch culture: I think this rewrite is terrible. The original sentence was vastly superior, though I think the first rewrite (losers to rookies) was an improvement.

The zig team is alarmed, and finds this state of affairs highly noteworthy and would like to communicate this more emotional, gut instincty sense in their words.

There's a reason humans invent colourful language and epithets. They always do, in all languages. Because it's useful!

And this rewrite takes it out. That's not actually a good thing. The fact that the internet is evidently so culturally USA-ised that any slightly colourful language is instantly taken as a personal affront, which in turn completely derails the entire debate into a pointless fight over etiquette and whether something is 'appropriate', is fucking childish. I wish it wasn't so.

In human communication, the US is somewhat notorious for how flattened the emotional range of interaction amongst friendly folk is. One can bring anthropology into it if one must: loads of folks from vastly different backgrounds all moving to a vast expanse of land? Given that cultural misunderstanding is extremely likely and the cost of such a misunderstanding is disastrously high, best plaster a massive smile on your face and be as diplomatic as you can be!

Consider as a practical example: Linus Torvalds' many famed communications. "NVidia? Fuck you!" was good. It made clear, in a very, very pithy way, that Linus wasn't just holding a negative opinion about the quality and behaviour of the nvidia gfx driver team at the time, but that this negative opinion was universal across a broad range of concerns and extremely so. It caused a shakeup where one was needed. All in 3 little words.

(Possibly the fact that the internet in general is even more incapable of dealing with colourful language is not necessarily the fault of USification of the internet: The internet is a lot like early US, at least in the sense that the risk of cultural misunderstanding is far higher than in face to face communications on most places on the planet).


If I could upvote you, I would. I have never liked the mob of people who think we should all be super-diplomatic corpospeakers who hedge everything, and who think that not doing so is "offensive" or "unprofessional". I definitely didn't think anything was wrong with the original sentences or word usage, because it wasn't aimed at any specific individual with the deliberate intent of being offensive, but at Microsoft itself. And even if the intent was to be offensive, well, on the internet you're always going to offend someone. You could be super nice and say all the right words and someone would still find a way to be offended. Were these circumstances ordinary, I would call out the word usage as well, because it would be uncalled for. But given all the evidence the original points at, it's rather hard to say that GitHub didn't deserve it. And it is also rather difficult for me to see how this wasn't the time or place for such language. Sometimes the only way to get your point across is to be "unprofessional" (whatever that means these days).

  There's a reason humans invent colourful language and epithets. They always
  do, in all languages. Because it's useful!

  I have never liked the mob of people that think we should all be super
  diplomatic corpospeakers who hedge everything and who think that not doing
  so is "offensive" or "unprofessional".
Agreed with you and OP. More to the point, the final rewrite leaves out any meaningful why. Perhaps they could/should be more diplomatic about their distaste, but leaving it out altogether leaves quite the elephant in the room.

Then again the front end rewrite (which GitHub was crowing about for quite a while) and doubling down on AI nonsense got me to stop using GH for personal projects and to stop contributing to projects hosted on GH.


Thanks for pointing this out! I looked at the edit history and without looking at the timestamps assumed it was in reverse chronological order. Seeing that I was wrong brought a smile to my face.

I appreciate that Andrew and the other Zig team members are really passionate about their project, their goals, and the ideals behind those goals. I was dismayed by the recent news of outbursts which do a lot to undermine their goals. That they’re listening to feedback and trying to take the high road (despite feeling a lot of frustration with the direction industry is taking) should be commended.


Zig is the language that was intentionally made to fail and error out on Windows carriage returns instead of parsing them like every other language ever made. They made a version for Windows and then made it incompatible with the default output of every Windows text editor. Their answer was to 'get better text editors' or 'make a preprocessing program to strip out carriage returns' or 'don't use Windows' (they ship a Windows executable).

This was not a group with community or pragmatism in mind from the start.
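For what it's worth, the "preprocessing program to strip out carriage returns" workaround mentioned above is a few lines in any scripting language. A minimal sketch (a hypothetical standalone normalizer, not an official Zig tool; Python used for illustration):

```python
import sys
from pathlib import Path

def normalize_line_endings(path):
    """Rewrite a file in place, converting CRLF (and lone CR) to LF.

    Returns True if the file was modified."""
    data = Path(path).read_bytes()
    fixed = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    if fixed != data:
        Path(path).write_bytes(fixed)
    return fixed != data

if __name__ == "__main__":
    # Usage: python normalize.py file1.zig file2.zig ...
    for name in sys.argv[1:]:
        if normalize_line_endings(name):
            print(f"normalized {name}")
```

Working at the byte level sidesteps any text-mode newline translation, so the same script behaves identically on Windows and elsewhere.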


In all seriousness, this comment really makes me want to try out Zig!

You want a language that releases a compiler on a specific platform then intentionally breaks it for everyone on something trivial just to troll and irritate them?

I like a language that aggressively discourages writing code in Notepad on Windows.

Every text editor on Windows adds carriage returns by default.

You haven't given any actual reason this makes sense. If you don't like Windows, why would you be using it in the first place? Why would you care what text editor people use?

Why would it be ok to release something on a platform just to annoy your own users?


Last I checked even Apple migrated to LF. Perhaps it's time for Windows to stop being the odd man out? Regardless:

  not work with every windows text editor
Last I checked, both Visual Studio Code and Notepad++ make line endings configurable. That covers a plurality of use cases. Even the built-in Notepad has supported CR-only or LF-only line endings for going on eight years now.

> Perhaps it's time for Windows to stop being the odd man out?

This is the same nonsense rationalization that Zig gave. Windows is the odd man out. If you want to release something on Windows, you match an extra byte at the ends of lines. It isn't that hard, and even the simplest toy language does it. It's just part of line splitting; it isn't even something that happens at the language stage.

> Last I checked both Visual Studio Code and Notepad++ can both make line endings configurable.

Last time I checked, that was totally unnecessary, because no other language releases for a platform and tries to punish its users. Options like that exist to make files match while being worked on across different platforms, not so that a compiler can punish and troll its users for using it.
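To put the "it's just part of line splitting" claim in concrete terms: most standard libraries already tolerate both endings for free, and doing it by hand is one extra replace (Python shown here purely as an illustration, not anything Zig-specific):

```python
# Python's splitlines() understands LF, CRLF, and lone CR with no configuration
source = "const x = 1;\r\nconst y = 2;\nconst z = 3;\r"
print(source.splitlines())  # ['const x = 1;', 'const y = 2;', 'const z = 3;']

# A hand-rolled splitter gets the same tolerance in one extra line:
def split_lines(text):
    # normalize CRLF and lone CR to LF, then split on LF
    return text.replace("\r\n", "\n").replace("\r", "\n").split("\n")
```

A tokenizer that treats "\r\n" as a single newline is the same idea applied one character at a time.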


  If you want to release something on windows you match an extra byte on the ends of lines
Did I miss some sort of formal directive from Microsoft or is this just outrage that someone dared do something not up to your standards?

  try to punish and troll its users for using it
Nobody's being punished. Configuring your dev environment is something people do for every language. Let's add some perspective here: we're talking about a single runtime option for your text editor of choice. BFD. More to the point, why isn't your editor or IDE properly supporting Zig files?

> Did I miss some sort of formal directive from Microsoft or is this just outrage that someone dared do something not up to your standards?

It's just the way it works; it isn't my standard, it is literally how any piece of software that detects line breaks behaves.

> Nobody's being punished. Configuring your dev environment is something people do for every language.

No one has to configure around this issue, because it is trivially solved and dealt with by every piece of software on the planet. It takes longer to write an error message than it does to just split a line correctly.

> Let's add some perspective here: we're talking about a single runtime option for your text editor of choice.

Let's add some perspective here: they intentionally broke their own software to upset 72% of their potential users.

> More to the point, why isn't your editor or IDE properly supporting Zig files?

No one has to care about Zig; it's a niche language that doesn't care about its users. It's irrelevant except for Hacker News threads.

If some language started demanding you save all your text files with carriage returns or it would error out, what would you think?

You sound like a lawyer grasping at straws rather than someone with a reasonable perspective that wouldn't be hypocritical when flipped around.


  You sound like a lawyer grasping at straws instead of 
  someone with a reasonable perspective that wouldn't be
  hypocritical when flipped around.
What lawyer speak? You're throwing a temper tantrum over a situation entirely of your own making. That there's a Windows port of Zig and sufficient users to justify its continued existence pretty clearly shows your hyperbole isn't representative in the way you claim.

Were I in a situation where I needed to work with something not expecting LF line termination I'd either configure my dev environment appropriately or find tools that do what I want.

  No one has to care about zig, it's a niche language that doesn't
  care about its users, it's irrelevant except for hacker news threads.
So when it's your tool selection nobody has to care? But when someone else makes a decision you disagree with it's the end of the world? Gotcha. Don't check that checkbox. Stay mad, bro.

> it's the end of the world?

You didn't confront anything I wrote and instead just made up something no one said. All I did say was that Zig is intentionally hostile to its own users, which it is.

If you could actually deal with what I wrote, I think you would have done it already.


From where I'm sitting it seems like it's time for you to take a break from this thread.

I guess we're at the "claim the other person is upset to avoid what they said" (and edit posts) part of the conversation.

No, we're at the "you're making an emotional argument backed by hyperbole and I'm moving on" stage. Look at your language: punished, trolled, "any piece of software", "every piece of software", "it takes longer to write an error message than it does to just split a line correctly", "lawyer grasping at straws".

You're personally aggrieved because someone dared release a compiler that runs on windows but doesn't accept non-standard line endings. I've already addressed what you've said but you've responded with a bunch of handwaving because you're merely making an emotional argument.

If you'd like me at address what you wrote again:

  it takes longer to write an error message than it does it just split a line correctly
It takes longer to write your tantrums than to configure your development environment correctly.

> You're personally aggrieved

Nope.

> doesn't accept non-standard line endings

It is standard on Windows.

> I've already addressed what you've said

No you haven't. You haven't addressed anything I've said, like legitimate reasons for doing it, or what you would think if other languages did the same thing on other OSes.

> you're merely making an emotional argument.

Seems like projection. I wrote things that actually happened.

> It takes longer to write your tantrums

I know it would be convenient to frame things this way, but if you could confront what I'm saying you would have done it with all the chances you had.

Why won't you respond to what I'm saying? I think it's because there is no real defense and you know that.


> This is the same nonsense rationalizations that zig gave.

I'm guessing you didn't live through the early days of webdev, when you had to jump through ridiculous hoops just to support IE. At least back then there was the excuse that IE had the lion's share of the market and many corporate users.

The industry wide acceptance of supporting IE majorly held back what websites/apps were capable of being. Around 2012ish (right as I was leaving webdev) more and more major teams started to stop supporting earlier broken versions of IE (this was largely empowered by the rising popularity of Chrome). This had a major impact on improving the state of web applications, and also got MS to seriously improve their web browser. Moves like this one by the Zig team are the only way you're going to push Microsoft to improve the situation.

Now you may claim "but Windows is 70% of users!" but this issue doesn't impact anyone wanting to run Zig applications, only those writing them. If you're an inexperienced dev that's super curious about Zig, this type of error might be another good nudge that maybe Windows isn't the OS you want to be working on.


> Now you may claim "but Windows is 70% of users!" but this issue doesn't impact anyone wanting to run Zig applications, only those writing them.

No one is confused about how a compiler works. The people being intentionally trolled are called your users when you make a language.

> If you're an inexperienced dev that's super curious about Zig, this type of error might be another good nudge that maybe Windows isn't the OS you want to be working on.

Then why did they make a Windows version? Any normal person just sees that they shouldn't invest time in a language that intentionally annoys its own users for trying it out.

You still haven't come up with any explanation; your whole tangent about Internet Explorer has no relevance. There isn't one part of your comment that makes sense. Why would you even care about other people's OS and text editors? What kind of fanaticism would lead to wanting to use a language because it intentionally annoys users of something you aren't even involved in?

The whole thing is basically a case of "this thing doesn't stand on any merits; I've just decided I don't like certain people, and Zig did something to upset them, even though Zig is really just shooting itself in the foot".


that's amateur-level anti-Windows trolling

much better to put a colon in a filename, or call part of your toolchain "aux.exe"

https://help.interfaceware.com/v6/windows-reserved-file-name...

works like a treat
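Those reserved device names are easy to screen for ahead of time, too. A minimal sketch (the list below is the commonly documented set from Microsoft's file-naming docs; it may not be exhaustive, and real tooling should consult the official list):

```python
# Sketch: detect Windows-reserved base names before writing a file.
# The usual suspects are CON, PRN, AUX, NUL, plus COM1-9 and LPT1-9.
from pathlib import PurePath

RESERVED = {"CON", "PRN", "AUX", "NUL",
            *(f"COM{i}" for i in range(1, 10)),
            *(f"LPT{i}" for i in range(1, 10))}

def is_windows_reserved(name: str) -> bool:
    # Windows ignores the extension: "aux.c" is just as reserved as "aux".
    stem = PurePath(name).name.split(".")[0]
    return stem.upper() in RESERVED
```

So `is_windows_reserved("aux.c")` is true while `is_windows_reserved("main.c")` is not, which is exactly why that file name bites people on checkout.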


Related: aux.c in the kernel source https://bugzilla.kernel.org/show_bug.cgi?id=68981

At least, this change will make source files not portable, which is obviously bad.

Use a real operating system and problem solved?

Is this directed at Zig? They're the ones that released a windows executable.

> It would appear they listened to that feedback, swallowed their ego/pride and did what was best for the Zig community with these edits.

Indeed. The article even links to it.

https://ziggit.dev/t/migrating-from-github-to-codeberg-zig-p...


>It would appear they listened to that feedback, swallowed their ego/pride and did what was best for the Zig community with these edits

They sugarcoated the truth to a friendlier but less accurate soundbite is what they did.


I did prefer that honest line about bloated, buggy Javascript framework. Otherwise might as well ask an LLM to spit out a sanitized apology text for your change in provider. Just like ten thousand identical others copied from a playbook. Allow your eyes to comfortably glaze over with zero retention.

Have people already forgotten that the ReactJS port made github slow ? https://news.ycombinator.com/item?id=44799861

The revised, politically-correct, sanitized re-framing that you apparently insist on does not convey this very important point of information.

We have freedom of speech for a reason - blunt honesty conveys important information. Passive language does not.


Perhaps the final edit should have included the complaint about 'buggy bloated Javascript' as that's a very substantive issue - and now I don't know if they changed that as 'tone' or because they decided that technical criticism wasn't correct, and there are other issues?

I wish they edited it to be more extreme. Go full Torvalds like the good 'ol days before every opinion was "political".

Well, no, they still acted based on the original ego/pride, they just changed blogpost to look different.

I mean, reason of "we don't want to be tied with direction MS takes" is good enough, not sure why they felt need to invent reasons and nitpick some near irrelevant things just to excuse their actions


Yep, agreed. I think this would have been the better reason too, but anyway - I also don't think it is so important either way.

The big problem still remains: corporations control WAY too much in general.


[flagged]


   It just seems like someone got upset about something they can't articulate
Dunno, maybe their complaints don't play well to the HN crowd, but I thought it was pretty clear what their pain points with GitHub were.

ICE, Actions, and Microsoft, not a single complaint about git itself. All I see is that they have CI issues coupled with the dumbest anti-AI policy, which is impossible for them to enforce. Giving up your donations and losing half your community doesn't seem like an intelligent move when all you had to do was update your CI.

> ICE, Actions, and Microsoft, not a single complaint about git itself

Codeberg is also a Git-based project host. It doesn't even support other repo types. Why would you be expecting the latter?

If a project announcement or article headline says someone/something is quitting or leaving GitHub, it makes a lot of sense to assume that their issue is with GitHub (and in this case, it would be an assumption they'd be right about).


I was pointing out how ironic it was for them to move from git SaaS to git SaaS while having no issues with git on the git SaaS they're moving away from. Make sense?

> Make sense?

Only if they use it purely as a git SaaS which they don't, it's also an issue tracker and discussion forum. Even PRs aren't strictly a git concept. Given they use all those things and given they're against having AI features built into them, it does not seem ironic to me at all.


That's not ironic.

If they had trouble with Git on GitHub, and then left GitHub for Codeberg, where they also have to use Git, then that would be very strange.

Instead, they had trouble with GitHub, so they left GitHub, which makes perfect sense.


You're conflating GitHub the platform with GitHub the bundle of services. CI is optional, swappable, not unique to GitHub. Sponsorship infrastructure and discoverability are not. The complaints target the optional layer. The migration sacrifices the sticky layer. That's backwards, and ironic, with the intention of being performative. It's almost like selling your car because a tire lost some air, lol.

This is an insane comment right from the thesis.

  ICE, Actions, and Microsoft, not a single complaint about git itself.

  all you had to do is update your CI
Updating your CI only addresses one of the issues you raised, and you forgot about the front-end complaints which also wouldn't be addressed by "updating your CI".

It's all faux outrage, they didn't give a shit about ICE or MSFT until they could use it as a rage bait prop.

Imagine being a slave to any SCM UI when CLI tools and desktop clients have existed for ages, not to mention integration into nearly every IDE. Also, what they're describing as "random" workflows is the classic case of a CI build machine going offline and coming back later.

Regardless, best of luck to them, hopefully they don't run into any more "monkeys", that would be terrible for them.


That's kind of what they are doing - the move is 'updating their CI' to Codeberg Actions which is presumably more reliable. All the git workflows stay the same.

Eh, it looks like they want to hide that they called people monkeys and losers.

If they would own up to it and say sorry, then your point stands. But that's not what happened here.


> I completely agree with this. I performed really poorly on this axis. I’m sorry to the Zig community for that. I’ll take my L and get back to working on std.Io and the rest of the roadmap. [1]

[1] https://ziggit.dev/t/migrating-from-github-to-codeberg-zig-p...


> I do feel bad for hurting your feelings but I also strongly believe that you should not be proud of working for Microsoft, and particularly on GitHub for the last 5 years. I truly am sorry but you need to be called out.

Crocodile tears.

https://hachyderm.io/@andrewrk@mastodon.social/1156234452984...


Thank you for sharing this. :(

They should know that crap software is rarely as intentional as they made it out to be in the initial version of the text; what you get is what people are able to build in the environment they are in (that matters too). Capability and environment.

I think the Reddit mobile website team might be the exception to that. What they make is a particular brand of unusable and from what I remember there is evidence of them talking about how that was intentional.

Reddit is trying to steer everyone into using their mobile app, which schlorps up as much personal data as it possibly can. I normally don’t go in for the whole mustache twirling thing, but given their previous actions in shutting down all third party apps, I’m fine in this case with accusing them of outright malice.

I think they recently banned people from creating their own API keys, which is a thing that people were doing to enter into their third party apps to bypass the ban - every copy of the app was registered as a single-user app. Now if you want to make any app or bot, you either screen-scrape, steal an API key, or get the approval of Reddit management.

Kelly’s indignant attitude and commitment to “engineering excellence” suggest a bright future for Zig. It’s good to see the leader of a technical project get angry about mediocrity.

[..] in a product not people. Insulting people is never a solution.

Sometimes people need to be shocked awake. Reality is harsh, and gentle language doesn't change that.

I've spent time in restaurant kitchens around chefs that believed "some people need to be shocked awake".

The people that got yelled at didn't do markedly better after getting yelled at, but they sure had a worse attitude towards their peers and chefs.

None of the chefs I talked to about it had anything better than "that's how it was when I started in kitchens" as actual justification.


The methods for influencing results within an organization exist on a spectrum, and failing to adequately utilize the breadth of that spectrum is always counter-productive.

If you want to measure the language used by the productivity of the desired outcome, I'd encourage you to survey the ratio of comments talking about the problems with GitHub's very broken CI and UX against how many people expressed an objection to the language and words used in the announcement. Failure to convey ideas with tact and respect is demonstrably counter-productive.

I assume you'll choose to dismiss those who object as fragile birds... but then you don't really care about the productivity towards the goal then do you? You just want to be ok with being mean because it doesn't bother you.


Why do you consider that a useful metric? Hit dogs holler, after all.

> Why do you consider that a useful metric? Hit dogs holler, after all.

you do...

> The methods for influencing results within an organization exist on a spectrum, and failing to adequately utilize the breadth of that spectrum is always counter-productive.

Or did you have a different expectation for result in mind? The one you thought would be counter-productive without insults.

My assumption was that ark wanted to put support behind codeberg, and encourage others to take a critical look at how bad github has become, and to consider other options. Not rally additional support and defense of github's actions.


I do about what?

I haven’t actually used harsh language with anyone so I’m not sure what your point is. I have been on HN long enough to know that expressions of strong negative emotion are punished here. That says absolutely nothing about the effectiveness of different methods of influence within an organization.

I think if people are rallying to defend GitHub due to some language that ruffled their feathers and not objective technical merit then they have completely lost the plot as engineers.

As far as Andrew’s goals, I think he has been pretty successful within the framework of the attention economy.


I'm talking about the ideas, threads and conversations that are occupying the head space of others.

> then they have completely lost the plot as engineers.

I think most people who would call themselves software engineers have lost the plot of engineering.

That applies equally to those who are blind to the fact that engineering only exists to create stuff for humans. Most engineers are ignorant of the need to consider the humans they're supposedly building for.

The point is to make shit better, not worse, and not more inhuman.


If you are hitting the dog unprovoked don’t be shocked if it bites you.

It can be true, that a person needs a wake-up call, but it can also be true that the person(s) doing the "shocking" are sadistic, abusive, or psychopaths.

You’re not mining coal, get real. Either use efficient techniques to make people do the intellectual work necessary to achieve whatever goal you have in mind, or you’re just deluding yourself thinking you’re some kind of “reality expert” while being an asshole, meaning they might still do it, but it would be despite your leadership, not because of it.

Why does intellectual work imply that people doing poor work need to be treated like fragile little birds?

Intellectual work requires a bit of creativity (across all the domains I can think of); abuse of any kind increases stress, and stress decreases creativity, the ability to problem-solve, and resilience (or the ability to endure the difficulty of solving hard problems).

But even if that wasn't true. There's a significant difference between confronting the harshness of reality. And behaving in a way that makes reality suck more. Every human deserves to be treated with dignity, and a base level of respect.

Suggesting that someone is fragile and weak, because they object to being insulted, or object to the careless and needless stripping of dignity and humanity from people is a wild take.


I don't think porting everything over to React, making the site slower, bloated, and buggier, is "creativity".

I agree that people should be treated with dignity...but groupthink & herd mentality often strips people of their humanity.

So the criticism is really about culture & abstract attractors...not the individual people who often act rationally within the context of the system.


I started working on srctree 2 years ago because of how awful GitHub has become. I don't think there's much creativity in this trend line... But the question was "why is insulting people doing intellectual work bad?", not "do you think the changes at GitHub are creative". That said, I do think the changes require a bit of intellectual work, and that no matter how shitty GitHub has become, it's unreasonable to attack people unprovoked.

Can you only provide clear and direct feedback on poor work by insulting people?

No but I won’t rule it out for the incorrigible

Ok, but that’s still not effective as a leadership course of action. Calling people names might make you feel like a big man inside, but that’s it, it won’t accomplish anything, that’s only for your personal benefit, not the project, not the product and definitely not the team.

Actually if you completely rule out the possibility of harshness then you are giving license to let yourself be walked over and for standards to drop to zero. It might make you feel like a big enlightened man inside to do so, but the proper application of firmness and pressure is absolutely effective in leadership.

Derision is legitimate way to change behavior when other avenues fail.

A reasonable person that's acting maliciously can be reasoned to stop their behavior.

An unreasonable person that's acting in good faith cannot be reasoned to stop their behavior because they are stupid.

If after attempts to reason with the unreasonable fail, it is not an insult or ad hominem attack to explain the person is acting stupidly.


>Insulting people is never a solution.

That can not be absolutely true.


Nothing is absolutely true, but in this case definitely

This news story was read by investors and leadership inside of Microsoft.

That wouldn't have happened if they hadn't derided whatever idiot decision makers thought it was acceptable in the first place.


Anger is a mind killer. Build software out of love. Love for engineering, innovation, creation, and love of working with people who feel the same way.

Anger contributed directly to the start of the free software movement:

https://www.gnu.org/philosophy/rms-nyu-2001-transcript.txt


A righteous, passionate anger can be indistinguishable from love. Having and committing to something worth fighting over, however bloody the battles may be, can make a life just as meaningful as one that practices disciplined quiescence, reflection, acceptance, etc. Love is what it is because it must paradoxically accept its opposites; love can be anger, anger can be love. The real mind killer is a pat moralism!

Thus spake zarathustra etc etc..


> A righteous, passionate anger can be indistinguishable from love. ... love can be anger, anger can be love.

These are just word games. Blurring and mixing what we mean with different words. To say what? Passion takes different forms and can be a hell of a motivator? Nobody disputes that.

There's clearly a difference between anger and love. GP was addressing that difference and recommended to focus on the healthier of the two. That's good advice.


Is there? I am not playing games!

What is the anger that arises from you when one you care for is hurt because of some violence or injustice? Is that not an expression of love?

What is that particular anger you can feel towards a romantic life partner of many years? One that can only be based in an already profound intimacy, in some deep fidelity? Don't you feel that same love you have always felt for them, but in a different color?

What is the anger you feel when you see grand injustices? Hate crimes, genocides, crimes against freedom.. Isn't that something like a humanistic love?

To make love simply the "healthier" option is to totally destroy it! It makes it, like, at best a pragmatic maxim and at worst a weird kind of imperative (we should be healthy after all..). But love is not an imperative, it's a (beautiful, amazing, natural) condition. And it is not always "healthy," not always without anger, but always "good" in that you can't go wrong following it.


Of course there is a difference between anger and love. Either one can be present without the other, and that they can sometimes mix and play off each other does not change that they are different.

You are playing around with words to pretend they are the same. That's very poetic and dramatic, but I hope you realize that love is not the same as anger, and that neither truly requires the other.

If done right, love can eat anger. If done wrong, anger will eat love, and much more. These outcomes are not the same. That's where the game gets serious, and that's why I'm being such an ass about what you wrote.


Sure ok. I do think it's ultimately just semantic. That is: if you start from the definition of love as a state we can, like, get into or not, if it is more something we do rather than experience, then sure, the state of anger and the state of love are different, and the latter definitely seems preferable. I only get "dramatic" here insofar as I feel like that's just kind of an unsatisfying definition! Like, love songs are sometimes sad songs too. I just reject this psychological/behavioral starting point and offer that what we call "love" should be a broader, deeper, messier thing is all.

But this is really heady woowoo stuff at this point, and it's quite ok to disagree on stuff of this sort! I understand you will probably continue to dismiss all this as sophistry or playing with words or whatever, but know either way that I do recognize and respect your point here! It can probably be seen as a choice: love can be a desirable state or a dramatic raison d'être. For the former, you're probably a pretty happy monk/stoic type; for the other, you're more like the classic Romantic, the artist, etc.


"Love", the word, can stand for so many more or less related concepts. Is it something we feel? Is it something we do? We're always picking a nebulous definition, a different one each of us, different ones at different times.

"Love" is surprisingly ill-defined for the power it has. Maybe that's even part of its power: being a vague word to refer to powerful things within us to try to give them meaning, and a handle to hold them by, which then of course is also a handle that has a hold on us.

That's why, I'd say, it's important to be careful with the other words we place around that word "love", because they can illuminate or conceal, sharpen or blur, all the while gripping people by that handle.

I appreciate what you're doing to promote a better understanding of that word here and give it some context that was missing from the post you originally reacted to. Of course, "love" may mean different things to a Romantic poet or a monk or a teenager or a long-married couple; none of them are wrong, none takes away from the other, and all with some pretty messy edges, probably.

The poster you reacted to used "love" and "anger" to refer to opposing tendencies and motivations within us. You pointed out that "love" and "anger" can overlap. That's right, of course, I don't think anyone would say otherwise. I just think it's not what OP was talking about when they used these words. They used a different, albeit related, concept of love from yours, for a different purpose, relying on the difference between their chosen form of love and anger to make their point. You pointed out that things can be seen differently; that's fair.

What I do object to, though, is the conflation of anger and love. I understand what you're getting at, but I think it's important to keep these things separate and distinguishable, because it is not good to mistake anger for love, or excuse anger with love.

It may seem as if they are inextricably mixed, nothing we can do about it! But I think this is, please excuse the direct language, a little lazy and a little cheap. It's quick to use a few words to stir up some emotions and romantic notions that are sleeping in our hearts. But it opens the way to let anger reign in the name or even guise of love, which is, morals aside, not gonna lead anywhere nice at all. Romantic? Yes. Good? Bad? Ugly? We all have choices, and we should consider them.


I would contend that anger is the only thing that drives any kind of progress. An abundance of love means accepting, adjusting, and forgiving, which are antithetical to systemic change.

You need that middle-finger-to-everyone, "let me show you how it's really done" energy to build anything meaningful. Pretty much all the great builders I can think of in tech history are/were deeply angry people.


Constant anger surely is. But it is also a damn good spark at times. Just can't let it fester.

to quote something I said a day ago about AI spotting in the posts of other people:

https://news.ycombinator.com/item?id=46114083

"I think that writing style is more LinkedIn than LLM, the style of people who might get slapped down if they wrote something individual.

Much of the world has agreed to sound like machines."


AI witch-hunts are definitely a problem. The only tell you can actually rely on is when the AI says something so incredibly stupid that it not only fails to understand what it is talking about but the very meaning of words themselves.

E.g., metaphors that make no sense or fail to contribute any meaningful insight, or extremely clichéd phrases ("it was a dark and stormy night...") used seriously rather than for self-deprecating humor.

My favorite example of an AI tell was a YouTube video about serial killers I was listening to for background noise, which started one of its sentences with "but what at first seemed to be an innocent night of harmless serial murder quickly turned to something sinister."


which is unfortunate, because pre-AI, "but what at first seemed to be an innocent night of harmless serial murder quickly turned to something sinister." would just be a funny bit of writing.

Straight from a noir detective pulp, even.

This has always been the case in the "corporate/professional" world imo.

It's just much easier now for "laypeople" to also adjust their style to this. My prediction is people will get quickly tired of it (as evidenced by your comment)


Question: would you go to a public place and call a person who is listening to you a loser or a monkey with the risk of getting your face smashed in?

Companies make public announcements with the risk of getting sued left and right. Normal people choose careful words in public. On the Internet, it seems different rules apply in public. Laypeople are not adjusting to corporate talk; laypeople are more and more aware of the public nature of the Internet and behave accordingly (most are, like in real life, mute).


Also

> More importantly, Actions is created by monkeys ...

vs

> Most importantly, Actions has inexcusable bugs ...

I commend the author for correcting their mistakes. However, IMHO, an acknowledgement instead of just a silent edit would have been better.

Anyway, each to their own, and I'm happy for the Zig community.


He acknowledged. Linked in the article.

He hid the comments he made and apologized to the Zig community for his behavior. He never apologized to the people he harmed (the 'losers' at GitHub in this context).

[flagged]


You must be fun at parties.

Harm is damage to health. He damaged their health.

Thanks


That was insulting and I am thus harmed.

Edit: I upvoted you because I love parties.


"bloated, buggy Javascript framework"

Companies with heaps of cash are (over)paying "software engineers" to create and maintain it

Millions of people, unable to disable it, are "active users"

When I use Github servers I only use them to download source code, as zipballs or tarballs. I don't run any JS

The local forward proxy skips the redirects when downloading

   http-request set-path %[path,regsub(/blob/,/raw/,g)] if { hdr(host) github.com }
   http-request set-path %[path,regsub(/releases/tag/,/releases/expanded_assets/,g)] if { hdr(host) github.com }
Works for me
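For anyone without haproxy handy, a rough Python equivalent of what those two `regsub` rewrites do to GitHub paths (a sketch for illustration, not a drop-in replacement for the proxy config):

```python
# Sketch of the two haproxy path rewrites above: send "blob" views to
# "raw" downloads, and release tag pages to their expanded asset listings.
import re

def rewrite_github_path(path: str) -> str:
    path = re.sub(r"/blob/", "/raw/", path)
    path = re.sub(r"/releases/tag/", "/releases/expanded_assets/", path)
    return path
```

E.g. `/user/repo/blob/main/README.md` becomes `/user/repo/raw/main/README.md`, which serves the file contents directly with no JS involved.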

Whatever the wording, what they wrote about truly shows on GitHub. There are many things wrong with its code display ... all of which used to work fine, or at least were not added in this buggy state in the first place.

Code folding is buggy. Have some functions that have inner functions or other foldable stuff like classes with methods and inside the method maybe some inner function? It will only show folding buttons sporadically, seemingly without pattern.

Also standard text editing stuff like "double click and drag" no longer properly works without issues/has weird effects and behavior. The inspection of identifiers interferes with being able to properly select text.

The issue search is stupid too, often doesn't find the things one searches for.

You must be logged in to search properly too.

Most of the functionality is tied to running that JavaScript.

In short, it shows typical signs of a platform that is more and more JavaScriptianized with bloated frameworks making things work half-assed and not properly tested for sane standard behavior.

But there is more. Their silly AI bots closing issues. "State bot". "Dependabot". All trash or half thought out annoying (mis-)features. Then recently I read here on HN, that apparently a project maintainer can edit another person's post! This reeks of typical Microsoft issues with permissions to do things and not properly thinking such a thing through. Someone internally must be pushing for all this crap.


Do people actually use GitHub to inspect code? I figure for anything that's not a 1-second lookup, I might as well just do at least a shallow clone of the repo, and look through it with my own personally-tailored editor instead.

Not to say their implementation doesn't suck. I just wouldn't know because even a non-buggy one would probably still be a subpar experience.


For my own PRs I like reading the changes again in the UI.

It is almost like getting someone else to proofread it since my mind isn’t as good at filling in the blanks like it is when looking at the code in the editor I wrote it in.


I do the same.

At least he edited it to something more palatable. I vastly prefer someone who can admit to making a mistake and amending what they said to someone who doubles down. The latter attitude has become far too normalised in the last few years.

Is political correctness necessary to have a thriving community / open source project?

Linux seems to be doing fine.

I wouldn't personally care either way but it is non-obvious to me that the first version would actually hurt the community.


How you treat others says everything about you and nothing about the other person.

In this case, the unnecessary insults detract from the otherwise important message, and reflect poorly on Zig. They were right to edit it.


People who are unhappy with Zig are free to use something else and not engage with the community.

If he kept his comments within the Zig community and didn't go all over social media denigrating GH employees, you'd be right.

You're allowed to have negative opinions of GitHub employees on social media.

Cool.

On the other hand, some notable open source leaders seem to be abrasive assholes. Linus, Theo, DHH, just three examples who come to mind. I think if you have a clear vision of what you want your project to be, then being aggressively dismissive of ideas that don't further that vision is necessary just to keep the noise to a low roar.

Yeah, bad behaviors of others does not excuse yours.

Even Linus doesn’t act that way anymore. Here’s him a few years ago:

> This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for.

> Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry. The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

> I am going to take time off and get some assistance on how to understand people's emotions and respond appropriately.

He took time off and he’s better now. What you call “political correctness” is what I and others call “basic professionalism”. It took Linus 25 years to understand that. I can only hope that the people who hero worshipped him and adopted a similar attitude can also mature.



If you’ll notice, he called the code garbage, not the author. Judging by how bad the code was, I think this interaction was fine. This actually shows the progress Linus made in improving himself.

> And sending a big pull request the day before the merge window closes in the hope that I'm too busy to care is not a winning strategy.

I wish I could say this.

But unfortunately delaying your big PR until it's affecting schedule is a good way to dodge review.


But you got to give it to him, he does seem to be really good at catching deficiencies early that may accumulate into serious bugs or security vulnerabilities in the future. Sure, being an asshole is not OK, but being assertive is a must for a person in his position.

I’m very much a Linus defender; the kernel is more important than people’s feelings and his approach has maintained a high level of quality.

>Is political correctness necessary to have a thriving community / open source project?

Not at all, but this reads like childishness rather than political correctness.


What does any of this have to do with political correctness?

Not being a dick is quite a different thing than political correctness.

Makes me wonder how much of the mass strife and confusion of the internet is simply down to people not knowing what the words they use mean?


> Makes me wonder how much of the mass strife and confusion of the internet is simply down to people not knowing what the words they use mean?

Or being intentionally misled about them. People who enjoy being awful in various ways have a vested interest in reframing the opposition as "political correctness" in order to make it easier to dismiss or ridicule. The vast majority of usage of the term "political correctness" is in dismissing or ridiculing it.


It has everything to do with political correctness. Honest, blunt language is now de-valued in favor of passive, sanitized, AI-slop language that no longer conveys important information. The revised post forgot to mention the critical point of the bloated, buggy Javascript framework because it would offend someone here.

Prefer a blunt, honest dick over a passive, polite liar anyday.


Hmm I don’t think any of the revisions are about being PC but rather not making juvenile comments. Linus has definitely made a lot of harsh inflammatory comments to others, I don’t think it’s the right thing to do and shows his character but at the same time for me at least it comes across as a smart pompous jerk who says things in the wrong way but at least usually has some kernel of a point.

The Zig comments come off as highly immature, maybe because they are comments made to unknown people; calling folks losers or monkeys just crosses some line to me. Telling someone to stfu is not great, but calling groups of people monkeys feels worse.


Calling the devs of Actions "monkeys" has nothing to do about being un-PC or not. It's just plain rude and deeply insulting. It has no place in an a public announcement such as this.

Also, Torvalds was rightfully called out on his public behaviour and he's corrected himself.


Linus famously was quite strict and cursed quite a bit when somebody pissed him off with stupidity.

He's not exactly a role model when it comes to communication.

GitHub can suck my ass, I think this is the most suitable feedback to them

I've spent more than a month trying to delete my account on GitHub, still couldn't do it


Perhaps he should be. This idea that we should tolerate terrible things and only respond to them politely seems to produce bad outcomes, for some mysterious reason.

Any analysis of Github's functionality that begins and ends with blaming individuals and their competency is deeply mistaken while being insulting. Anyone who has ever worked at a large company knows exactly how hard it is for top performers to make changes, and it's not difficult because the other people are stupid. At least in my experience, almost everyone holding this "they must be stupid" opinion knows very little about how large organizations make decisions and how incentives at different levels of an org chart lead to suboptimal decisions and results. I would agree with you that being overly polite helps no one, but being correct does, and what they initially wrote isn't even right and it's also insulting. There's no value in that.

But should you care about MS's internals?

Product is useless, you move along. Save your compassion for those actually needing it.


Because people would rather Microsoft fixed it than move.

Moving is painful but I'm sure they didn't move without asking/waiting for MS to fix it.

IDK being able to produce a good product in a corpo environment sure sounds like a competency issue.

> how hard it is for top performers to make changes

then you're not a top performer anymore?

seems pretty straightforward

> they must be stupid

one can be not stupid and still not competent


I am not convinced of this. Being rude and insulting someone’s intelligence is rarely a good trait. Linus got away with it due to the unique circumstances: leader of an incredibly popular open source project and a gatekeeper to a lot of access to it.

My argument against how he handles things has always been that while it may seem effective, we do not know how much more effective he would be if he did not curse people out for being dumb fucks.

And it doesn’t seem like this is a requirement for the job: lots of other project leaders treat others with courtesy and respect and it doesn’t seem to cause issues.

The reality is that it is easy to wish more people were verbally abusive to others when it isn’t directed at you. But soon as you are on the receiving end of it, especially as a volunteer, there is a greater than not chance that you will be less likely to want to continue contributing.


I think this is a good way to put it and I agree with it. Linus is a jerk and I would never want to work with him. Doubly so with zig maintainers who call other groups of people losers or monkeys. Shows a clear lack of maturity and ability to think.

Eh. Linus has a long history of abusive behavior towards other Linux contributors but also apparently apologized for it and started amending his ways. The Zig person I do not know by reputation, let alone in person. One post that he later chose to amend based on feedback is not enough for me to pass that kind of judgement. If anything, the fact that he updated it shows the opposite of lack of maturity. Adults can get frustrated. What they do with it is what matters.

Adults don’t call people losers or monkeys on social media. I am not passing judgement, it is simply not acceptable.

Really? You can’t think of any circumstances when it would be appropriate?

More to the point, if someone does it once and then stops, should we exclude this person from society forever?

Remember that only the Siths deal in absolutes.


Zero clue what your point is so please help me understand.

I was agreeing with your stance and adding my own anecdote that the way those posts were originally formatted is a turnoff. Not people I would want to work with. If you do, that’s fine. This is not Star Wars, and it's simply my own choice, as it is everyone else's.

I also cannot think of a time in my adult life I wanted to call out a group of people as losers or monkeys in public.


My point is that Linus and the Zig guy are in different categories in my mind. I think it is a bit naive to lump them into the same category.

I would definitely classify the tiki torch wielding white nationalists as losers publicly, for example. In fact I have a hard time thinking of a better term for them. It could also apply to the fairly famous liar and criminal, the disgraced Congressman George Santos. Or any person who decides to flash kids at a playground, or beats his wife and children.

I think the Zig guy was a little over-dramatic with his initial post. He did change his mind, so in my book that's better than not. Linus did too, just after many years of bad behavior. My point is that your replies were painting the world with only black and white and there is a lot of gray area in between. Sometimes public shame is a valid way to do discourse. Often times it isn't. But it's not an "always" or "never" thing.


I did not realize we were lumping Microsoft engineers alongside white nationalists and pedos. Sure folks like that I can see people using descriptions like that.

We were not, or at least I was not.

> I also cannot think of a time in my adult life I wanted to call out a group of people as losers or monkeys in public.

I guess that makes this your first time:

> Sure folks like that I can see people using descriptions like that.

All in all I think we generally agree that being respectful is better than being rude. And that some people who do not have respect also do not deserve respect. Shall we just leave it at that?


Then stop replying if you want to leave it at that? I have only agreed with your original statement and then you keep questioning my opinion. You are trying to pick over my words for no reason. Note I said I can see people using that language. I did not say myself. And of course why would I even think about pedos in the context of rude comments made to an unknown group of Microsoft engineers.

My opinion, I have no desire to work with people that write comments calling other engineers monkeys or losers. I have seen that behavior before and it’s not people I like to work with.


The problem with that is always people.

Because one person judges that "terribleness" before being entitled to flame, and changes to that person influence their ability to make that assessment objectively.

Say, when their project becomes popular, they gain more power and fame, and suddenly their self-image is different.

Hence it usually being a more community-encouraging approach to keep discussions technical without vitriol.

Flaming is unnecessarily disruptive, not least because it gives other (probably not as talented) folks a license to also put their worst impulses to text.


It is politeness, not political correctness.

He represented his community to the world with insulting words. In the higher ranks of IT, it is all about communication. With his lack of proper words he showed these leaders, who decide on the adoption of Zig, that they do not want to communicate with him or the Zig community.

As a project/tech leader he is in the business of communications. He recognized this. See link in the article.


there's a big gulf between being politically correct and not being a jerk. In this case the community reps can present their concern, motivation and decision without insulting people. It's also not a smart or valid comment; give me any organization over 100 people and I can find something deeply flawed that it has produced or a very bad decision. Do I then tag everybody who currently works for that organization as "a brain-dead idiot" or similar?

> "eager to inflict"

Eager to do what? If it sucks it sucks, but that's a very childish way to frame it, no one did anything on purpose or out of spite. That kind of silliness hurts the image of the project. But bad translation I suppose.


One can avoid being an asshole even if it is not strictly speaking necessary. In fact, if you are an asshole when it is not necessary, then you are an asshole.

Not calling other software engineers 'losers' is not about political correctness. They're "losers" because they take their product on a path you don't like? Come on. Linus can be emotional in his posts because Linux is his "child".

That's only here, he has been doubling down on Mastodon

https://mastodon.social/@andrewrk


that attitude has approached, and continues to approach, an entire bloodless coup of the largest economy on the planet.

The normalization, in fact, has been quite successful. The entire silicon valley has tacitly approved of it.

You act like people aren't being rewarded for this type of behavior.


They didn't make any comment on effectiveness.

That's crazy! He should've left the original.

Honestly, why do so many people, especially in the western hemisphere, act so shocked when somebody speaks their mind openly?

To me this kind of communication says it comes from a real person who has real experiences, not the marketing department, and is understandably angry at the people who make his life worse. And it's natural to insult those people. Insults are a signal, not noise. They signal something is wrong and people should pay attention to it.

I hear criticisms about being unprofessional and the like. So what? I don't wanna live in a world where everything everyone says is supposed to be filtered to match some arbitrary restrictions made up by people who more often than not can't do the work themselves.

Almost all of the actually competent people I personally know speak like this.

They can't stand those dragging us down through incompetence. They get angry when something that should work doesn't. They are driven by quality and will not be silent when it's lacking. If somebody fucked up, they will tell them they fucked up and have to fix it.

And I much prefer that approach.


I say this as someone who has been cautioning about Microsoft's ownership of GitHub for years now... but the Zig community has been high drama lately. I thought the Rust community had done themselves a disservice with their high tolerance of drama, but lately Zig seems to me to be more drama than even Rust.

I was saddened to see how they ganged up to bully the author of the Zig book. The book author, as far as I could tell, seems like a possibly immature teenager. But to have a whole community gang up on you with pitchforks because they have a suspicion you might use AI... that was gross to watch.

I was already turned off by the constant Zig spam approach to marketing. But now that we're getting pitchfork mobs and ranty anti-AI diatribes it just seems like a community sustaining itself on negative energy. I think they can possibly still turn it around but it might involve cleaning house or instituting better rules for contributors.


> seems like a possibly immature teenager.

What makes you say that? Couldn’t it be an immature adult?

> because they have a suspicion you might use AI

Was that the reason? From what I remember (which could definitely be incomplete information) the complaint was that they were clearly using AI while claiming no AI had been used, stole code from another project while claiming it was their own, refused to add credit when a PR for that was made, tried to claim a namespace on open-vsx…

At a certain point, that starts to look outright malicious. It’s one thing to not know “the rules” but be willing to fix your mistakes when they are pointed out. It’s an entirely different thing to lie, obfuscate, and double down on bad attitude.


I just want to point out that even if you are correct, as a Zig outsider, none of this is obvious. The situation just looks bad.

I’m a Zig outsider. I gathered the context from reading the conversation around it, most of it posted to HN. Which is why I also pointed out I may have incomplete information.

If one looks past the immediate surface, which is a prerequisite to form an informed opinion, Zigbook is the one who clearly looks bad. The website is no longer up, even, now showing a DMCA notice.


The way these sorts of things look to outsiders depends on the set of facts that are presented to those outsiders.

Choosing to focus on the existence of drama and bullying without delving into the underlying reason why there was such a negative reaction in the first place is kind of part and parcel to that.

At best it's the removal of context necessary to understand the dynamics at play, at worst it's a lie of omission.


The claims of AI use were unsubstantiated and pure conjecture, which was pointed out by people who understand language, including me. Now it appears that the community has used an MIT attribution violation to make the Zigbook author a victim of DMCA abuse.

That doesn't look great to me. It doesn't look like a community I would encourage others to participate in.

> tried to claim a namespace on open-vsx

It seems reasonable for the zigbook namespace to belong to the zigbook author. That's generally how the namespaces work right? https://github.com/search?q=repo%3Aeclipse%2Fopenvsx+namespa... https://github.com/eclipse/openvsx/wiki/Namespace-Access. IMO, this up there with the "but they were interested in crypto!" argument. The zigbook author was doing normal software engineer stuff, but somehow the community tries to twist it into something nefarious. The nefariousness is never stated because it's obviously absurd, but there's the clear attempt to imply wrongdoing. Unfortunately that just makes the community look as if they're trying hard to prosecute an innocent person in the court of public opinion.

> At a certain point, that starts to look outright malicious.

Malicious means "having the nature of or resulting from malice; deliberately harmful; spiteful". The Zig community looks malicious in this instance to me. Like you, I don't have complete information. But from the information I have the community response looked malicious, punitive, harassing and arguably defamatory. I don't think I've ever seen anything like it in any open source community.

Again, prior to the MIT attribution claim there was no evidence the author of Zigbook had done anything at all wrong. Among other things, there was no evidence they had lied about the use of AI. Malicious and erroneous accusations of AI use happen frequently these days, including here on HN.

Judging by the strength of the reaction, the flimsiness of the claims and the willingness to abuse legal force against the zigbook author, my hunch is that there is some other reason zigbook was controversial that isn't yet publicly known. Given the timing it possibly has to do with Anthropic's acquisition of Bun.


> The claims of AI use were unsubstantiated and pure conjecture

It seemed that way to me at the start too, but it quickly became apparent. Even the submitter thought so after going through the git history.

https://news.ycombinator.com/item?id=45952436

> It seems reasonable for the zigbook namespace to belong to the zigbook author. That's generally how the namespaces work right?

Yes. Bad actors try to give themselves legitimacy by acquiring as many domains and namespaces as quickly and as soon as they can with as little work as possible. The number of domains they bought raised flags for me.

> IMO, this up there with the "but they were interested in crypto!" argument.

No idea what you’re talking about. Was the Zigbook author interested in cryptocurrency and criticised for it?

> The nefariousness is never stated because it's obviously absurd, but there's the clear attempt to imply wrongdoing.

That’s not true. It was stated repeatedly and explicitly.

https://zigtools.org/blog/zigbook-plagiarizing-playground/

Them stealing code, claiming it as their own, refusing to give attribution and editing third-party comments to make it seem the author is saying they are “autistic and sperging” is OK with you?

https://news.ycombinator.com/item?id=46095338

You really see nothing wrong with that and think criticising such behaviour is flimsy and absurd?

> I don't think I've ever seen anything like it in any open source community.

I’m certainly not excusing bad behaviour, but this wouldn’t even fall into the top 100 toxic behaviours in open-source. Plenty of examples online and submitted to HN over the years.

> Malicious and erroneous accusations of AI use happen frequently these days, including here on HN.

I know. I’m constantly arguing against it especially when I see someone using the em-dash as the sole argument. I initially pushed back against the flimsy claims in the Zigbook submission, but quickly the evidence started mounting and I retracted it.

> Given the timing it possibly has to do with Anthropic's acquisition of Bun.

I don’t buy it. The announcement of the acquisition happened after.


I think if you take a step back and try to fight against confirmation bias you'll see that the arguments you're making are very weak.

You are also moving the goal posts. You started with it being sketchy to claim a namespace; now you're moving to it being sketchy to own domains. Of course people are going to buy variants on their domains.

This is easily in the top 5 most toxic moments in open source, and off the top of my head seems like #1. For all you know this is some kid in a country with a terrible job market trying to create a resource for the community and get their name out there. And the Zig community tried to ruin his life because they whipped themselves into a frenzy and convinced themselves there were secret signs that an AI might have been used at some point.

I've never seen an open source community gang up like that to bully someone based on absolutely no evidence of any wrong doing except forgetting to include an attribution for 22 lines of code. That's the sort of issue that happens all the time in open source and this is the first time I've seen it be used to try to really hurt someone and make them personally suffer. The intentional cruelty and the group of stronger people deliberately picking on a weaker person is what makes it far worse to me than the many other issues in open source of people behaving impolitely.

This is an in-group telling outsiders they're not welcome and, not only that, if we don't like you we'll hurt you.

And yes there have been repeated mentions of their interest in crypto, including in this thread.


> You are also moving the goal posts. You started with it was sketchy to claim a namespace now you're moving to it's sketchy to own domains.

Please don’t distort my words. That is a bad faith argument. I never claimed it was “sketchy to claim a namespace”, I listed the grievances other people made. That’s what “From what I remember (…) the complaint was” means. When I mentioned the domains, that was something which looked fishy to me. There’s no incongruence or goal post moving there. Please argue in good faith.

> For all you know this is some kid in a country with a terrible job market trying to create a resource for the community and get their name out there.

And for all you know, it’s not. Heck, for all I know it could be you. Either way it doesn’t excuse the bad behaviour, which is plenty and documented. All you have in defence is speculation which even if true wouldn’t justify anything.

You may not have seen this as I added the context after posting, so I’ll repeat it here:

> Them stealing code, claiming it as their own, refusing to give attribution and editing third-party comments to make it seem the author is saying they are “autistic and sperging” is OK with you?

> https://news.ycombinator.com/item?id=46095338

> You really see nothing wrong with that and think criticising such behaviour is flimsy and absurd?

Please answer that part. Is that OK with you? Do you think that is fine and excusable? Do you think that’s a prime example of someone “trying to create a resource for the community”? Is that not toxic behaviour?

Criticise the Zig community all you want, but pay attention to the person you’re so fervently defending too.


> I was saddened to see how they ganged up to bully the author of the Zig book. The book author, as far as I could tell, seems like a possibly immature teenager. But to have a whole community gang up on you with pitch forks because they have a suspicion you might use AI... that was gross to watch.

Your assumption is woefully incorrect. People were annoyed when the site he released, which was mostly written by AI, was explicitly and repeatedly claimed to be AI free. But annoyance isn't why he was met with the condemnation he received.

In addition to the repeated lies, there's this account's long history of typosquatting various groups, the many, many crypto projects, the number of cursor/getcursor accounts, the license violation and copying of code without credit from an existing community group (with a reputation for expending a lot of effort just to help other Zig users), and the abusive personal attack of editing the PR asking for nothing but crediting the source of the code he tried to steal. All the while asking for donations for the work he copied from others.

All of that punctuated by the fact he seems to have plans to typosquat Zig users given he controls the `zigglang` account on github. None of this can reasonably be considered just a simple mistake on a bad day. This is premeditated malicious behavior from someone looking to leech off the work of other people.

People are mad because the guy is a selfish asshole, who has a clear history of copying from others, being directly abusive, and demonstrated intent to attempt to impersonate the core ziglang team/org... not because he dared to use AI.


I agree partially.

I do think that it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not. And honestly, who cares; either it is a good resource for learning or it’s not. You have to decide that for yourself and not based on whether AI helped writing it.

However I think some of the critique was because he stole the code for the interactive editor and claimed he made it himself, which of course you shouldn’t do.


You can correct me if I'm wrong, but I believe the actual claim was that Zigbook had not complied with the MIT license's attribution clause for code someone believed was copied. MIT only requires attribution for copies of "substantial portions" of code, and the code copied was 22 lines.

Does that count as substantial? I'm not sure because I'm not a lawyer, but this was really an issue about definitions in an attribution clause over less code than people regularly copy from stack overflow without a second thought. By the time this accusation was made, the Zigbook author was already under attack from the community which put them in a defensive posture.

Now, just to be clear, I think the book author behaved poorly in response. But the internet is full of young software engineers who would behave poorly if they wrote a book for a community and the community turned around and vilified them for it. I try not to judge individuals by the way they behave on their worst days. But I do think something like a community has a behavior and culture of its own and that does need to be guided with intention.


> You can correct me if I'm wrong, but I believe the actual claim was that Zigbook had not complied with the MIT license's attribution clause for code someone believed was copied. MIT only requires attribution for copies of "substantial portions" of code, and the code copied was 22 lines.

Without including proper credit, it is classic infringement. I wouldn't personally call copyright infringement "theft", though.

Imagine for a moment, the generosity of the MIT license: 'you can pretty much do anything you want with this code, I gift it to the world, all you have to do is give proper credit'. And so you read that, and take and take and take, and can't even give credit.

> Now, just to be clear, I think the book author behaved poorly in response

Precisely: maybe it was just a mistake? So, the author politely and professionally asks, not for the infringer to stop using the author's code, but just to give proper credit. And hey, here's a PR, so doing the right thing just requires an approval!

The infringer's response to the offer of help seemed to confirm that this was not a mistake, but rather someone acting in bad faith. IMO, people should learn early on in their life to say "I was wrong, I'm sorry, I'll make it right, it won't happen again". Say that when you're wrong, and the respect floods in.

> By the time this accusation was made, the Zigbook author was already under attack

This is not quite accurate, from my recollection of events (which could be mistaken!): the community didn't even know about it until after the author respectfully, directly contacted the infringer with an offer to help, and the infringer responded with hostility and what looked like a case of Oppositional Defiant Disorder.


> I do think that it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not.

The bigger issue is that they claimed no AI was used. That’s an outright lie which makes you think if you should trust anything else about it.

> And honestly who cares, either it is a good resource for learning or it’s not. You have to decide that for yourself and not based on whether AI helped writing it.

You have no way of knowing if something is a good resource for learning until you invest your time into it. If it turns out it’s not a good resource, your time was wasted. Worse, you may have learned wrong ideas you now have to unlearn. If something was generated with an LLM, you have zero idea which parts are wrong or right.


I agree with you. It is shitty behavior to say it is not AI written when it clearly is.

But I also think we at this point should just assume that everything is partially written using AI.

For your last point, I think this was also a problem before LLMs. It has of course become easier to fake some kind of ethos in your writing, but it is also becoming easier to spot AI slop when you know what to look for, right?


> I agree with you. It is shitty behavior to say it is not AI written when it clearly is.

> But I also think we at this point should just assume that everything is partially written using AI.

Using "but" here implies your 2nd line is a partial refutation to the first. No one would have been angry if he'd posted it without clearly lying. Using AI isn't what pissed anyone off, being directly lied to (presumably to get around the strict "made by humans" rules across all the various Zig communities). Then there was the abusive PR edits attacking someone that seems to have gotten him banned. And his history of typosquatting, both various crypto surfaces, and cursor, and the typosquatting account for zigglang. People are mad because the guy is a selfish asshole, not because he dared to use AI.

Nothing I've written has been assisted by AI in any way, and I know a number of people who do and demand the same. I don't think it's a reasonable default assumption.


> turned off by the constant Zig spam approach to marketing

? what? from my experience zig marketing is pretty mid. it is nowhere near the level of rust.

heck, rust evangelism strikeforce made me hate rust and all the people who promote it, even now.


You're assuming they are a teenager but you don't know. They used code without attribution and when asked to do so, they edited the comment and mocked the requestor. And you're calling the zig community the bully? They lied about not using AI. This kind of dishonesty does not need to be tolerated.

Disservice? Rust is taking over the world while they still have nothing to show basically (Servo, the project Rust was created for, is behind ladybird of all things). Every clueless developer and their dog thinks Rust is like super safe and great, with very little empirical evidence still after 19 years of the language's existence.

Zig people want Zig to "win". They are appearing on Hacker News almost every day now, and for that purpose this kind of thing matters more than the language's merits themselves. I believe the language has a good share of merits, far more than Rust, but it's too early and not battle tested enough to get so much attention.



FWIW, all of those links compare Rust to languages created before 1980, and are all projects that are largely and unusually independent of the crates ecosystem and where dynamic linking does not matter. If you're going to use a modern language anyway, you should do due diligence and compare it with something like Swift, as the ladybird team is doing right now, or even a research language like Koka. There is a huge lack of evidence for Rust vs other modern languages and we should investigate that before we lock ourselves into yet another language that eventually becomes widely believed to suck.

Here's what Microsoft decided after a comparison to C#: https://www.theregister.com/2024/01/31/microsoft_seeks_rust_...

Microsoft isn't going to abandon C#, it's just using the right tool for the right job. While there are certainly cases where it is justified to go lower level and closer to the metal, writing everything in Rust would be just as dumb as writing everything in C# or god forbid, JS.

I'm not claiming otherwise. I'm just saying that saving some hundreds of millions of dollars on compute is what Rust as a language can enable.

This. I was shocked when I read the first version. I get it if you’re an influencer, but for a programming language, people need to expect you can manage your emotions and be objective.

And discussion about this not-so-important part of the statement has started once again ...

that's a long time between edits. as a single contributor to my own posts, i usually achieve a similar iteration within minutes. did they have to have a board meeting in between the changes? lovely conservative process. "rookies", love it

I like the first version the best.

More and more people should call out bloated buggy JS frameworks lol

Isn't github a rails app that heavily uses server side rendering?

Not any longer. The rewrite which destroyed performance uses ReactJS https://news.ycombinator.com/item?id=44799861

What is terrible is that new developers think this has been the usual poor state of things... this is why Zig & others moving to alternate platforms is good.


I'll be honest, I don't use github often. So if they're wrong, well, they fucked up in their complaint that could be redirected to one of many other websites instead.

fair enough! To be clear - a rails app and a bloated js app are not mutually exclusive. From my observations though, github feels slow because it feels slow, not because of js shittiness

Nice that they cleaned it up, but Andrew has a pattern of coming across as even less mentally stable than the Notepad++ dev, which isn't a good look for a BDFL. For example, he randomly broke down in tears during a presentation not long ago.

this Corporate Americanism of only positivity and fake smiles is exactly how we end up with enshittified products, because no one is ever called out for it. If the feedback is too soft, it just gets swept under the rug.

we need less self censorship, not more.


No, the edits are better. The original message made unwarranted assumptions, and used intentionally inaccurate language. That's objectively bad communication.

It's not a binary choice between insults (escalates conflict, destabilizes rational decision making) vs hiding your opinions. That's what the word tact is for. It's simply, quite literally, a skill issue if someone can't find a middle ground between those two failure modes.


Fully agreed. I can't upvote yet (not enough karma), but corpospeak is IMO never the solution unless you're in court or something.

was github ever ~not kinda buggy?

blaming the framework for low quality software is a skill issue

The original version is fine.

GitHub is critical infrastructure for many projects and pushing AI slop is not acceptable.

They have the money to pay for quality development time.


You missed the monkeys. That was my highlight. My team was called "code monkeys" once.

What is the point of this post? To shame the author?

I, for one, welcome our Next Linus Torvalds.

Reads like an official White House statement[0].

[0] https://www.whitehouse.gov/articles/2025/03/yes-biden-spent-...


this seems unfair; I didn't see any terrible (both concept and execution) AI generated art accompanying their statement here.

The fact that three revisions were needed to tone down inflammatory language could raise questions about impulse control in leadership decisions (regularly prioritizing ideological positions over pragmatic stability). This is notable given that Zig has been in development since 2015 and remains at version 0.15.1 as of August 2025.

If this obnoxious and seemingly ubiquitous platitude were actually true, then torture would be a moral duty. Enforced poverty would be a moral duty. Governments would be obligated to regularly arrange mass starvations for their citizens.

I don't believe it. Personally, I think spiritual weakness and religious corruption are more likely culprits -- and not necessarily the type of spirituality or religion that you might be thinking of.

Either way, "good times" is a dangerous place to put the blame. It relieves us of responsibility for our own catastrophes (it was the good times' fault), and it makes us suspicious of prosperity and happiness.

Good times are not evil. We don't need to shun them, provided we keep strengthening the better angels of our nature.


I've been considering Cloudflare for caching, DDoS protection and WAF, but I don't like furthering the centralization of the Web. And my host (Vultr) has had fantastic uptime over the 10 years I've been on them.

How are others doing this? How is Hacker News hosted/protected?


> Teams would chomp at the bit to do it to boost their own performance.

This assumes teams care more about performance than comfort and convenience. Many teams care about both. And which one wins out can vary. It can even change over time.

The question is how to incentivize what, and what methods are the most effective at doing that for a particular team at a given point in time.


> And which one wins out can vary. It can even change over time.

Well, that's a good reason to let each team pick. Teams that care about performance will eventually do a lot better. The claim with these mandates is that the benefits are "obvious". So if they're so obvious, they should be visible fairly quickly.


I'm a solopreneur. Yesterday, in 90 minutes, I developed an enhancement that would have taken a full day before. I did it by writing a detailed Markdown spec and feeding it to Copilot Agents running Sonnet 4.5. Copilot worked on it on a server somewhere while I ate lunch.

When I returned, I reviewed the PR. It wasn't perfect; one of the dozen or so tests that Copilot had generated didn't actually test what it purported to. So, I fixed that by hand. It wasn't a big deal. It was still quicker and took less cognitive effort than writing the entire PR myself.

I'll confess that part of me is pleased to be dismissed with epithets like "AI-pilled," because properly using LLMs is an enormous competitive advantage. The more negative sentiment around them there is, the less likely the competition is to be inclined to learn and master them.


it seems like one of the most important parts of being an AI booster is getting to feel better than everyone else


My main reasons for using Windows right now are:

- Davinci Resolve

- Adobe suite

- AutoHotkey scripts, lots of them

- Microsoft Office, mainly PowerPoint, Excel and Word for creating and interacting with other companies' docs. Libre/OpenOffice mangled them/were missing features I depend on

- Issues with my laptop's Nvidia card (screen tearing etc.) last time I tried to switch, and rabbit holes that I don't have time for anymore (solopreneur)

That said, I would love to switch back. I loved rofi [0] last time, for example.

Can anyone speak to the above? What's the status of running Windows apps like Adobe, Resolve, Office, for instance? Or AutoHotkey or equivalent?

0: https://github.com/davatorium/rofi


About AutoHotKey: you can do similar stuff as long as you are using X11, since there are various utilities for it, such as xdotool[0]. There is even an AutoHotKey-for-Linux project[1] (it also needs X11; the author did try to port it to Wayland but gave up). For Wayland there are some alternatives like ydotool[2], but the core protocol isn't particularly friendly to such automation. (Since ydotool uses a daemon to inject events, AFAIK it actually works with anything, not just Wayland, but on the other hand it only provides a basic tiny subset of xdotool's commands.)

[0] https://github.com/jordansissel/xdotool

[1] https://github.com/phil294/AHK_X11

[2] https://github.com/ReimuNotMoe/ydotool
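As a rough illustration of the AHK-style automation xdotool enables, here's a minimal Python sketch. It only builds the command lines (an installed xdotool binary and a running X11 session are assumed, so nothing is executed here); the subcommand names and `--delay` flag are real xdotool options:

```python
def xdotool_type(text, delay_ms=12):
    """Build an xdotool invocation that types `text` into the focused window."""
    return ["xdotool", "type", "--delay", str(delay_ms), "--", text]

def xdotool_hotkey(*keys):
    """Build an xdotool invocation that sends a key chord, e.g. ctrl+shift+t."""
    return ["xdotool", "key", "+".join(keys)]

# Under X11 with xdotool installed, you would run these via subprocess.run(...),
# e.g. subprocess.run(xdotool_hotkey("ctrl", "shift", "t"))
```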


I suspect the problem they were indicating with “AutoHotKey scripts, lots of them” is that they just have a lot of scripts they’d need to convert. I get it—even switching to a new WM or distro can be a real pain.


Well, they did mention "AHK or equivalent" so it sounds like converting them isn't out of the question.


Things that used to be prohibitive are made much easier with AI these days. Especially tasks like this that do something fairly small and isolated and are easy to test.


There is nothing like AHK. All the mentioned tools are toys in comparison.


I wish there was something like Keyboard Maestro for Windows or Linux. It seems like there isn’t. I’d love to be corrected, though!

From what I can tell, AHK can’t do (m)any of the cool things that KM does, like “Click at Found Image”, “Set the Find Pasteboard”, “Prompt for Screen Rectangle”, “Stream Deck Show OK”, “Increase Song Rating by Half a Star”, “OCR Image/Screen”, “Paste from Named Clipboard”, or many other useful actions. Is there any Windows or Linux application that can?


AHK can do all that, but you need to program a script; there is no out-of-the-box solution.

For example: https://www.autohotkey.com/boards/viewtopic.php?t=134045


Thank you, good to know AHK is able to do part of what Keyboard Maestro can, it just requires more work and troubleshooting for every script/macro.

That example script also doesn’t let you simply drag a marquee selection to choose the image to find, you have to provide a file path to an image that already exists. That’s not

There must be a market for something more user friendly, at least on Windows.


> That’s not… an option for the workflow it’s currently being used in. I’ll hunt for more user-generated scripts to modify on that forum. Thanks for your response.


Agreed. AHK and Windows’ amenability to such things is an important reason why Windows is still my preferred GUI by far.


ydotool helps bridge the gap in wayland. xdotool replacements are even more essential since wayland strips away most of the hooks into windows.


Davinci Resolve has official support for Linux


Oh wow, thanks for this. I had filed it under "Windows and Mac only" in my head for some reason. Now I see that it was originally Linux only!?

Amazing that this free-to-download application supports Linux when Adobe doesn't. Or maybe not so amazing given their different approaches.



- Davinci Resolve

Has native support.

- Adobe suite

- Microsoft Office

https://www.winboat.app But beta.

- AutoHotkey scripts, lots of them

I'm afraid there is no easy way. https://pyautogui.readthedocs.io/en/latest/

- Issues with my laptop's Nvidia card

Get AMD.

> I don't have time for anymore (solopreneur)

Fair. A second PC to play around with from time to time is probably the best option in this case. But I fully understand that, unless it's a hobby, investing a lot of time makes little sense.


Linux for me is all about customization and control, particularly of hardware, which you'd usually do for optimization (performance, workflow, latency, stability). That's fun if you care about optimization and efficiency, but for "good enough / I'm used to it / I'm a satisfied paying customer" I suppose there's no reason to investigate or take the risk. The market has poured loads of capital into satisfying the PC multimedia use-case.

I'd suspect there are probably versions of all of those that have been made to function through WINE.

If you're curious, it's very easy to use it as a hypervisor and pull out what you can, though IOMMU/SR-IOV might be tricky.

Alternatively, checking if Blender/GIMP service your use cases wouldn't even require switching...

AutoHotKey has been solved a lot of different ways, for sure.

But yeah, granular detailed control over your hardware is still the primary use-case for Linux, so if you view bad defaults, annoying install procedures, occasional show stopping bugs a hindrance rather than an opportunity, maybe it's not a strong candidate.


I hear that. I enjoy that kind of tinkering; I just have too much on my plate with my business to go as deep into it as I used to. But I'm still interested in Linux, if only because it's a much-needed third option. I've been on and off it as a daily driver over the years.

I'm guessing others here who are primarily on Windows can relate to this. We've been disappointed with what Apple and Microsoft are doing, and we want, not necessarily more customization of our OS, just less interference.


I don't use it as much nowadays, but https://github.com/ublue-os/aurora (KDE desktop + automatic updates + baked-in Nvidia drivers) got me an as-painless-as-possible Nvidia experience. On a laptop I even got the Nvidia GPU to power down while idle, which had been a huge time sink when I tried to figure it out on my own. I didn't notice any screen tearing personally, but that probably depends on what applications/workflows one has.

In terms of office formats, OnlyOffice iirc has the best compatibility. It's easy to install via Flatpak (I really enjoy this move in desktop Linux because now I can easily remove network access from apps / set the permissions I want).

The only thing that seems insurmountable is probably Adobe; not sure how much of a dealbreaker that is.


> Issues with my laptop's Nvidia card (screen tearing etc.)

I'm running Ubuntu on a laptop with a 3070m. I don't have any issues like this. I did have issues related to using an external monitor but they all seemed to resolve when I switched from Gnome Wayland to Gnome X11.


1. office.com

2. Google drive/docs/*

3. Hacky office on Linux work around - several found on github

Davinci Resolve seems to run faster on Linux.


Yes, and in fact, Lena's response is part of the dialog. And its dismissiveness is telling. Not only does it reflect her attitude toward her constituents, it also exposes her tacit premise that digital communications are somehow unreal.

It's as if, for her, only phone calls, speeches, or handwritten letters would be enough to start a dialog. She seems to be under the misapprehension that digital communication is something to which norms and laws and, fundamentally, rights don't apply. Which is a misguided and dangerous belief.


Textbook CDU conduct. The best democracy money can buy!!1


Yes, the article's insistence that anyone would have fallen for the phish, and that anyone who disagrees is simply "wrong," is unfortunate. My old corporate phishing training drilled it into my head pretty effectively that you don't follow links in emails if the emails aren't direct responses to actions you've just taken: registering an account, resetting a password, and so forth.

To this day, I don't follow links in other kinds of emails. I mouse over the link to view the domain as a first step in determining how seriously to take the email. If the domain appears to match the known-good one, I copy the link and examine the characters to see if any Unicode lookalikes have been employed.

If the domain seems legitimate, or if I don't recognize it but the email is so convincing that I suspect the company truly is using a different domain (my bank has done this, frustratingly), I still don't click the link. I log in to my account on the known-good domain -- by typing it by hand into the browser's address bar -- and look for notifications.

If there are no notifications, then I might contact the company about the email to verify its authenticity.

If anyone reading thinks that seems like a lot of work, I agree with you! It stinks. But I humbly submit that it's necessary on today's Internet. And it's especially necessary if you're in charge of globally used software libraries.

To adopt the tone of the article's author, if they aren't willing to do that, they're wrong, and they're going to keep getting phished.
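The Unicode-lookalike check described above can be partially automated. This is an illustrative sketch, not a complete defense: it only flags non-ASCII characters and punycode labels, which catches the common homograph tricks but not every phishing technique:

```python
import unicodedata

def suspicious_domain(domain):
    """Return a list of reasons a domain looks like a possible homograph attack."""
    reasons = []
    for label in domain.split("."):
        # Punycode labels (xn--...) are how internationalized names appear on the wire.
        if label.startswith("xn--"):
            reasons.append(f"punycode label: {label}")
    for ch in domain:
        # Any non-ASCII character in a familiar-looking domain deserves a close look.
        if ord(ch) > 127:
            name = unicodedata.name(ch, "UNKNOWN")
            reasons.append(f"non-ASCII character {ch!r} ({name})")
    return reasons
```

For example, `suspicious_domain("gіthub.com")` (with a Cyrillic "і") returns a warning about the lookalike character, while the genuine domain returns an empty list.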


"Anyone" is a literal stretch, but "almost anyone" seems pretty true. How many people do you think follow your very security-minded but quite long-winded practice? 1 in 1,000? 1 in 10,000? 1 in 100,000? Less?

I think the vast vast majority of people would have fallen for it, it's a decent looking message, it has a sense of urgency and the domain doesn't look wildly wrong. Devs in theory might be more security aware, but also we work with a lot of different apps, systems and sites - mixed domains, weird deep-links, redirects we've all used (and possibly even deployed) such setups.

Add in that most of my email is now through corporate Outlook, so domains aren't very visible (it's all nestled behind "safelinks"), and personal email is often on a phone, so mousing over a link just isn't muscle memory anymore.

I think I'd be suspicious at the request, but possibly have clicked to see more, especially with the threat things might stop working soon. Maybe NPM/package platforms should be pushing security training to their biggest maintainers like your old corporation did, but for now they don't and the idea that people should be more aware of the risk is sort of the point.

Almost anyone would have fallen for that; that's why almost all of us need to be reminded to think about this stuff more.


Thank you for implying I'm one in a million, but this just underscores why I avoid ecosystems like Node in favor of more top-down ones like .NET.

When a lone developer is untrained and doesn't follow best practices, as happened here, the community rushes to their defense on the grounds of empathy: "We would ALL make this mistake." But what if we wouldn't? What if we're trained and have certain safety protocols and procedures that we hold ourselves to?

This is why, at the end of the day, I run my company on a more centralized ecosystem, for all its warts. At least there's the promise of standard practices and procedures and training, whether it's always perfectly fulfilled or not. With a community-driven ecosystem, you don't have that: You're relying on the standards of the community, a vague and nebulous group that doesn't necessarily have any security sense, as you rightly pointed out. I realize not everyone has the luxury of making that choice due to career/financial constraints.


> Yes, the article's insistence that anyone would have fallen for the phish, and that anyone who disagrees is simply "wrong," is unfortunate

I think that's overstated. This phishing attempt had some obvious red flags that many people here would have noticed, sure. So not everyone is going to fall for this phish.

But the principle is better expressed as "Everyone will fall for a phish", somewhere. Even you. Human engineering is human engineering and we're all fallible. All that's required is that someone figure out which mistakes you're likely to make.


Agreed; the rich standard library from Microsoft is one of the many things I appreciate about C#.

The article's author seems to be under the misapprehension that standard libraries should or have to be community-driven like Node's and that falling for phishing attacks is inevitable over a long enough period of time. Neither notion is accurate.


I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?

Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.


> B2B SaaS

Perhaps that's part of it.

People here work on all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.


> People here work on all kinds of industries.

Yes, it would be nice to have a lot more context (pun intended) when people post how many LoC they introduced.

B2B SaaS? Then can I assume that a browser is involved and that a big part of that 200k LoC is the verbose styling DSL we all use? On the other hand, Nginx, a production-grade web server, is 250k LoC (251,232 to be exact [1]). These two things are not comparable.

The point being that, as I'm sure we all agree, LoC is not a helpful metric for comparison without more context, and different projects have vastly different amounts of information/feature density per LoC.

[1] https://openhub.net/p/nginx


I primarily work in C# during the day but have been messing around with simple Android TV dev on occasion at night.

I’ve been blown away sometimes at what Copilot puts out in the context of C#, but using ChatGPT (paid) to get me started on an Android app - totally different experience.

Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.

With Copilot I find sometimes it’s brilliant but it’s so random as to when that will be it seems.


> Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.

That has been my experience as well. We can control the surprising pick of APIs with basic prompt files that clarify what and how to use in your project. However, when using less-than-popular tools whose source code is not available, the hallucinations are unbearable and a complete waste of time.

The lesson to be learned is that LLMs depend heavily on their training set, and in a simplistic way they at best only interpolate between the data they were fed. If an LLM is not trained on a corpus covering a specific domain, then you can't expect usable results from it.

This brings up some unintended consequences. Companies like Microsoft will be able to create incentives to use their tech stack by training their LLMs with a very thorough and complete corpus on how to use their technologies. If Copilot does miracles outputting .NET whereas Java is unusable, developers have one more reason to adopt .NET to lower their cost of delivering and maintaining software.


  > when people post how many LoC they introduced.
Pretty ironic you and the GP talk about lines of code.

From the article:

  Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.

  “It’s a silly metric,” he said, because while organizations can use AI to write “infinitely more lines of code” it could be bad code.

  “Often times fewer lines of code is way better than more lines of code,” he observed. “So I'm never really sure why that's the exciting metric that people like to brag about.”
I'm with Garman here. There's no clean metric for how productive someone is when writing code. At best, this metric is naive, but usually it is just idiotic.

Bureaucrats love LoC, commits, and/or Jira tickets because they are easy to measure, but here's the truth: to measure the quality of code you have to be capable of producing said code at (approximately) said quality or better. Data isn't just "data" that you can treat as a black box and throw into algorithms. Data requires interpretation, and there's no "one size fits all" solution. Data is nothing without its context. It is always biased, and if you avoid nuance you'll quickly convince yourself of falsehoods. Even with expertise it is easy to convince yourself of falsehoods. Without expertise it is hopeless. Just go look at Reddit or any corner of the internet where armchair experts confidently talk about things they know nothing about. It is always void of nuance and vastly oversimplified. But humans love simplicity. We need to recognize our own biases.


> Pretty ironic you and the GP talk about lines of code.

I was responding specifically to the comment I replied to, not the article, and mentioning LoC as a specific example of things that don't make sense to compare.


  > the comment I replied to
Which was the "GP", or "grand parent" (your comment is the parent of my comment), that I was referring to.


> Bureaucrats love LoC

Looks like vibe-coders love them too, now.


...but you repeat yourself (c:


Made me think of a post from a few days ago where Pournelle's Iron Law of Bureaucracy was mentioned[0]. I think vibe coders are the second group. "dedicated to the organization itself" as opposed to "devoted to the goals of the organization". They frame it as "get things done" but really, who is not trying to get things done? It's about what is getting done and to what degree is considered "good enough."

[0] https://news.ycombinator.com/item?id=44937893


On the other hand, fault-intolerant codebases are also often highly defined and almost always have rigorous automated tests already, which are two contexts where coding agents specifically excel in.


I work on brain dead crud apps much of my time and get nothing from LLMs.


Try Claude Code. You’ll literally be able to automate 90% of the coding part of your job.


We really need to add some kind of risk to people making these claims to make it more interesting. I listened to the type of advice you're giving here on more occasions than I can remember, at least once for every major revision of every major LLM and always walked away frustrated because it hindered me more than it helped.

> This is actually amazing now, just use [insert ChatGPT, GPT-4, 4.5, 5, o1, o3, Deepseek, Claude 3.5, 3.9, Gemini 1, 1.5, 2, ...] it's completely different from Model(n-1) you've tried.

I'm not some mythical 140 IQ 10x developer and my work isn't exceptional so this shouldn't happen.


The dark secret no one from the big providers wants to admit is that Claude is the only viable coding model. Everything else descends into a mess of verbose spaghetti full of hallucinations pretty quickly. Claude is head and shoulders above the rest and it isn't even remotely close, regardless of what any benchmark says.


Stopping by to concur.

Tried about four others, and while to some extent I always marveled at the capabilities of the latest and greatest, I had to concede they didn't make me faster. I think Claude does.


As a GPT user, your comment triggered me wanting to search how superior is Claude... well, these users don't think it is: https://www.reddit.com/r/ClaudeAI/comments/1l5h2ds/i_paid_fo...


>As a GPT user, your comment triggered me wanting to search how superior is Claude... well, these users don't think it is: https://www.reddit.com/r/ClaudeAI/comments/1l5h2ds/i_paid_fo...

That poster isn't comparing models, he's comparing Claude Code to Cline (two agentic coding tools), both using Claude Sonnet 4. I was pretty much in the same boat all year as well; using Cline heavily at work ($1k+/month token spend) and I was sold on it over Claude Code, although I've just recently made the switch, as Claude Code has a VSCode extension now. Whichever agentic tooling you use (Cline, CC, Cursor, Aider, etc.) is still a matter of debate, but the underlying model (Sonnet/Opus) seems to be unanimously agreed on as being in a league of its own, and has been since 3.5 released last year.


I've been working on macOS and Windows drivers. Can't help but disagree.

Because of the absolute dearth of high-quality open-source driver code and the huge proliferation of absolutely bottom-barrel general-purpose C and C++, the result is... Not good.

On the other hand, I asked Claude to convert an existing, short-ish Bash script to idiomatic PowerShell with proper cmdlet-style argument parsing, and it returned a decent result that I barely had to modify or iterate on. I was quite impressed.

Garbage in, garbage out. I'm not altogether dismissive of AI and LLMs but it is really necessary to know where and what their limits are.


I'm pretty sure the GP referred to GGP's "brain dead CRUD apps" when they talked about automating 90% of the work.


I found the opposite - I am able to get a 50% improvement in productivity for day-to-day coding (a mix of backend and frontend), mostly in JavaScript, but it has helped in other languages too. You have to review carefully, though - and have extremely well-written test cases if you have to blindly generate or replace existing code.


> In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.

This is a false premise. LLMs themselves don't force you to introduce breaking changes into your code.

In fact, the inception of coding agents was lauded as a major improvement to the developer experience because they allow the LLMs themselves to automatically react to feedback from test suites, thus speeding up how code was implemented while preventing regressions.

If tweaking your code can result in breaking a million things, this is a problem with your code and how you worked to make it resilient. LLMs are only able to introduce regressions if your automated tests are unable to catch any of these million of things breaking. If this is the case then your problems are far greater than LLMs existing, and at best LLMs only point out the elephant in the room.
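The guardrail described above is just ordinary automated testing. As a trivial Python sketch (the function and the business rule here are made up for illustration), any agent-made edit to `apply_discount` has to keep these assertions green before it's accepted:

```python
def apply_discount(price, customer_tier):
    """Hypothetical business rule: gold customers get 10% off, others pay full price."""
    if customer_tier == "gold":
        return round(price * 0.90, 2)
    return price

# Regression tests: an agent (or human) refactoring apply_discount must keep these passing.
def test_gold_discount():
    assert apply_discount(100.0, "gold") == 90.0

def test_no_discount_for_others():
    assert apply_discount(100.0, "silver") == 100.0
```

The point is not the tests themselves but the feedback loop: a coding agent that runs the suite after each change can only "break a million things" to the extent the suite fails to cover them.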


Perhaps the issue is you were used to writing 200k lines of code. Most engineers would be aghast at that. Lines of code is a debit not a credit


I am now making an emotional reaction based on zero knowledge of the B2B codebase's environment, but to be honest I think it is relevant to the discussion on why people are "worlds apart".

200k lines of code is a failure state. At this point you have lost control and can only make changes to the codebase through immense effort, and not at a tolerable pace.

Agentic code writers are good at giving you this size of mess and at helping to shovel stuff around to make changes that are hard for humans due to the unusable state of the codebase.

If overgrown barely manageble codebases are all a person's ever known and they think it's normal that changes are hard and time-consuming and needing reams of code, I understand that they believe AI agents are useful as code writers. I think they do not have the foundation to tell mediocre from good code.

I am extremely aware of the judgemental hubris of this comment. I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion.


It really depends on what your use case is. E.g. of you're dealing with a lot of legacy integrations, dealing with all the edge cases can require a lot of code that you can't refactor away through cleverness.

Each integration is hopefully only a few thousand lines of code, but if you have 50 integrations you can easily break 100k loc just dealing with those. They just need to be encapsulated well so that the integration cruft is isolated from the core business logic, and they become relatively simple to reason about
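The encapsulation idea above can be sketched with a thin adapter layer. Everything here (class names, payload shapes, the legacy quirks) is hypothetical; the point is only that each integration's edge-case handling stays behind one interface instead of leaking into core business logic:

```python
from abc import ABC, abstractmethod

class InvoiceSource(ABC):
    """The interface the core business logic sees; integration cruft stays behind it."""
    @abstractmethod
    def fetch_invoices(self):
        ...

class LegacyAcmeAdapter(InvoiceSource):
    """Wraps one quirky legacy feed; its edge cases live here, not in core code."""
    def __init__(self, raw_rows):
        self.raw_rows = raw_rows

    def fetch_invoices(self):
        invoices = []
        for row in self.raw_rows:
            # Hypothetical legacy quirks: some rows are padding with no id at all,
            # and amounts arrive as strings denominated in cents.
            if not row.get("id"):
                continue
            invoices.append({"id": row["id"], "amount": int(row["amount_cents"]) / 100})
        return invoices
```

Core code only ever calls `fetch_invoices()`; adding a 51st integration means adding one more adapter, not touching the business logic.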


> 200k lines of code is a failure state.

What on earth are you talking about? This is unavoidable for many use-cases, especially ones that involve interacting with the real world in complex ways. It's hardly a marker of failure (or success, for that matter) on its own.


If all your code depends on all your other code, yeah 200k lines might be a lot. But if you actually know how to code, I fail to understand why 200k lines (or any number) of properly encapsulated well-written code would be a problem.

Further, if you yourself don't understand the code, how can you verify that using LLMs to make major sweeping changes, doesn't mess anything up, given that they are notorious for making random errors?


200k loc is not a failure state. suppose your b2b saas has 5 user types and 5 downstream SaaSes it connects to; that's 20k loc per major programming unit. not so bad.


That's actually insane.


I agree on principle, and I'm sure many of us know how much of a pain it is to work on million or even billion dollar codebases, where even small changes can be weeks of beauracracy and hours of meetings.

But with the way the industry is, I'm also not remotely surprised. We have people come and go as they are poached, burned out, or simply hit by life circumstances. The training for new people isn't the best, and the documentation for all but the large companies is probably a mess. We also don't tend to encourage periods to focus on properly addressing tech debt, instead focusing on delivering features. I don't know how such an environment, over years and decades, doesn't generate so much redundant, clashing, and quirky interaction. The culture doesn't allow much alternative.

And of course, I hope even the most devout AI evangelists realize that AI will only multiply this culture. Code that no one may even truly understand, but "it works". I don't know if even Silicon Valley (2014) could have made a parody more shocking than the reality this will yield.


In that case, LLMs are full on debt-machines.


Ones that can remediate it though. If I am capable of safely refactoring 1,000 copies of a method, in a codebase that humans don’t look at, did it really matter if the workload functions as designed?


Jeebus, 'safely' is carrying a hell of a lot of water there...


In a type-safe language like C# or Java, why would you need an LLM for that? It's a standard, guaranteed-safe (as long as you aren't using reflection) refactor with ReSharper.


Features present in all IDEs over the last 5 years or so are better and more verifiably correct for this task than probabilistic text generators.


You might have meant "code is a liability not an asset"


  Lines of code is a debit not a credit
Perhaps you meant this the other way around. A credit entry indicates an increase in the amount you owe.


It's a terrible analogy either way. It should be: each extra line of code beyond the bare minimum is a liability.


You are absolutely correct, I am not a finance wizard


Liability vs asset is what you were trying to say, I think, but everyone says that, so to be charitable I think you were trying to put a new spin on the phrasing, which I think is admirable, to your credit.


It's interesting how LLM enthusiasts will point to problems like IDE, context, model etc. but not the one thing that really matters:

Which problem are you trying to solve?

At this point my assumption is they learned that talking about this question will very quickly reveal that "the great things I use LLMs for" are actually personal throwaway pieces, not to be extended above triviality or maintained over longer than a year. Which, I guess, doesn't make for a great sales pitch.


It's amazing to make small custom apps and scripts, and they're such high quality (compared to what I would half-ass write and never finish/polish them) that they don't end up as "throwaway", I keep using them all the time. The LLM is saving me time to write these small programs, and the small programs boost my productivity.

Often, I will solve a problem in a crappy single-file script, then feed it to Claude and ask to turn it into a proper GUI/TUI/CLI, add CI/CD workflows, a README, etc...

I was very skeptical and reluctant of LLM assisted coding (you can look at my history) until I actually tried it last month. Now I am sold.


At work I often need smaller, short-lived scripts to find this or that insight, or to use visualization to render some data, and I find LLMs very useful at that.

A non coding topic, but recently I had difficulty articulating a summarized state of a complex project, so I spoke 2 min in the microphone and it gave me a pretty good list of accomplishments, todos and open points.

Some colleagues have found them useful for modernizing dependencies of micro services or to help getting a head start on unit test coverage for web apps. All kinds of grunt work that’s not really complex but just really moves quite some text.

I agree it’s not life changing, but a nice help when needed.


I use it to do all the things that I couldn't be bothered to do before. Generate documentation, dump and transform data for one off analyses, write comprehensive tests, create reports. I don't use it for writing real production code unless the task is very constrained with good test coverage, and when I do it's usually to fix small but tedious bugs that were never going to get prioritized otherwise.


There is definitely a divide in users: those for which it works and those for which it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined or rigid like in Qt, then you get completely unmaintainable code as a result.

If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.


While I agree that AI assisted coding probably works much better for languages and use cases that have a lot more relevant training data, when I read comments from people who like LLM assisted coding vs. those that don't, I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.

The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks where large scale context is required). Those who can't seem to get value from AI tools seem (at least to me) less tolerant of AI mistakes, and less willing to iterate with AI agents, and they seem more willing to "throw the baby out with the bathwater", i.e. fixate on some of the failure cases but then not willing to just limit usage to cases where AI does a better job.

To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than the domain. For me personally, while I get a lot of value in AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.


Yes, and each person has a different perception of what is "good enough". Perfectionists don't like AI code.


My main reason is: Why should I try twice or more, when I can do it once and expand my knowledge? It's not like I have to produce something now.


If it takes 10x the time to do something, did you learn 10x as much? I don't mind repetition, I learned that way for many years and it still works for me. I recently made a short program using ai assist in a domain I was unfamiliar with. I iterated probably 4x. Iterations were based on learning about the domain both from the ai results that worked and researching the parts that either seemed extraneous or wrong. It was fast, and I learned a lot. I would have learned maybe 2x more doing it all from scratch, but I would have taken at least 10x the time and effort to reach the result, because there was no good place to immerse myself. To me, that is still useful learning and I can do it 5x before I have spent the same amount of time.

It comes back to other people's comments about acceptance of the tooling. I don't mind the somewhat messy learning methodology: I can still wind up at a good result quickly, and learn. I don't mind that I have to sort of beat the AI into submission. It reminds me a bit of part lecture, part lab work. I enjoy working out where it failed and why.


The fact is that most people skip learning about what works (learning is not cheap mentally). I've seen teammates just trying stuff (for days) until something kinda works instead of spending 30 minutes doing research. The fact is that LLMs are good at producing something that looks correct, and at wasting the reviewer's time. It's harder to review something than to write it from scratch.

Learning is also exponential, the more you do it, the faster it is, because you may already have the foundations for that particular bit.


> I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.

The problem with this perspective is that anyone who works in more niche programming areas knows the vast majority of programming discussions online aren't relevant to them. E.g., I've done macOS/iOS programming most of my career, and I now do work that's an order of magnitude more niche than that, and I commonly see programmers saying things like "you shouldn't use a debugger", which is a statement that I can't imagine a macOS or iOS programmer saying (don't get me wrong, they're probably out there, I've just never met or encountered one). So you just become used to most programming conversations being irrelevant to your work.

So of course the majority of AI conversations aren't relevant to your work either, because that's the expectation.

I think a lot of these conversations are two people with wildly different contexts trying to communicate, which is just pointless. Really we just shouldn't be trying to participate in these conversations (the more niche programmers that is), because there's just not enough shared context to make communication effective.

We just all happen to fall under this same umbrella of "programming", which gives the illusion of a shared context. It's true there's some things that are relevant across the field (it's all just variables, loops, and conditionals), but many of the other details aren't universal, so it's silly to talk about them without first understanding the full context around the other persons work.


> and I commonly see programmers saying thing like "you shouldn't use a debugger"

Sorry, but who TF says that? This is actually not something I hear commonly, and if it were, I would just discount this person's opinion outright unless there were some other special context here. I do a lot of web programming (Node, Java, Python primarily) and if someone told me "you shouldn't use a debugger" in those domains I would question their competence.


E.g., https://news.ycombinator.com/item?id=39652860 (no specific comment, just the variety of opinions)

Here's a good specific example https://news.ycombinator.com/item?id=26928696


It might boil down to individual thinking styles, which would explain why people tend to talk past each other in these discussions.


No one likes to hear it, but it comes down to prompting skill. People who are terrible at communicating and delegating complex tasks will be terrible at prompting.

It's no secret that a lot of engineers are bad at this part of the job. They prefer to work alone (i.e. without AI) because they lack the ability to clearly and concisely describe problems and solutions.


This. I work with juniors who have no idea what a spec is, and the idea of designing precisely what a component should do, especially in error cases, is foreign to them.

One key to good prompting is clear thinking.


> If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.

I agree with the general premise. There is however more to it than "heavily borrowed". The degree to which a code base is organized and structured and curated plays as big of a role as what framework you use.

If your project is a huge pile of unmaintainable and buggy spaghetti code then don't expect an LLM to do well. If your codebase is well structured, clear, and follows patterns systematically, then of course a glorified pattern-matching service will do far better in outputting acceptable results.

There is a reason why one of the most basic vibecoding guidelines is to include a prompt cycle to clean up and refactor code between introducing new features. LLMs fare much better when the project in their context is in line with their training. If you refactor your project to align it with what a LLM is trained to handle, it will do much better when prompted to fill in the gaps. This goes way beyond being "heavily borrowed".

I don't expect your average developer struggling with LLMs to acknowledge this fact, because then they would need to explain why their work is unintelligible to a system trained on vast volumes of code. Garbage in, garbage out. But who exactly created all the garbage going in?


I suspect it comes down to how novel the code you are writing is and how tolerant of bugs you are.

People who use it to create a proof of concept of something that is in the LLM training set will have a wildly different experience to somebody writing novel production code.

Even there the people who rave the most rave about how well it does boilerplate.


> When the system design is not pre-defined or rigid like

Why would an LLM be any worse building from language fundamentals (which it knows, in ~every language)? Given how new this paradigm is, the far more obvious and likely explanation seems to be: LLM-powered coding requires somewhat different skills and strategies. The success of each user heavily depends on their learning rate.


I think there are still lots of code "artisans" who are completely dogmatic about what code should look like. Once the tunnel vision goes and you realise the code just enables the business, it all of a sudden becomes a velocity godsend.


Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?

Your words predict an explosion of unimaginable magnitude in new code and new businesses. Where is it? Nowhere.

Edit: And don't start about how you vibed a SaaS service; show income numbers from paying customers (not buyouts)


There was this recent post about a Cloudflare OAuth client where the author checked in all the AI prompts, https://news.ycombinator.com/item?id=44159166.

The author of the library (kentonv) comments in the HN thread that it took him a few days to write the library with AI help, while he thinks it would have taken weeks or months to write manually.

Also, while it may be technically true we're "two years in", I don't think this is a fair assessment. I've been trying AI tools for a while, and the first time I felt "OK, now this is really starting to enhance my velocity" was with the release of Claude 4 in May of this year.


But that example is of writing a green field library that deals with an extremely well documented spec. While impressive, this isn’t what 99% of software engineering is. I’m generally a believer/user but this is a poor example to point at and say “look, gains”.


Do you have some magical insight into every codebase in existence? No? Ok then…


No i don't but by your post it seems like you do. Show us, that is all i request.


I have insight into enough codebases to know it's a non-zero number. Your logic is bizarre: if you'd never seen a kangaroo, would you just believe they don't exist?


Show us the numbers, stop wasting our time. NUMBERS.

Also, why would I ever believe kangaroos exist if I haven't seen any evidence of them? This is a fallacy. You are portraying healthy skepticism as stupid because you already know kangaroos exist.


What numbers? It doesn't matter if it's one or a million; it's had a positive impact on the velocity of a non-zero number of projects. You wrote:

> Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?

Yes is the answer. I could probably put it in front of your face and you’d reject it. You do you. All the best.


That’s hardly necessary.

Have we seen a noticeably increased amount of newly launched useful apps?


Why is useful a metric? This is about software delivery, what one person deems useful is subjective


Perhaps I'm misreading the person to whom you're replying, but usefulness, while subjective, isn't typically based on one person's opinion. If enough people agree on the usefulness of something, we as a collective call it "useful".

Perhaps we take the example of a blender. There's enough need to blend/puree/chop food-like items that a large group of people agree on the usefulness of a blender. A salad-shooter, while a novel idea, might not be seen as "useful".

Creating software that most folks wouldn't find useful still might be considered "neat" or "cool". But it may not be adding anything to the industry. The fact that someone shipped something quickly doesn't make it any better.


Ultimately, or at least in this discussion, we should decouple the software’s end use from the question of whether it satisfies the creator’s requirements and vision in a safe and robust way. How you get there and what happens after are two different problems.


> Why is useful a metric?

"and you realise the code just enables the business it all of a sudden becomes a velocity God send."

If a business is not useful, well, it will fail. So, so much autogenerated code for nothing.


I see, I guess every business I haven’t used personally, because it wasn’t useful to me, has failed…

Usefulness isn’t a good metric for this.


It's not for nothing. When a profitable product can be created in a fraction of the time and effort previously required, the tool to create it will attract scammers and grifters like bees to honey. It doesn't matter if the "business" around it fails, if a new one can be created quickly and cheaply.

This is the same idea behind brands with random letters selling garbage physical products, only applied to software.


The issue is not with how code looks. It's with what it does, and how it does it. You don't have to be an "artisan" to notice the issues moi2388 mentioned.

The actual difference is between people who care about the quality of the end result, and the experience of users of the software, and those who care about "shipping quickly" no matter the state of what they're producing.

This difference has always existed, but ML tools empower the latter group much more than the former. The inevitable outcome of this will be a stark decline of average software quality, and broad user dissatisfaction. While also making scammers and grifters much more productive, and their scams more lucrative.


Certainly billions of people's personal data will be leaked, and nobody will be held responsible.


I'm not a code "artisan", but I do believe companies should be financially responsible when they have security breaches.


There are very good reasons that code should look a certain way; they come from years of experience and the fact that code is written once but read and modified much more.

When the first bugs come up you see that the velocity was not God-sent, and you end up hiring one of the many "LLM code fixer" companies that are popping up like mushrooms.


You’re confusing yoloing code into prod and using ai to increase velocity while ensuring it functions and is safe.


No, they're not. It's critically important if you're part of an engineering team.

If everyone does their own thing, the codebase rapidly turns to mush and is unreadable.

And you need humans to be able to read it the moment the code actually matters and needs to stand up to adversaries. If you work with money or personal information, someone will want to steal that. Or you may have legal requirements you have to meet.

It matters.


You’ve made a sweeping statement there, there are swathes of teams working in startups still trying to find product market fit. Focusing on quality in these situations is folly, but that’s not even the point. My point is you can ship quality to any standard using an llm, even your standards. If you can’t that’s a skill issue on your part.


And also ask: "How much money do you spend on LLMs?"

In the long run, that is going to be what drives their quality. At some point the conversation is going to evolve from whether or not AI-assisted coding works to what the price point is to get the quality you need, and whether or not that price matches its value.


> It's like we live in different worlds.

There is the huge variance in prompt specificity as well as the subtle differences inherent to the models. People often don't give examples when they talk about their experiences with AI so it's hard to get a read on what a good prompt looks like for a given model or even what a good workflow is for getting useful code out of it.


Some gave. Some even recorded it and showed it, because they thought they were good with it. But they weren't good at all.

They were slower than coding by hand, if you wanted to keep quality. Some were almost as quick as copy-pasting from the code just above the generated one, but their quality was worse. They even kept some bugs in the code during their reviews.

So the different world is probably about what an acceptable level of quality means. I know a lot of coders who don't give a shit whether what they're doing makes sense, or what their bad solution will cause in the long run. They ignore everything except the "done" state next to their tasks in Jira. They will never solve complex bugs; they simply don't care enough. At a lot of places, they are the majority. For them, an LLM can be an improvement.

Claude Code the other day made a test for me which mocked everything out from the live code. Everything was green, everything was good. On paper. A lot of people simply wouldn't care to even review it properly. That thing can generate a few thousand lines of semi-usable code per hour; it's not built to be reviewed properly. Serena MCP, for example, is specifically built not to review what it does. That's stated by its creators.


Honestly, I think LLMs shine best when you're first getting into a language.

I just recently got into JavaScript and typescript and being able to ask the llm how to do something and get some sources and link examples is really nice.

However using it in a language I'm much more familiar with really decreases the usefulness. Even more so when your code base is mid to large sized


I have scaffolded projects using LLMs in languages I don't know and I agree that it can be a great way to learn as it gives you something to iterate on. But that is only if you review/rewrite the code and read documentation alongside it. Many times LLMs will generate code that is just plain bad and confusing even if it works.

I find that LLM coding requires more in-depth understanding, because rather than just coming up with a solution you need to understand the LLMs solution and answer if the complexity is necessary, because it will add structures, defensive code and more that you wouldn't add if you coded it yourself. It's way harder to answer if some code is necessary or the correct way to do something.


This is the one place where I find real value in LLMs. I still wouldn't trust them as teachers because many details are bound to be wrong and potentially dangerous, but they're great initial points of contact for self-directed learning in all kinds of fields.


Yeah this is where I find a lot of value. Typescript is my main language, but I often use C++ and Python where my knowledge is very surface level. Being able to ask it "how do I do ____ in ____" and getting a half decent explanation is awesome.


The best usage is to ask LLM to explain existing code, to search in the legacy codebase.


I've found this to be not very useful in large projects or projects that are very modularized or fragmented across many files.

Because sometimes it can't trace down all the data paths, and by the time it does, its context window is running out.

That seems to be the biggest issue I see for my daily use anyways


> Some gave. Some even recorded it, and showed it, because they thought that they are good with it. But they weren’t good at all.

Do you have any links saved by any chance?


I'm convinced that for coding we will have to use some sort of TDD or enhanced requirement framework to get the best code. Even in human-made systems, quality is highly dependent on the specificity of the requirements and the engineer's ability to probe the edge cases. Something like writing all the tests first (even in something like Cucumber) and having the LLM write code to get them to pass would likely produce better code, even though most devs hate the test-first paradigm.
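A minimal sketch of that test-first loop in Python (`slugify` is my own hypothetical example, not anything from this thread): the human writes the tests as the spec, and the implementation exists only to make them pass, whether written by a person or an LLM.

```python
# Tests written FIRST: they are the spec handed to whoever (or whatever)
# writes the implementation.
def test_lowercases():
    assert slugify("Hello World") == "hello-world"

def test_collapses_spaces():
    assert slugify("a   b") == "a-b"

# Implementation written only to satisfy the tests above.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

test_lowercases()
test_collapses_spaces()
print("all tests pass")
```

The point is that the tests, not the prompt, become the contract: the generated code can be regenerated or refactored freely as long as the suite stays green.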


I deal with a few code bases at work and the quality differs a lot between projects and frameworks.

We have 1-2 small Python services based on Flask and Pydantic, very structured and with a well-written development and extension guide. The newer Copilot models perform very well with this, and improving the dev guidelines keeps making it better. Very nice.

We also have a central configuration of applications in the infrastructure and what systems they need. A lot of similarly shaped JSON files, now with a well-documented JSON schema (which is nice to have anyway). Again, very high quality. Someone recently joked we should throw these service requests at a model and let it create PRs to review.

But currently I'm working in Vector and its Vector Remap Language... it's enough of a mess that I'm faster working without any Copilot "assistance". I think the main issue is that there is very little VRL code out in the open, and the remaps depend on a lot of unseen context, which one would have to work on giving to the LLM. I had similar experiences with OPA and a few more of these DSLs.


> Personally, I wrote 200K lines of my B2B SaaS

That would probably be 1000 line of Common Lisp.


that no one could read


I think that is the 200 lines of the perl version.


you put linefeeds in your perl?


My AI experience has varied wildly depending on the problem I'm working on. For web apps in Python, they're fantastic. For hacking on old engineering calculation code written in C/C++, it's an unmitigated disaster and an active hindrance.


Just last week I asked Copilot to make a FastCGI client in C. Five times it gave me code that did not compile. After some massaging I got it to compile; it didn't work. After some changes, it works. Now I say "I do not want to use libfcgi, I just want a simple implementation". After an hour already spent wrestling, I realize the whole thing blocks, and I want no blocking calls... still fighting half an hour later, I'm slowly getting there. I see the code: a total mess.

I deleted it all and wrote from scratch a 350-line file which works.


Context engineering > vibe coding.

Front load with instructions, examples, and be specific. How well you write the prompt greatly determines the output.

Also, use Claude code not copilot.


At some point it becomes easier to just write the code. If the solution was 350 lines, then I'm guessing it was far easier for them to just write that rather than tweak instructions, find examples, etc., to cajole the AI into writing workable code (that would then need to be reviewed and tweaked if doing it properly).


Exactly. If I have to write a 340-line prompt, I could very well start just writing code.


“Just tell it how to write the code and then it will write the code.”

No wonder the vast majority of AI adoption is failing to produce results.


It’s not just you, I think some engineers benefit a lot from AI and some don’t. It’s probably a combination of factors including: AI skepticism, mental rigidity, how popular the tech stack is, and type of engineering. Some problems are going to be very straightforward.

I also think it’s that people don’t know how to use the tool very well. In my experience I don’t guide it to do any kind of software pattern or ideology. I think that just confuses the tool. I give it very little detail and have it do tasks that are evident from the code base.

Sometimes I ask it to do rather large tasks and occasionally the output is like 80% of the way there and I can fix it up until it’s useful.


Yah. Latest thing I wrote was

* Code using sympy to generate math problems testing different skills for students, with difficulty values affecting what kinds of things are selected, and various transforms to problems possible (e.g. having to solve for z+4 of 4a+b instead of x) to test different subskills

(On this part, the LLM did pretty well. The code was correct after a couple of quick iterations, and the base classes and end-use interfaces are correct. There are a few things in the middle that are unnecessarily "superstitious" and check for conditions that can't happen, so I need to work with the LLM to clean it up.)

* Code to use IRT to estimate the probability that students have each skill and to request problems with appropriate combinations of skills and difficulties for each student.

(This was somewhat garbage. Good database & backend, but the interface to use it was not nice and it kind of contaminated things).

* Code to recognize QR codes in the corners of a worksheet, find answer boxes, and feed the image to ChatGPT to determine whether the scribble in the box is the answer in the correct form.

(This was 100%, first time. I adjusted the prompt it chose to better clarify my intent in borderline cases).

The output was, overall, pretty similar to what I'd get from a junior engineer under my supervision-- a bit wacky in places that aren't quite worth fixing, a little bit of technical debt, a couple of things more clever that I didn't expect myself, etc. But I did all of this in three hours and $12 expended.

The total time supervising it was probably similar to the amount of time spent supervising the junior engineer... but the LLM turns things around quick enough that I don't need to context switch.
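To make the first bullet concrete, here is a stdlib-only sketch of that kind of generator. The commenter used sympy; this pure-Python version only illustrates the shape, and the difficulty knob and the "report x + 4" transform are my assumptions, not the commenter's actual design.

```python
import random

def make_problem(difficulty: int, rng: random.Random):
    """Build a linear equation a*x + b = c with a known integer answer,
    then transform it to ask for x + 4 (testing a distinct subskill)."""
    a = rng.randint(2, 2 + difficulty)  # harder -> larger coefficients
    x = rng.randint(1, 10)
    b = rng.randint(1, 10)
    c = a * x + b
    return f"Solve {a}x + {b} = {c}, then report x + 4", x + 4

rng = random.Random(42)  # seeded so a worksheet is reproducible
prompt, answer = make_problem(difficulty=3, rng=rng)
print(prompt, "->", answer)
```

A real version with sympy could carry symbolic expressions instead of strings, which is what makes transforms like "solve for z+4" cheap to apply.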


I think it's fair to call code LLMs fairly bad but very fast juniors that don't get bored. That's a serious drawback, but it does give you something to work with. What scares me is non-technical people just vibe coding, because it's like a PM driving the same juniors with no one to give sanity checks.


> I also think it’s that people don’t know how to use the tool very well.

I think this is very important. You have to look at what it suggests critically, and take what makes sense. The original comment was absolutely correct that AI-generated code is way too verbose and disconnected from the realities of the application and large-scale software design, but there can be kernels of good ideas in its output.


I think a lot of it is tool familiarity. I can do a lot with Cursor but frankly I find out about "big" new stuff every day like agents.md. If I wasn't paying attention or also able to use Cursor at home then I'd probably learn more inefficiently. Learning how to use rule globs versus project instructions was a big learning moment. As I did more LLM work on our internal tools that was also a big lesson in prompting and compaction.

Certain parts of HN and Reddit I think are very invested in nay-saying because it threatens their livelihoods or sense of self. A lot of these folks have identities that are very tied up in being craftful coders rather than business problem solvers.


Junior engineers see better results than senior engineers for obvious reasons.


Junior engineers think they see better results than senior engineers for obvious reasons


I think its down to language and domain more than tools.

No model I've tried can write, usefully debug, or even explain CMake. (It invents new syntax if it gets stuck; I often have to prompt multiple AIs to know if even the first response in the context was made up.)

My luck with embedded C has been atrocious for existing codebases (burning millions of tokens), but passable for small scripts (Arduino projects).

My experience with python is much better. Suggesting relevant libraries and functions, debugging odd errors, or even making small script on its own. Even the original github copilot which i got access to early was excellent on python.

A lot of the people who seem to have fully embraced agentic vibe-coding are in the web or Node.js domain, which I've not worked in myself since pre-AI.

I've tried most (free or trial) major models or schemes in hope that i find any of them useful, but not found much use yet.


> It's like we live in different worlds.

We probably do, yes. the Web domain compared to a cybersecurity firm compared to embedded will have very different experiences. Because clearly there's a lot more code to train on for one domain than the other (for obvious reasons). You can have colleagues at the same company or even same team have drastically different experiences because they might be in the weeds on a different part of tech.

> I then carefully review and test.

If most people did this, I would have 90% fewer issues with AI. But as we expect, people see shortcuts and use them to cut corners, not to gain more time to polish the edges.


I think people react to AI with strong emotions, which can come from many places: anxiety/uncertainty about the future being a common one, strong dislike of change being another (especially amongst autists, who, I would guess based on myself and my friend circle, are quite common around here). Maybe that explains a lot of the spicy hot takes you see here and on Lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore any extremist takes entirely, both positive and negative, to get a pulse on what's going on.

If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).


What tech stack do you use?

Betting in advance that it's JavaScript or Python, probably with very mainstream libraries or frameworks.


FWIW. Claude Code does great job for me on complex domain Rust projects, but I just use it one relatively small feature/code chunk at the time, where oftentimes it can pick up existing patterns etc. (I try to point it at similar existing code/feature if I have it). I do not let it write anything creative where it has to come up with own design (either high-level architectural, or low level facilities). Basically I draw the lines manually, and let it color the space between, using existing reference pictures. Works very, very well for me.


Is this meant to detract from their situation? These tech stacks are mainstream because so many use them... it's only natural that AI would be the best at writing code in contexts where it has the most available training data.


> These tech stacks are mainstream because so many use them

That's a tautology. No, those tech stacks are mainstream because it is easy to get something that looks OK up and running quickly. That's it. That's what makes a framework go mainstream: can you download it and get something pretty on the screen quickly? Long-term maintenance and clarity is absolutely not a strong selection force for what goes mainstream, and in fact can be an opposing force, since achieving long-term clarity comes with tradeoffs that hinder the feeling of "going fast and breaking things" within the first hour of hearing about the framework. A framework being popular means it has optimized for inexperienced developers feeling fast early, which is literally a slightly negative signal for its quality.


No, it's a clarification. There is massive difference between domains, and the parent post did not specify.

If the AI can only decently do JS and Python, then it can fully explain the observed disparity in opinion of its usefulness.


You are exactly right in my case - JavaScript and Python dealing with the AWS CDK and SDK. Where there is plenty of documentation and code samples.

Even when it occasionally gets it wrong, it’s just a matter of telling ChatGPT - “verify your code using the official documentation”.

But honestly, even before LLMs, when deciding which technology, service, or framework to use, I would always go with the most popular ones: they are the easiest to hire for, the easiest to find documentation and answers for, and, when I was looking for a job myself, the easiest to be a perfect match for the most openings.


Yeah, but most devs are working on brownfield projects where they did not choose any part of the tech stack.


They can choose jobs. Starting with my 3rd job in 2008, I always chose my employer based on how it would help me get my n+1 job and that was based on tech stack I would be using.

Once I saw a misalignment between market demands and the tech stack my employer was using, I changed jobs. I'm on job #10 now.


If one wants to optimise career, isn't it better to become an expert in the _less_ mainstream technologies that not-everyone can use?


Honestly, now that I think about it, I am using a pre-2020 playbook. I don't know what the hell I would do these days if I were still a pure developer without the industry connections and the AWS ProServe experience on my resume.

It is true that I got jobs quickly in 2023 and again last year. But while I was interviewing for those two, as a Plan B, I was randomly submitting my resume (which I think is quite good) to literally hundreds of jobs through Indeed and LinkedIn Easy Apply, and I heard crickets: regular old enterprise dev jobs that wanted C#, Node, or Python experience on top of AWS.

I don't really have any generic strategy for people these days, aside from: whatever job you're at, don't be a ticket taker, and take ownership of larger initiatives.


When did you get your last 3 jobs?


Mid 2020 - at AWS ProServe the internal consulting arm of AWS - full time job

Late 2023 - full time at a third party AWS consulting company. It took around two weeks after I started looking to get an offer

Late 2024 - “Staff consultant” third party consulting company. An internal recruiter reached out to me.

Before 2020 I was just a run of the mill C#/JS enterprise developer. I didn’t open the AWS console for the first time until mid 2018.


It could be the language. Almost 100% of my code is written by AI; I supervise as it creates and steer in the right direction. I configure the code agents with examples of all the frameworks I'm using. My choice of Rust might be disproportionately providing better results: cargo, the expected code structure, the examples, docs, and error messages are so well thought out in Rust that coding agents can get very far. I work on 2-3 projects at once, cycling through them and supervising their work. Most of my work is simulation, physics, and complex robotics frameworks. It works for me.


As a practical example, I've recently tried out v0's new updated systems to scaffold a very simple UI where I can upload screenshots from videogames I took and tag them.

The resulting code included an API call to run arbitrary SQL queries against the DB. Even after I pointed this out, the API call was not removed, or at least secured with authentication rules, but instead /just/hidden/through/obscure/paths...


B2B SaaS products are in most cases sophisticated masks over structured data, perhaps with great UX, automation, and convenience, so I can see LLMs being more successful there, all the more because there is more training data and many of the processes are streamlined. Not all domains are equal. Go try to develop a serious game with LLMs, not yet another simple, broken arcade clone, and you'll have a different take.


Do you not think part of it is just whether employers permit it or not? My conglomerate employer took a long time to get started and has only just rolled out agent mode in GH Copilot, but even that is in some reduced/restricted mode vs the public one. At the same time we have access to lots of models via an internal portal.


Companies that don't allow their devs to use LLMs will go bankrupt and in the meantime their employees will try to use their private LLM accounts.


I agree, it's like they looked at GPT 3.5 one time and said "this isn't for me"

The big 3 (Opus 4.1, GPT-5 High, Gemini 2.5 Pro) are astonishing in their capabilities. It's just a matter of providing the right context and instructions.

Basically, "you're holding it wrong"


I am also constantly astonished.

That said, observing attempts by skeptics to “unsuccessfully” prompt an LLM have been illuminating.

My reaction is usually either:

- I would never have asked that kind of question in the first place.

- The output you claim is useless looks very useful to me.


Lines of code is not a useful metric for anything. Especially not productivity.

The less code I write to solve a problem the happier I am.


GitHub Copilot, Microsoft Copilot, Gemini, Lovable, GPT, Cursor with Claude models, you name it.


It really depends, and can be variable, and this can be frustrating.

Yes, I’ve produced thousands of lines of good code with an LLM.

And also yes, yesterday I wasted over an hour trying to define a single Docker service block for my docker-compose setup. Constant hallucination; eventually I had to cross-check everything and discovered it had no idea what it was doing.

I’ve been doing this long enough to be a decent prompt engineer. Continuous vigilance is required, which can sometimes be tiring.


It could be because your job is boilerplate derivatives of well-solved problems. Enjoy the next 1 to 2 years, because yours is the job Claude is coming to replace.

Stuff Wordpress templates should have solved 5 years ago.


Honestly, the best way to get good code, at least with TypeScript and JavaScript, is to have something like 50 ESLint plugins.

That way the linter constantly yells at Sonnet 4 and forces the code into a better state.

If anyone is curious, I have a massive ESLint config for TypeScript that really gets good code out of Sonnet.

But before I started doing this, the code it wrote was so buggy, and it was constantly trying to duplicate functions into separate files, etc.
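For readers curious what that approach might look like, here is a minimal sketch using ESLint's flat config format with the typescript-eslint and SonarJS plugins. The specific plugin choices, rules, and thresholds are my own illustration of the idea, not the commenter's actual config:

```javascript
// eslint.config.mjs -- a sketch of a strict flat config aimed at keeping
// LLM-generated TypeScript honest. Plugin and rule choices are illustrative.
import tseslint from "typescript-eslint";
import sonarjs from "eslint-plugin-sonarjs";

export default tseslint.config(
  // Type-aware strict rules catch the "just cast it to any" habit
  ...tseslint.configs.strictTypeChecked,
  sonarjs.configs.recommended,
  {
    rules: {
      // Flag the duplicated functions the commenter describes
      "sonarjs/no-identical-functions": "error",
      // Cap complexity so generated code stays reviewable
      "sonarjs/cognitive-complexity": ["error", 15],
      // Force proper typing instead of escape hatches
      "@typescript-eslint/no-explicit-any": "error",
      "@typescript-eslint/no-unsafe-assignment": "error",
      // Dead code is a common failure mode of generated edits
      "@typescript-eslint/no-unused-vars": "error",
    },
  },
);
```

Run in the agent's edit loop (e.g. `npx eslint . --max-warnings 0` after each change), lint failures become feedback the model has to fix before the task counts as done.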


> 200K lines of my B2B SaaS

I suspect it's not how you're using LLMs that is different, but rather the output you expect. I strongly suspect that if I wrote an application with the exact same functionality as your B2B SaaS, it would be around 20K lines. It's not uncommon to see a difference of 10x lines or more between different developers implementing the same thing, depending on how they code and what they value. My guess is that you like LLMs because they code like you, and others don't like them because they don't.

I'm struggling to even describe just how much 200K lines of code is in a concise, powerful language from a developer who strongly values brevity and clarity. Every unit of code you write increases the expressive power of all the rest of your code. 40k lines of code is not twice as much functionality as 20k lines, it's more like five times as much functionality. Code collapses on itself as you explore and discover it. Codespace folds in on itself like the folds of a multi-dimensional brain. New operators and verbs and abstractions are invented, whose power is combinatorial with all the other abstractions you've created. 200,000 lines of code is so much.


It is quite a putdown to tell someone else that if you wrote their program it would be 10 times shorter.

That's not in keeping with either the spirit of this site or its rules: https://news.ycombinator.com/newsguidelines.html.


Fair: it was rude. Moderation is hard and I respect what you do. But it's also a sentiment several other comments expressed. It's the conversation we're having. Can we have any discussions of code quality without making assumptions about each others' code quality? I mean, yeah, I could probably have done better.

> "That would probably be 1000 line of Common Lisp." https://news.ycombinator.com/item?id=44974495

> "Perhaps the issue is you were used to writing 200k lines of code. Most engineers would be agast at that." https://news.ycombinator.com/item?id=44976074

> "200k lines of code is a failure state ... I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion." https://news.ycombinator.com/item?id=44976328


Oh for sure you can talk about this, it's just a question of how you do it. I'd say the key thing is to actively guard against coming across as personal. To do that is not so easy, because most of us underestimate the provocation in our own comments and overestimate the provocation in others (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). This bias is like carbon monoxide - you can't really tell it's affecting you (I don't mean you personally, of course—I mean all of us), so it needs to be consciously compensated for.

As for those other comments - I take your point! I by no means meant to pick on you specifically; I just didn't see those. It's pretty random what we do and don't see.


What a bizarre attack. What makes you think I'm not "a developer who strongly values brevity and clarity"? I've been working on this thing for 9 years. It isn't some CRUD app. It's arrogant and rude of you to think you have any idea how many lines of code my life's work "should" take.

At this rate, don't limit yourself to 20K lines of code. I'm sure you could have written it in 5. Heck, you probably would have solved the problem without writing a line of code at all. That's just how good you are.


I understand the provocation, but please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

Your GP comment was great, and probably the thing to do with a supercilious reply is just not to bother responding (easier said than done, of course). You can usually trust other users to assess the thread fairly (e.g. https://news.ycombinator.com/item?id=44975623).

https://news.ycombinator.com/newsguidelines.html


> What makes you think I'm not "a developer who strongly values brevity and clarity"

Some pieces of evidence that make me think that:

1. The base rate of developers who write massively overly verbose code is about 99%, and there's not a ton of signal to deviate from that base rate other than the fact that you post on HN (probably a mild positive signal).

2. An LLM writes 80% of your code now, and my prior on LLM code output is that it's on par with a forgetful junior dev who writes very verbose code.

3. 200K lines of code is a lot. It just is. Again, without more signal, it's hard to deviate from the base rate of what 200K-line codebases look like in the wild. 99.5% of them are spaghettified messes with tons of copy-pasting and redundancy and code-by-numbers scaffolded code (and now, LLM output).

This is the state of software today. Keep in mind the bad programmers who make verbose spaghettified messes are completely convinced they're code-ninja geniuses; perhaps even more so than those who write clean and elegant code. You're allowed to write me off as an internet rando who doesn't know you, of course. To me, you're not you, you're every programmer who writes a 200k LOC B2B SaaS application and uses an LLM for 80% of their code, and the vast, vast majority of those people are -- well, not people who share my values. Not people who can code cleanly, concisely, and elegantly. You're a unicorn; cool beans.

Before you used LLMs, how often were you copy/pasting blocks of code (more than 1 line)? How often were you using "scaffolds" to create baseline codefiles that you then modified? How often were you copy/pasting code from Stack Overflow and other sources?


At least to me, what you said sounded like the 200k was just with LLMs but before agents. Either way, it's a very reasonable amount of code for 9 years of work.


This is such a bizarre comment. You have no idea what code base they are talking about, their skill level, or anything.


> I'm struggling to even describe... 200,000 lines of code is so much.

The point about increasing levels of abstractions is a really good one, and it's worth considering whether any new code that's added is entirely new functionality, some kind of abstraction over some existing functionality (that might then reduce the need for as new code), or (for good or bad reason) some kind of copy of some of the existing behaviour but re-purposed for a different use case.


200kloc is what, 4 reams of paper, double sided? So, 10% of that famous Margaret Hamilton picture (which is roughly "two spaceships worth of flight code".) I'm not sure the intuition that gives you is good but at least it slots the raw amount in as "big but not crazy big" (the "9 years work" rather than "weekend project" measurement elsethread also helps with that.)

