What I don't understand about this whole "get on board the AI train or get left behind" narrative is: what advantage does an early adopter actually have with AI tools?
The way I see it, I can just start using AI tools once they get good enough for my type of work. Until then I'm continuing to learn instead of letting my brain atrophy.
This is a pretty common position: "I don't worry about getting left behind - it will only take a few weeks to catch up again".
I don't think that's true.
I'm really good at getting great results out of coding agents and LLMs. I've also been using LLMs for code on an almost daily basis since ChatGPT's release on November 30th 2022. That's more than three years ago now.
Meanwhile I see a constant flow of complaints from other developers who can't get anything useful out of these machines, or find that the gains they get are minimal at best.
Using this stuff well is a deep topic. These things can be applied in so many different ways, and to so many different projects. The best asset you can develop is an intuition for what works and what doesn't, and getting that intuition requires months if not years of personal experimentation.
I don't think you can just catch up in a few weeks, and I do think that the risk of falling behind isn't being taken seriously enough by much of the developer population.
I'm glad to see people like antirez ringing the alarm bell about this - it's not going to be a popular position but it needs to be said!
Why can't both be true at the same time? Maybe their problems are more complex than yours. Why do you assume it's a skill issue and ignore the contextual variables?
So far every new AI product and even model update has required me to relearn how to get decent results out of them. I'm honestly kind of sick of having to adjust my workflow every time.
The intuition just doesn't hold. The models get trained and retrained on other users' interactions, so what works for me suddenly changes when the models refresh.
LLMs have only gotten easier to learn and catch up on over the years. In fact, most LLM companies seem to optimise for getting started quickly over getting good results consistently. There may come a moment when the foundations solidify and not bothering with LLMs may put you behind the curve, but we're not there yet, and with the literally impossible funding and resources OpenAI is claiming they need, it may never come.
I don't see how your position is compatible with the constant hype about the ever-growing capabilities of LLMs. Either they are improving rapidly, and your intuition keeps getting less and less valuable, or they aren't improving.
They're improving rapidly, which means your intuition needs to be constantly updated.
Things that they couldn't do six months ago might now be things that they can do - and knowing they couldn't do X six months ago is useful because it helps systematize your explorations.
A key skill here is to know what they can do, what they can't do and what the current incantations are that unlock interesting capabilities.
A couple I've learned in the past week:
1. Don't give Claude Code a URL to some code and tell it to use that, because by default it will use its WebFetch tool, which runs an extra summarization layer (as a prompt injection defense) and loses details. Telling it to use curl sometimes works, but a guaranteed trick is to have it git clone the relevant repo to /tmp and read the code there instead (see the sketch after this list).
2. Telling Claude Code "use red/green TDD" is a quick-to-type shortcut that will cause it to write tests first, run them and watch them fail, then implement the feature and run the tests again. This is a wildly effective technique for getting code that works properly while avoiding untested junk code that isn't needed.
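To make the first tip concrete, here's roughly the difference in practice (the repository URL and paths here are invented purely for illustration):

    Weaker prompt: "Use the client from https://github.com/example/somelib when wiring this up"
                   (Claude Code will WebFetch the page and work from a lossy summary)
    Better prompt: "git clone https://github.com/example/somelib to /tmp/somelib, then read
                    the actual source under /tmp/somelib and match its API exactly."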
Now multiply those learnings by three years. Sure, the stuff I figured out in 2023 mostly doesn't apply today - but the skills I developed in learning how to test and iterate on my intuitions from then still count and still keep compounding.
The idea that you don't need to learn these things because they'll get better to the point that they can just perfectly figure out what you need is AGI science fiction. I think it's safe to ignore.
I feel like both of these examples are insights that won't be relevant in a year.
I agree that CC becoming omniscient is science fiction, but the goal of these interfaces is to make LLM-based coding more accessible. Any strategies we adopt to mitigate bad outcomes are destined to become part of the platform, no?
I've been coding with LLMs for maybe 3 years now. Obviously a dev who's experienced with the tools will be more adept than one who's not, but if someone started using CC today, I don't think it would take them anywhere near that time to get to a similar level of competency.
You're right, it's difficult to get "left behind" when the tools and workflows are being constantly reinvented.
You'd be wise to spend your time just keeping a high-level view until workflows become stable and aren't being reinvented every few months.
The time to consider mastering a workflow is when a casual user of the "next release" wouldn't trivially supersede your capabilities.
Similarly we're still in the race to produce a "good enough" GenAI, so there isn't value in mastering anything right now unless you've already got a commercial need for it.
This all reminds me of a time when people were putting in serious effort to learn Palm Pilot's Graffiti handwriting recognition, only for the skill to be made redundant even before they were proficient at it.
I think that whoever says you need to be accustomed to the current "tools" around AI agents is suffering from a horizon effect issue: this stuff will change continuously for some time, and the more it evolves, the less you need to fiddle with the details. However, the skill you need to have is communication. You need to be able to express yourself and what matters for your project fast and well. Many programmers are not great at communication. In part this is a gift, something you develop at a young age, and this will, I believe, kinda change who is good at programming: good communicators / explorers may now have an edge vs very strong coders that are bad at explaining themselves. But a lot of it is attitude, IMHO. And practice.
> Many programmers are not great at communication.
This is true, but still shocking. Professional (working with others at least) developers basically live or die by their ability to communicate. If you're bad at communication, your entire team (and yourself) suffer, yet it seems like the "lone ranger" type of programmer is still somewhat praised and idealized. When trying to help some programmer friends with how they use LLMs, it becomes really clear how little they actually can communicate, and for some of them I'm slightly surprised they've been able to work with others at all.
An example from the other day: a friend complained that the LLM they worked with was using the wrong library, and the wrong color for some element, and was surprised that the LLM didn't know this from the get-go. Reading through the prompt, they never mentioned it once, and when asked about it, they thought "it should have been obvious" - which, yeah, to someone like you who worked for 2 years on this project it might be obvious, but for something with zero history and zero context about what you do? How do you expect it to know this? Baffling sometimes.
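Something as short as this in the prompt (the library and color are invented for the example) would have avoided the whole exchange:

    "Add the export button to the settings page. We use ui-lib-x for components
     (see src/components/) and the brand blue is #2F6FED, defined in src/theme.css.
     Stick to those; don't introduce new libraries or colors."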
Yup. I'd wager that most complaints, even by people who have used LLMs for a long time, can be resolved by "describe your thing in detail". LLMs are such a relief on my wrists that I often get tempted to write short prompts and pray that the LLM divines my thoughts. I always get much better results, a lot faster, when I just turn on the mic and have Whisper transcribe a couple minutes of my speaking though.
I am using Google Antigravity for the same type of work you mention: the many things and ideas I had over the years but couldn't justify the time I'd need to invest in them. Pretty non-trivial ideas, and yet with a good problem definition and communication skills I am getting unbelievable results. I am even intentionally being too vague in my problem definition sometimes, to avoid introducing bias to the model, and the ride has been quite crazy so far. In 2 days I've implemented several substantial improvements that I had in my head for years.
The world has changed for good and we will need to adapt. For the ones who want to see it, the bigger and more important question at this point is no longer whether LLMs are good enough but, as you mention in your article, what will happen to the people who end up unemployed. There's a reality check coming for all of us.
I've used Cursor and Claude Code daily[0] since within a month of their releases - I'm learning something new about how to work with and apply these tools almost every day.
I don't think it's a coincidence that some of the best developers[1] are using these tools, with some openly advocating for them, because it still requires core skills to get the most out of them.
I can honestly say that building end-to-end products with claude code has made me a better developer, product designer, tester, code reviewer, systems architect, project manager, sysadmin etc. I've learned more in the past ~year than I ever have in my career.
[0] abandoned cursor late last year
[1] see Linus using antigravity, antirez in OP, Jared at bun, Charlie at uv/ruff, mitsuhiko, simonw et al
I started heavy usage in April 2025 (Codex CLI -> some Claude Code and trying other CLIs + a bit of Cursor -> Warp.dev -> Claude Code) and I’m still learning as well (and constantly trying to get more efficient)
(I had been using GitHub Copilot for 5+ years already, started as an early beta tester, but I don't really consider that the same)
I like to say it’s like learning a programming language. it takes time, but you start pattern matching and knowing what works. it took me multiple attempts and a good amount of time to learn Rust, learning effective use of these tools is similar
I’ve also learned a ton across domains I otherwise wouldn’t have touched
AI development is about planning, orchestration and high throughput validation. Those skills won't go away, the quality floor of model output will just rise over time.
My take: learning how to do LLM-assisted coding at a basic level gets you 80% of the returns, and takes about 30 minutes. It's a complete no-brainer.
Learning all of the advanced multi-agent workflows etc. etc... Maybe that gets you an extra 20%, but it costs a lot more time, and is more likely to change over time anyway. So maybe not very good ROI.
It took me a few months of working with the agents to get really productive with them. The gains are significant. I write highly detailed specs (equivalent to multiple A4 pages) in markdown and dictate the agent hierarchy (which agent does what, who reports to whom).
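A heavily stripped-down skeleton of what I mean (the feature, section names and agent roles here are just an illustration; real specs run to several pages):

    # Feature: bulk CSV export of invoices   (hypothetical feature)
    ## Constraints
    - Reuse the existing job queue; no schema changes.
    ## Agent hierarchy
    - Planner agent: splits this spec into tasks, reviews every diff before merge.
    - Backend agent: export job + endpoint; reports to the planner.
    - Frontend agent: export button + progress UI; reports to the planner.
    ## Acceptance criteria
    - Exporting 10k invoices completes in under 60s in CI.
    - Malformed rows are skipped and logged, never fatal.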
I've learned a lot of new things this year thanks to AI. It's true that the low-level skills will atrophy. The high-level skills will grow though; my learning rate is the same, just at a much higher abstraction level, thus covering more subjects.
The main concern is the centralisation. The value I can get out of this thing currently well exceeds my income. AI companies are buying up all the chips. I worry we'll get something like the housing market, where AI ends up taking about 50% of our income.
We have to fight this centralisation at all costs!
This is something I think a lot of people don't seem to notice or worry about: the move of programming from a local task to one that is controlled by big corporations, essentially turning programming into a subscription model, just like everything else. If you don't pay the subscription you will no longer be able to code, i.e. PaaS (Programming as a Service). Obviously at the moment most programmers can still code without LLMs, but when autocomplete IDEs became mainstream, it didn't take long before a large proportion of programmers couldn't program without one. I expect most new programmers coming in won't be able to "program" without a remote LLM.
That ignores the possibility that local inference gets good enough to run without a subscription on reasonably priced hardware.
I don't think that's too far away. Anthropic, OpenAI, etc. are pushing the idea that you need a subscription, but if open-source tools get good enough those subscriptions could easily become an expensive irrelevance.
There is that, but the way this usually works is that there is always a better closed service you have to pay for, and we see that with LLMs as well. Plus, you currently need a very powerful machine to run these models at anywhere near the speed of the PaaS systems, and I'm not convinced we'll get the Moore's-law-style jumps required to reach that level of performance locally, not to mention the massive energy requirements. You can only go so small, and we are getting pretty close to the limit. Perhaps I'm wrong, but we don't see the jumps in processing power we saw in the 80s and 90s from clock speed increases; the clock speed of most CPUs has stayed pretty much the same for a long time. As LLMs are essentially probabilistic in nature, this does open up options not available to current deterministic CPU designs, so that might be an avenue that gets exploited to bring this to local development.
My concern is that inference hardware is becoming more and more specialized and datacenter-only. It won’t be possible any longer to just throw in a beefy GPU (in fact we’re already past that point).
This is the most valid criticism. Theoretically in several years we may be able to run Opus quality coding models locally. If that doesn't happen then yes, it becomes a pay to play profession - which is not great.
The hardware needs to catch up, I think. I asked ChatGPT (lol) how much it would cost to build a Deepseek server that runs at a reasonable speed and it quoted ~$400k-800k (8-16 H100s plus the rest of the server).
Guess we are still in the 1970s era of AI computing. We need to hope for a few more step changes or some breakthrough on model size.
The problem is that Moore's law is dead. Silicon isn't advancing as fast as we envisioned in the past, we're fighting all sorts of quantum tunneling effects in order to cram as much microstructure as possible into silicon, and R&D costs for manufacturing these chips are climbing at a rapid rate. There's a limit to how far we can fight against physics, and unless we discover a totally new paradigm to alleviate these issues (e.g. optical computing?) we're going to experience diminishing returns at the end of the sigmoid-like tech advancement cycle.
You can run most open models (excluding kimi-k2) on hardware that costs anywhere from 45 - 85k (tbf, specced before the vram wars of late 2025 so +10k maybe?). 4-8 PRO6000s + all the other bits and pieces gives you a machine that you can host locally and run very capable models, at several quants (glm4.7, minimax2.1, devstral, dsv3, gpt-oss-120b, qwens, etc.), with enough speed and parallel sessions for a small team (of agents or humans).
Well, if you're programming without AI you need to understand what you're building too, lest you program yourself into a corner. Taking 3-5 minutes to speech-to-text an overview of what exactly you want to build and why, and which general philosophies/tools to use, seems like it should cost you almost zero extra time and brainpower.
The idea, I think, is to gain experience with the loop of communicating ideas in natural language rather than code, and then reading the generated code and taking it as feedback.
It's not that different overall, I suppose, from the loop of thinking of an idea and then implementing it and running tests; but potentially very disorienting for some.
An ecosystem is being built around AI: Best prompting practices, MCPs, skills, IDE integration, how to build a feedback loop so that the LLM can test its output on its own, plugging into the outside world with browser extensions, etc...
For now I think people can still catch up quickly, but by the end of 2026 it's probably going to be a different story.
> Best prompting practices, MCPs, skills, IDE integration, how to build a feedback loop so that the LLM can test its output on its own, plugging into the outside world with browser extensions, etc...
Ah yes, an ecosystem that is fundamentally built on probabilistic quicksand, where even with the "best prompting practices" you still get agents violating the basics of security and committing API keys when they were told not to. [0]
I have tons of examples of AI not committing secrets. this is one screenshot from twitter? I don’t think it makes your point
CPUs are billions of transistors. sometimes one fails and things still work. “probabilistic quicksand” isn’t the dig you think it is to people who know how this stuff works
> I have tons of examples of AI not committing secrets.
"Trust only me bro".
It takes 10 seconds to find the many examples of API keys + prompts on GitHub and verify that tweet. The issue with AI isn't limited to that tweet, which demonstrates its probabilistic nature; otherwise why would we need a sandbox to run the agent in the first place?
Nevermind, we know why: Many [0] such [1] cases [2]
> CPUs are billions of transistors. sometimes one fails and things still work. “probabilistic quicksand” isn’t the dig you think it is to people who know how this stuff works
Except you just made a false equivalence. CPUs can be tested / verified transparently, and even if something does go wrong, we know exactly why. Whereas you can't explain why the LLM hallucinated or decided to delete your home folder, because the way it predicts its output is fundamentally stochastic.
you could find tons of API keys on GitHub before these “agentic” tools too. that was my point, one screenshot from twitter vs one anecdote from me. I don’t think either proves the point, but posting a screenshot from twitter like it’s proof of some widespread problem is what I was responding to (N=2, 1 vs 1)
my point is more “skill issue” than “trust me this never happens”
my point on CPUs is people who don’t understand LLMs talk like “hallucinations” are a real thing — LLMs are “deciding” to make stuff up rather than just predicting the next token. yes it’s probabilistic, so is practically everything else at scale. yet it works and here we are. can you really explain in detail how everything you use works? I’m guessing I can explain failure modes of agentic systems (and how to avoid them so you don’t look silly on twitter/github) and how neural networks work better than most people can explain the technology they use every day
> you could find tons of API keys on GitHub before these “agentic” tools too. that was my point, one screenshot from twitter vs one anecdote from me. I don’t think either proves the point, but posting a screenshot from twitter like it’s proof of some widespread problem is what I was responding to (N=2, 1 vs 1)
That doesn't refute the probabilistic nature of LLMs despite best prompting practices. In fact it emphasises it. More like your 1 anecdotal example vs my 20+ examples on GitHub.
My point is that not only does it indeed happen, but an old issue is now made even worse and more widespread, since we now have vibe-coders without security best practices assuming the agent should know better (when it doesn't).
> my point is more “skill issue” than “trust me this never happens”
So those that have this "skill issue" are also those who are prompting the AI differently then? Either way, this just inadvertently proves my whole point.
> yes it’s probabilistic, so is practically everything else at scale. yet it works and here we are.
The additional problem is: can you explain why it went wrong as you scale the technology? CPU circuit designs go through formal verification, and if a fault happens we know exactly why; they are deterministic by design, which makes them reliable.
LLMs are not, and don't have this. Which is why OpenAI had to describe ChatGPT's misaligned behaviour as "sycophancy" but could not explain why it happened, other than pointing to the hyper-parameter tweaks that produced it.
So LLMs being fundamentally probabilistic, and hence harder to explain, is exactly why you have that screenshot of vibe-coders who somehow prompted it wrong and got the agent to commit the keys.
Maybe that would never have happened to you, but it won't be the last time we see this happening on GitHub.
I was pointing out one screenshot from twitter isn’t proof of anything just to be clear; it’s a silly way to make a point.
yes AI makes leaking keys on GH more prevalent, but so what? it’s the same problem as before with roughly the same solution
I’m saying neural networks being probabilistic doesn’t matter — everything is probabilistic. you can still practically use the tools to great effect, just like we use everything else that has underlying probabilities
OpenAI did not have to describe it as sycophancy, they chose to, and I’d contend it was a stupid choice
and yes, you can explain what went wrong just like you can with CPUs. we don’t (usually) talk about quantum-level physics when discussing CPUs; talking about neurons in LLMs is the wrong level of abstraction
If you listen to promises like that, you're going to get burned.
One of the key skills needed in working with LLMs is learning to ignore the hype and marketing and figure out what these things are actually capable of, as opposed to LinkedIn bluster and claims from CEOs whose net worth is tied to investor sentiment in their companies.
If someone spends more time talking about "AGI" than about what they're actually building, filter that person out.
>One of the key skills needed in working with LLMs is learning to ignore the hype and marketing and figure out what these things are actually capable of
This is precisely what led me to realize that while they have some use for code review and analyzing docs, for coding purposes they are fairly useless.
The hypesters' responses to this assertion fall exclusively into 5 categories. I've never heard a 6th.
I agree about skills actually, but it's also obvious that parent is making a very real point that you cannot just dismiss. For several years now and far short of wild AGI promises, the answer to literally every issue with casual or production AI has been something like "but the rate of model improvement.." or "but the tools and ecosystem will evolve.."
If you believe that uncritically about everything else, then you have to answer why agentic workflows or MCP or whatever is the one thing that it can't evolve to do for us. There's a logical contradiction here where you really can't have it both ways.
I’m not understanding your point… (and would be genuinely curious to)? the models and systems around them have evolved and gotten better (over the past few years for LLMs and decades for “AI” more broadly)
oh I think I do get your point now after a few rereads (correct if wrong but you’re saying it should keep getting better until there’s nothing for us to do). “AI”, and computer systems more broadly, are not and cannot be viable systems. they don’t have agency (ironically) to effect change in their environment (without humans in the loop). computer systems don’t exist/survive without people. all the human concerns around what/why remain, AI is just another tool in a long line of computer systems that make our lives easier/more efficient
OpenAI is going to get to AGI. And AGI should be able to build, in minutes, a system that takes vague input and produces a fully functioning product out of it. Isn't that the singularity they keep promising?
you’re just repeating the straw man. if you can’t think critically and just regurgitate every dumb thing you hear idk what to tell you. nobody serious thinks a “singularity” is coming. there’s not even a proper definition of “AGI”
your argument amounts to “some people said stupid shit one time and I took it seriously”
> What I don't understand about this whole "get on board the AI train or get left behind" narrative is: what advantage does an early adopter actually have with AI tools?
The ones pushing this narrative are usually one of the following:
* Investors in AI companies (which they will never disclose until the companies IPO or get acquired).
* Employees at AI companies with stock options, which makes them effectively paid boosters of the AGI nonsense.
* People in a mid-life crisis / paranoid that their identity as a programmer is being eroded and that they have to pivot to AI.
It is no different to the crypto web3 bubble of 2021. This time, it is even more obvious and now the grifters from crypto / tech are already "pivoting to ai". [0]
Web3 generated plenty of value if you were in on it. Pension funds, private investors, public companies, governments, gambling addicts, teenagers with more pocket money than sense - they've all moved billions into the pockets of Web3 grifters. You follow a tutorial on YouTube, spam the right places, maybe buy a few illegal ads, do a quick rugpull, and if you did your opsec right, you're now a millionaire. The major money sources have started to dry up (although the current American regime has been paid off by crypto companies, so a Web3 revival might just happen).
With AI companies still selling services far below cost, it's only a matter of time before the money runs out and the true value of these tools will be tested.
Comparing the crypto and web3 scams with AI advancements is disingenuous at best. I am a long-time C and C++ systems programming engineer oriented toward (sometimes novel) algorithmic design and high-performance large-scale systems operating at the scale of the internet. I specialize in low-level details that only a very small number of engineers around the globe are familiar with. We can talk at the level of CPU microarchitectural details or memory bank conflicts or OS internals, and all the way up to the line of code we are writing. AI is the most transformative technology ever designed. I'd go as far as to say that not even the industrial revolution will be comparable to it. I have no stakes in AI.
People here work on all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.
Yes, it would be nice to have a lot more context (pun intended) when people post how many LoC they introduced.
B2B SaaS? Then can I assume that a browser is involved and that a big part of that 200k LoC is the verbose styling DSL we all use? On the other hand, Nginx, a production-grade web server, is 250k LoC (251,232 to be exact [1]). These two things are not comparable.
The point being that, as I'm sure we all agree, LoC is not a helpful metric for comparison without more context, and different projects have vastly different amounts of information/feature density per LoC.
I primarily work in C# during the day but have been messing around with simple Android TV dev on occasion at night.
I’ve been blown away sometimes at what Copilot puts out in the context of C#, but using ChatGPT (paid) to get me started on an Android app - totally different experience.
Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.
With Copilot I find it's sometimes brilliant, but it seems so random as to when that will be.
> Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.
That has been my experience as well. We can control the surprising pick of APIs with basic prompt files that clarify what to use and how in your project. However, when using less-than-popular tools whose source code is not available, the hallucinations are unbearable and a complete waste of time.
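For what it's worth, the prompt files I mean are nothing fancy; something along these lines works (the file name depends on the tool, and every path and rule here is a placeholder for your project's real conventions):

    CLAUDE.md / .github/copilot-instructions.md (name varies by tool):
    - HTTP: use the wrapper in src/net/; do not add new HTTP libraries.
    - Serialization: use the project's existing JSON helpers; never add a second JSON library.
    - UI: colors and spacing come from the design tokens file; no hard-coded hex values.
    - If an API doesn't exist in the pinned SDK version, say so instead of inventing one.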
The lesson to be learned is that LLMs depend heavily on their training set, and in a simplistic way they at best only interpolate between the data they were fed. If an LLM is not trained on a corpus covering a specific domain, then you can't expect usable results from it.
This brings up some unintended consequences. Companies like Microsoft will be able to create incentives to use their tech stack by training their LLMs with a very thorough and complete corpus on how to use their technologies. If Copilot does miracles outputting .NET whereas Java is unusable, developers have one more reason to adopt .NET to lower their cost of delivering and maintaining software.
Pretty ironic you and the GP talk about lines of code.
From the article:
> Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.
> “It’s a silly metric,” he said, because while organizations can use AI to write “infinitely more lines of code” it could be bad code.
> “Often times fewer lines of code is way better than more lines of code,” he observed. “So I'm never really sure why that's the exciting metric that people like to brag about.”
I'm with Garman here. There's no clean metric for how productive someone is when writing code. At best, this metric is naive, but usually it is just idiotic.
Bureaucrats love LoC, commits, and/or Jira tickets because they are easy to measure, but here's the truth: to measure the quality of code you have to be capable of producing said code at (approximately) said quality or better. Data isn't just "data" that you can treat as a black box and throw into algorithms. Data requires interpretation and there's no "one size fits all" solution. Data is nothing without its context. It is always biased, and if you avoid nuance you'll quickly convince yourself of falsehoods. Even with expertise it is easy to convince yourself of falsehoods. Without expertise it is hopeless. Just go look at Reddit or any corner of the internet where there are armchair experts confidently talking about things they know nothing about. It is always void of nuance and vastly oversimplified. But humans love simplicity. We need to recognize our own biases.
> Pretty ironic you and the GP talk about lines of code.
I was responding specifically to the comment I replied to, not the article, and mentioning LoC as a specific example of things that don't make sense to compare.
Made me think of a post from a few days ago where Pournelle's Iron Law of Bureaucracy was mentioned[0]. I think vibe coders are the second group: "dedicated to the organization itself" as opposed to "devoted to the goals of the organization". They frame it as "getting things done" but really, who is not trying to get things done? It's about what is getting done and what degree is considered "good enough."
On the other hand, fault-intolerant codebases are also often highly defined and almost always have rigorous automated tests already, which are two contexts where coding agents specifically excel.
We really need to add some kind of risk to people making these claims to make it more interesting. I listened to the type of advice you're giving here on more occasions than I can remember, at least once for every major revision of every major LLM and always walked away frustrated because it hindered me more than it helped.
> This is actually amazing now, just use [insert ChatGPT, GPT-4, 4.5, 5, o1, o3, Deepseek, Claude 3.5, 3.9, Gemini 1, 1.5, 2, ...] it's completely different from Model(n-1) you've tried.
I'm not some mythical 140 IQ 10x developer and my work isn't exceptional so this shouldn't happen.
The dark secret no one from the big providers wants to admit is that Claude is the only viable coding model. Everything else descends into a mess of verbose spaghetti full of hallucinations pretty quickly. Claude is head and shoulders above the rest and it isn't even remotely close, regardless of what any benchmark says.
Tried about four others, and while to some extent I always marveled at the capabilities of the latest and greatest, I had to concede they didn't make me faster. I think Claude does.
That poster isn't comparing models, he's comparing Claude Code to Cline (two agentic coding tools), both using Claude Sonnet 4. I was pretty much in the same boat all year as well; using Cline heavily at work ($1k+/month token spend) and I was sold on it over Claude Code, although I've just recently made the switch, as Claude Code has a VSCode extension now. Whichever agentic tooling you use (Cline, CC, Cursor, Aider, etc.) is still a matter of debate, but the underlying model (Sonnet/Opus) seems to be unanimously agreed on as being in a league of its own, and has been since 3.5 released last year.
I've been working on macOS and Windows drivers. Can't help but disagree.
Because of the absolute dearth of high-quality open-source driver code and the huge proliferation of absolutely bottom-barrel general-purpose C and C++, the result is... Not good.
On the other hand, I asked Claude to convert an existing, short-ish Bash script to idiomatic PowerShell with proper cmdlet-style argument parsing, and it returned a decent result that I barely had to modify or iterate on. I was quite impressed.
Garbage in, garbage out. I'm not altogether dismissive of AI and LLMs but it is really necessary to know where and what their limits are.
I found the opposite - I am able to get a 50% improvement in productivity for day-to-day coding (mix of backend, frontend), mostly in JavaScript but it has helped in other languages too. You have to review carefully though - and have extremely well-written test cases if you're going to blindly generate or replace existing code.
> In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.
This is a false premise. LLMs themselves don't force you to introduce breaking changes into your code.
In fact, the inception of coding agents was lauded as a major improvement to the developer experience because they allow the LLMs themselves to automatically react to feedback from test suites, thus speeding up how code was implemented while preventing regressions.
If tweaking your code can result in breaking a million things, this is a problem with your code and how you worked to make it resilient. LLMs are only able to introduce regressions if your automated tests are unable to catch any of those million things breaking. If this is the case then your problems are far greater than LLMs existing, and at best LLMs only point out the elephant in the room.
You don't have to be a star programmer, fame isn't the only form of leverage.
If you're in demand, and you're good at what you do, the road is paved for you. Top companies have already set the bar.
Them: we offer 250k-350k
Me: I don't consider anything below 500
The answers I get vary. Some tell me to politely fvck off. Some tell me they need to discuss with leadership. Some just go for it because they know how hard it is to fill that role.
The justification is simple: why would I take a job with you if I can land an HFT gig at twice the pay?
Many people working in these companies are as rank-and-file as it gets. Nonexistent public profile, no open source contributions, no flashy portfolio.
That's fair! I had interpreted OP's post more along the lines of, well, workers who aren't that in-demand or that high up the pay scale - I think it's fair to say 300k salaries and HFT gigs are out of consideration for most of us.
The proliferation of AI and LLMs has completely obliterated leverage.
Don’t want the job for the salary offered? Too bad. Hire a cheaper person armed to the teeth with the best LLM coding tools and move on.
Unless you’re coming in with significant clout that will move revenue and relations to bridge partnerships across other companies, you will not be worth the extra $250k on skills alone.
This is situational - as a strong engineer I have more leverage than ever to demand ever more eye watering compensation.
Weaker engineers and junior engineers are in more the situation you describe. This is tough and I feel for these folks but it is possible for many people to become stronger engineers if they choose to put in the work.
I'd encourage you to not take on a feeling of hopelessness here.
I agree with you to an extent. I was recently laid off (start up out of money) along with the rest of the engineering team. So I’m going through loops. The hardest part of the process for me is getting past the initial recruiter. Once I get past whatever “wall” they’ve put in place, I do pretty well. So I wonder how many recruiters now are using AI to screen applicants? Given the hundreds (thousands?) of applications, probably some of them.
Anecdotally - I’ve been through 2 technical screens where they asked me to use an AI prompt to solve a problem. For one, it was a pretty trivial problem (running an hmac over some values) so I just solved it directly. They asked why I didn’t use AI, and I told them honestly that I’ve done something like this hundreds of times, why would I use a prompt for it? Didn’t make it to the next round. Now it’s totally possible that I didn’t make it because of something else. And maybe those were outliers, but it seems like I’ll need to brush up on prompting…
That's what bottom-tier companies always tell me. Last decade it used to be outsourcing. I was getting low balled left and right with phrases like "I can pay a guy from Asia a lot less for the same work".
If you want equalized poverty, feel free to move to the EU. Say goodbye to owning a nice house, or building any kind of wealth - that's reserved for the old money class.
In the US, software is one of the few remaining ways to achieve the American dream. I came to this country to work hard and earn money.
I live in Boston where I make double(-ish) the household median income ($80k to $100k). For individual median incomes, I make $140k more. I'm able to save over half my monthly income and it's still not enough. I absolutely can't imagine living in this city on anything less, and I don't exactly live a life of extravagance here.
Privacy rights in the EU are being eroded as we speak. Unless people there get off their high horse, they'll succumb to the same level of authoritarianism and surveillance as in the states.
Also, sorry, but the idea that EU countries are in any position to build a serious hyperscaler is pure fiction. Growth, funding, risk, innovation - those are alien concepts to European entrepreneurs.
> Unless people there get off their high horse, they'll succumb to the same level of authoritarianism and surveillance as in the states.
We are very far away from the status quo in the US. Some countries are overtaken by extreme right, which is worrying. But it's nothing like the US where the entire country went to shit overnight.
Also, we don't have this singular president entity which has so much power that everything can be turned upside down in just one election. We have a president but she has very little power and influence compared to the way it is in the US.
Also, our multi-party system prevents the two-party zero-sum setup that is present in the US, where parties resort to ever more extreme methods to make the other side look bad (because a loss for one is a win for the other). For us it doesn't work that way.
> Privacy rights in the EU are being eroded as we speak. Unless people there get off their high horse, they'll succumb to the same level of authoritarianism and surveillance as in the states.
Last time I checked, not even the US is proposing to install AI agents on everybody's phone to surveil your encrypted messages (look up Chat Control; the last meeting was not even 2 months ago). Soon people will start looking for non-EU VPNs to install Signal (the CEO said they would leave the EU if the law passed).
> Also, sorry, but the idea that EU countries are in any position to build a serious hyperscaler is pure fiction. Growth, funding, risk, innovation - those are alien concepts to European entrepreneurs.
Disagree, some of the EU clouds are already well on their way.
Yes ChatControl is a worry indeed. But we have been successfully fighting it for a long time. And it is only pushed by a small number of politicians.
Surprisingly enough the drive to do this does not come from within Europe but from the US (Ashton Kutcher and "Thorn"). They have managed to pocket some influential politicians.
Not sure why you're getting downvoted. This is factually true:
Chat control: EU Ombudsman criticises revolving door between Europol and chat control tech lobbyist Thorn
> Breyer welcomes the outcome: “When a former Europol employee sells their internal knowledge and contacts for the purpose of lobbying personally known EU Commission staff, this is exactly what must be prevented. Since the revelation of ‘Chatcontrol-Gate,’ we know that the EU’s chat control proposal is ultimately a product of lobbying by an international surveillance-industrial complex. To ensure this never happens again, the surveillance lobbying swamp must be drained.”
> Disagree, some of the EU clouds are already well on their way.
Feel free to drop a few links. Digital EU projects tend to be absolute disasters run by bureaucrats. They always result in some 100 page long document, talking about planning a plan for creating a planning framework. Also throw in the words sovereign and digital transformation, for maximum corpo-political bullshit.
Yes, but that's only one tiny aspect of GDPR. Unfortunately this is an aspect where they caved in to corporate lobbying; they should have just mandated obeying the "do not track" flag (or a similar thing). That browsers set it by default is not a problem, because the whole idea of GDPR is that tracking should be opt-in, not opt-out. But really this is a tiny part of GDPR. It is not even just about the web. And as annoying as the cookie walls are, they also make the user more aware (I mean, why do you want permission to share my data with 572 "trusted partners"??). It also enforced some concepts that should already have been standard, like the purpose principle, explicit permission ("opt-in"), etc.
It has really made companies much more aware of data handling. At work we have data protection officers now, privacy advocates; every app we onboard has to be reviewed in terms of what the data is used for, where it ends up, whether we have agreements with them about what it's used for, etc. This is really great, because before we had pretty much nothing like that. It was just move fast and break things, including customers' privacy. And our company is one that doesn't make any money from tracking our customers, so it wasn't really targeted at us, but it still drove so much improvement.
I think it will become much better now that we are disconnecting europe from US services. The main reason that tracking-informed ads are so much more valuable than context-informed ads, is that Google and Meta etc are promoting them. They control the auctions, and tracking is their moat. Nobody has such pervasive tracking networks as them.
The disconnection from these services could really be the trigger for an EU-based context-informed advertising service.
Counterpoint: not everyone needs a hyperscaler, especially with open source like Kubernetes out there. Of course, the more experience companies have managing it, the better the service becomes. But I don't see why it can't happen within the EU.
I do understand that, my point was that the pieces needed to provide it as a managed service are much easier to come up with in comparison to what AWS had to do with Fargate.
Dude, the EU is home to around 500 million people (correct me if I'm wrong). The EU definitely needs a hyperscaler. Every single one of these people will need a digital identity along with their compute rights.
This has been done before, and America comes out consistently on top. Even the median purchasing power parity (PPP) in the US is frequently ranked highest in the world. The majority of American households in the poorest US states are doing better than the majority of Europeans.
This gets amplified if you're a highly sought-after professional. Top senior engineers are getting paid $500k-$1M in the US. These are figures you'll never find in Europe or Asia, not even close. Add on top the rising costs of living and 45% top tax brackets (France, UK, Germany, Spain), and the US is incomparable.
Yes, if you're a professional in high demand, you can live a great life in the US. But what does the quality of life of everybody below the median look like?
> This gets amplified if you're a highly sought after professional. Top senior engineers are getting paid $500k-$1M in the US. These are figures you'll never find in Europe or Asia, not even close.
But what does that buy you really, in a high cost of living area? What if you ever want to do something else? What if demands for your profession change? How expensive is it to raise children?
I have first hand experience of both the US and Europe, and while nominal salaries are (much) lower in the latter, subjective feelings of safety and quality of life seem much more comparable than the numbers might make you believe.
That said, the US system of highly rewarding relatively few people at the top certainly motivates the masses like few others: Most people are bad at statistics and like playing the lottery.
> But what does the quality of life of everybody below the median look like?
This discussion is about whether or not the US is a top brain drain destination. That means we're talking about exceptionally skilled or promising scientists/engineers/doctors at the top of their field. I'm not claiming life is great for everyone in America. I agree it isn't.
> But what does that buy you really, in a high cost of living area?
Look at the PPP in CA. It buys you a lot. People in HCOL cities that manage their finances well can become multi-millionaires in their early 30s. They will already be able to retire in 99% of the world, with enough savings to lead incredibly comfortable and luxurious lives. Meanwhile, people in Europe have on average lower assets and savings, low levels of home ownership, and lower likelihood to retire early at a comparable standard of living. Not to mention the pension crisis many of them are or will be facing in the near future.
Have you lived in Denmark, Japan, China, Netherlands and some other countries in the past 10-15 years? I really don’t think you weigh in people’s personal preferences and general quality of life into your equations.
There is a very big reason why there are no longer large swaths of immigrants from European and some Asian countries flocking to the US. Yes there are some, but the times when it was objectively much better to live and grow in the States are in the past. Money is really not the only thing people care about, but that's hard to understand for people for whom money is the only thing they care about.
It's not just money. There's another aspect where the US edges out the competition: the fact that everyone is treated as an American there, without racism/xenophobia. This is a huge benefit over impenetrable countries like Denmark/Netherlands.
Half of your country actively rallies around the idea of sending back Chinese, Indians, Mexicans, etc. I think you just live in a bubble or educated circles where that's not tolerated, but that's not the experience of every single immigrant.
Immigrants from the Global South definitely have it worse in his examples, Denmark and the Netherlands, where even liberal parties have turned against immigration. The Republicans want to deport illegal immigrants, starting with those who have criminal records; they haven't pursued anything as perverse as Denmark's "Ghetto Law". PVV's platform in the Netherlands is self-explanatory.
I'm saying this as an immigrant - Muslim name, but white.
You've no idea of the blatant ongoing racism in Europe that's so normalised it's basically a non-issue. Like Zwarte Piet in the Netherlands.
The xenophobia in EU is head and shoulders above the US. I've lived in both places for years. In Germany for example you will never be considered German even if you were born in the country.
Money buys you the freedom to live your life anywhere you want and do whatever you want. Do you think grinding away for low pay until your mid 60s, only to end up facing a collapsing pension system in your final years with little savings, is the best way to spend your time on earth?
Also how can a person that hasn't experienced the economic freedom the US provides to top talent accurately judge if their country of choice is better? I would like to see the statistics on SWEs that got wealthy in America, that regret moving to the states and would prefer to revert all those years.
(I've lived in Europe for most of my life by the way. Lots of good places to retire, but mostly poor choices for spending my productive years there.)
My friend, not everyone makes $150k/year. Like yes, we can do it, and choose our freedom, because of the industry we work in. But you might be very disconnected from an average person's life. Average or poorer person in the US does not get to choose where they retire either. I promise, 70-year-old retired ojiisans over here in Tokyo don't think their life would've been better if they had lived in Oklahoma.
You guys think everyone wants the same as you do, but it really isn’t like that.
> Average or poorer person in the US does not get to choose where they retire either.
I've already addressed this in another thread. We're talking about brain drain. That means we are talking about highly skilled professionals in in-demand fields who can easily get this level of pay.
A skilled senior SWE can very realistically demand $300k+ comp in US tech companies. In the startup space $150k + equity has become table stakes, and their hiring bar is often significantly lower. These are not anomalies, tech companies employ hundreds of thousands of engineers.
> You guys think everyone wants the same as you do, but it really isn’t like that.
Ok, so tell me, what are things people want? Because people in the aforementioned circles can retire in their 30s, and spend the rest of their lives traveling the world, taking care of their family, and pursuing their passions without worries. Is that somehow controversial?
For an entire career and beyond (to support themselves after retirement)? For their spouse and children too?
If you're single and flexible to move away if any part of that calculation changes it's definitely a great deal, but the more attachments you have in life, the worse the deal becomes arguably.
> Because people in the aforementioned circles can retire in their 30s, and spend the rest of their lives traveling the world, taking care of their family, and pursuing their passions without worries. Is that somehow controversial?
I think you have an unrealistically rosy view of the average outcome here. I work in this field, and people "retiring in their 30s and traveling the world while providing for their family and pursuing their passions" is still an extreme outlier.
> Ok, so tell me, what are things people want?
Maybe I'm an outlier here too, but personally, I value long-term societal stability and safety quite highly, as I don't have many illusions about being able to buy my way out of certain kinds of problems caused by a large and increasing rift between people of various income levels.
> The majority of American households in the poorest US states are doing better than the majority of Europeans.
I'm so tired of this trope on HN. It comes up over and over again, but never considers non-economic quality of life issues. Take for example public schools: They are awful in poor US states, and good-to-excellent in most highly developed European nations.
I swear, none of these people have been to Louisiana, West Virginia, or any of the empty-looking cities that used to be lively 50 years ago. And I’m saying this as a person who isn’t American, but wanted to check it out myself before I made assumptions about the world.
Also, GDP per capita is a terrible measure of standard of living: $40k in Louisiana isn't that much when you need to pay for most of your health care, education, etc. yourself.
The average life in these countries is undeniably better, even if poorer on paper.
I agree. One other thing that the US has that no other country has is freedom of speech.
My entire family was killed in my previous country for daring to speak up against the leader at the time. It is something that has shaken me to my core. Now, even to this day, I cannot find another country besides the US that not only respects freedom of speech but encourages it among its residents. I will not move to another country no matter how drastic it gets.
Aren't colleges and universities liable for any "illegal" protest on their grounds, since yesterday? Whether a protest is illegal or not will be decided by the local politburo office.
I'm not convinced that that's sustainable though, and I think it might be an artifact of the dollar reserve currency status etc., because US firms cost more for a given yearly profit. Just look at Boeing vs Airbus.
I also lived better in Sweden as a PhD student than I would have if I had gone to Washington and taken an H1B job for $100k. I think the cutoff would be somewhere just above $120k. Maybe at $130k-140k I would be able to live in Washington approximately as well as I could live in Sweden as a PhD student, but it would be substantially more stressful. Maybe $130-140k isn't much to long-term Googlers, but I think this is closer to the salaries that people actually pay for H1Bs than those $500k+ salaries.
The built environment in the US doesn't really correspond to the nominal prices, so in a way, America is only interesting economically if you're planning to go back home.