I decided to see if I could get an old Perl and C codebase running via WebAssembly in the browser by having Claude brute-force its way through figuring out how to compile the various components to WASM. Details here: https://simonwillison.net/2025/Oct/22/sloccount-in-webassemb...
I'm not saying it could have created your exact example (I doubt that it could), but you may be underestimating how promising these tools are getting for problems of that shape.
I do not doubt that LLMs might some day be able to generate something like my work in stagex, but it would only be because someone trained one on my work and that of other people who insist on solving new problems by hand.
Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for, either with my privacy, my money, or both... and they could take it away at any time. I do not believe in or use centralized corpotech. Centralized power is always abused eventually. Also, is that regurgitated code under an incompatible license? Who knows.
Also, again, I would rob myself of the experience, neural pathway growth, and rote memory that come from doing things myself. I need to lift my own weights to build physical strength, just as I need to solve my own puzzles to build the patience and memory for obscure details that make me better at auditing other people's code and spotting security bugs other humans and machines miss.
I know when I can get away with LTO, and when I cannot, without causing issues with determinism, and how to track down over-linking and under-linking. Experience like that you only get by experimenting and compiling shit hundreds of times, and that is why stagex is the first Linux distro to ever hit 100% determinism.
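To give a feel for it, a toy version of that kind of determinism check looks something like this; the flags and file names are illustrative only, not the actual stagex pipeline, and the idea is simply to build the same source twice and compare digests:

    import hashlib, subprocess

    SRC = "tool.c"
    FLAGS = ["-O2", "-flto", "-ffile-prefix-map=/home/user/src=."]  # illustrative flags only

    def build(out):
        # Compile and return a digest of the resulting binary.
        subprocess.run(["gcc", *FLAGS, SRC, "-o", out], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Matching digests across repeated builds (and across machines) suggest a
    # deterministic build; a mismatch means a path, timestamp, or other
    # non-reproducible input leaked into the binary.
    print(build("tool.1") == build("tool.2"))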
Circling back, no, I am not worried about being unemployable because I do not use LLMs.
And hey, if I am totally wrong and LLMs can one day create perfectly secure projects better than I can, and spot security bugs better than I can, and I am unemployable, then I will go be an artist or something, because there are always people out there who appreciate hard work done by humans, by hand... and because that is how I am wired.
> Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for either with my privacy, my money, or both... and they could take it away at any time.
Have you been following the developments in open source / open weight models you can run on your own hardware?
They're getting pretty good now, especially the ones coming out of China: the GLM, Qwen and DeepSeek models are all excellent. Mistral's open weight models (from France) are good too, as are the OpenAI gpt-oss models.
No privacy or money cost involved in running those.
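As a rough sketch of what running one on your own hardware looks like, assuming llama-cpp-python and a locally downloaded GGUF quantization of, say, a Qwen coder model (the file path below is just a placeholder):

    from llama_cpp import Llama

    # Everything here runs offline on local hardware: no API key, no network calls.
    llm = Llama(
        model_path="./models/qwen2.5-coder-7b-instruct-q4_k_m.gguf",  # placeholder path
        n_ctx=4096,
        verbose=False,
    )

    out = llm("Write a Makefile rule that compiles hello.c with -O2.", max_tokens=200)
    print(out["choices"][0]["text"])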
I get your concern about learning more if you do everything yourself. All I can say there is that the rate and depth of technical topics I'm learning have expanded with my LLM usage, because I'm able to take on a much wider range of technical projects, all of which teach me new things.
You're not alone in this - there are many experienced developers who are choosing not to engage with this new family of technology. I've been thinking of it as similar to veganism - there are plenty of rational reasons to embrace a vegan lifestyle and I respect people who do it, but I've made different choices myself.
Not only have I been following a lot of the open models, but you may find it surprising that I have extensively tested some of them and coerced them into generating deterministic responses across different machines, as a method to prove responses are not tampered with. I have also been developing ways to run them in remotely attestable secure enclaves so that people who use them for sensitive applications can have provable privacy with end-to-end encryption.
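The determinism side of that is conceptually simple, even if making it hold across machines is not: pin the weights, decode greedily, and hash the output so two parties can compare digests. A toy sketch with a placeholder model file (real deployments also have to pin the inference kernels and hardware behaviour to avoid floating-point divergence):

    import hashlib
    from llama_cpp import Llama  # assumes identical GGUF weights and library build on every machine

    llm = Llama(model_path="./models/qwen2.5-7b-instruct-q4_k_m.gguf",  # placeholder
                seed=0, n_ctx=2048, verbose=False)

    out = llm("Summarize the attached audit finding in one sentence.",
              max_tokens=64, temperature=0.0)  # greedy decoding: no sampling randomness
    text = out["choices"][0]["text"]

    # Publish or compare this digest across machines (or from inside an enclave)
    # to show the response was not altered in transit.
    print(hashlib.sha256(text.encode()).hexdigest())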
I will admit that I find deploying and hacking on the tech itself super interesting. Hell, I founded a machine learning company and got a paper published with the AAAI for my cheap bulk training-data acquisition techniques back in 2012, before most people cared about this stuff.
I even think there are a ton of great and exciting use cases for this tech, like identifying cancer in large photographic datasets. I have a lot of hope for medical applications in particular.
All that said, I just don't think LLMs are remotely competitive or useful at the type of threat modeling, security engineering, and auditing work I do on average. They are the wrong tool for my job, which requires a level of actual reasoning that LLMs are nowhere near capable of right now, or are likely to be any time soon. Maybe they could help with a script here and there, which might save me a few hours a month, but for 95%+ of it they would just waste my time regurgitating the same industry-standard bad advice and approaches that I am trying to change, while making me duller at writing code by hand when I need to.
By contrast, though, I would not fire someone for using LLMs for learning or inspiration, as long as they consistently prove they fully understand and can explain every line of every PR they submit, can pair program or usefully contribute to engineering discussions without LLMs, and maintain a level of quality competitive with the rest of the team. Not everyone has to make the same tool choices I do, as long as they can hold their own on a team with me and are not dumb enough to regurgitate AI slop they don't understand.
It is amusing you use vegans as an example. I am not a vegan, but I often describe myself as something of a digital vegan who is very, very selective about what tools I use and what I expect from them, which is also why I don't use a smartphone or GPS.
You're criticizing me for directly crediting the original here. That's the correct and ethical thing to do!
Honestly, I've seen the occasional bad faith argument from people with a passionate dislike of AI tooling but this one is pretty extreme even by those standards.
I hope you don't ever use open source libraries in your own work.
Actually, my criticism was the result of my own misunderstanding of what you were claiming. My apologies for that, although I'm still unlikely to use these tools based on this example, since my own counterexamples have shown me that it's often as much or more work to get there via prompting than it is to simply do the thinking myself. Have a good day.
Originally I tried to get it working by loading the code directly, but as far as I can tell there's no stable CDN build of that, so I had to vendor it instead.
Here are notes it wrote for me on the compilation process it figured out: https://github.com/simonw/tools/blob/473e89edfebc27781b43443...