
It's been blowing my mind reading HN the past year or so and seeing so many comments from programmers who are excited to not have to write code. It's depressing.


There are three takes that I think are not depressing:

* Being excited to be able to write the pieces of code they want, and not others. When you sit down to write code, you do not do everything from scratch, you lean on libraries, compilers, etc. Take the most annoying boilerplate bit of code you have to write now - would you be happy if a new language/framework popped up that eliminated it?

* Being excited to be able to solve more problems, because the code is at times a means to an end. I don't find writing CSS particularly fun, but I threw together a tool for making checklists for my kids in very little time using LLMs, and it handled all of the CSS for printing vs. on-screen. I'm interested in solving an optimisation issue with testing right now, but not that interested in writing code to analyse test-case perf changes, so I had the latter written for me in very little time and it's great. It wasn't really a choice of me or machine; I don't really have the time to focus on those tasks.

* Being excited that others can get the outcomes I've been able to get for at least some problems, without having to learn how to code.

As is tradition, to torture a car analogy, I could be excited for a car that autonomously drives me to the shops despite loving racing rally cars.


Those are all good outcomes, up to a point. But if this stuff works TOO well, most or maybe all of us will have to start looking at other career options. Whatever autonomy you think you have in deciding what the AI does, that can ultimately be trained as well, and it will be, the more people use it.

I personally don't like it when others who don't know how to code are able to get results using AI. I spent many years of my life and a small fortune learning scarce skills that everyone swore would be the last to ever be automated. Now, in a cruel twist of fate, those skills are being automated and there is seemingly no worthwhile job that can't be automated given enough investment. I am hopeful because the AI still has a long way to go, but even with the improvements it currently has, it might ultimately destroy the tech industry. I'm hoping that Say's Law proves true in this case, but even before the AI I was skeptical that we would find work for all the people trying to get into the software industry.


I get the feeling, but I can't help but also say it sounds like how I imagine professional portrait artists felt about the photograph. Or scribes about audio recordings. Or any other occupation that similarly got more or less replaced by a technological advance.

Those jobs still exist, but by and large they are either very niche or work using that tech in some way.

It is not wrong to feel down about the risk of so much time, training, etc. rapidly losing value. But it's also true that change isn't necessarily bad, and sometimes that includes adjusting how we use our skills and/or developing new ones. Nobody gets to be elite forever; they will be replaced and become common or unneeded eventually. So it's probably more helpful, for yourself and those that may want to rely on you, to be forward-thinking rather than complaining. That doesn't mean you have to become pro-AI, but it may be helpful to be pragmatic and work where it can't.

As to work supply... I figure that will always be a problem as long as money is the main point of work. If people could just work where they specialize without so much concern for issues like not starving, maybe it would be different. I dunno.


> I personally don't like it when others who don't know how to code are able to get results using AI.

Sounds like for many programmers AI is the new Visual Basic 6 :-P


It's worse than that lol. At least with VB 6 and similar scripting languages, there is still code getting written. Now we have complete morons who think they're software developers because they got some AI to shit out an app for them. This is going to affect how people view the profession of software engineering all around.


Except in this case you won't be able to afford going to the shops anymore. Or even know whether the shops will still be around. What use is an autonomous car if you can't use it?


I suspect, rather strongly, that what really specifically wears programmers down is boilerplate.

AI is addressing that problem extremely well, but by putting up with it rather than actually solving it.

I don't want the boilerplate to be necessary in the first place.


Or, for me, yak shaving. I start a project with enthusiasm and then 8 hours later I'm debugging an nginx config file or something rather than working on the core project. AI gets a lot of that out of the way if you let it, and you can at least let it grind on that stuff while you think about other things.


For me, the yak shaving is the part where I get the next project idea...


It is fun. It takes some skill to organize a pipeline to generate code that would be tedious to write and maintain otherwise. You are still writing stuff to instruct the computer, but now you have something taking natural language instructions and generating code and code test assets.

There might have been people who were happy to write assembly who got bummed about compilers. This AI stuff just feels like a new way to write code.


I've heard this take a few times, but I'm not convinced natural language is the new way to write code (beyond small projects).

Inevitably the AI will write things in ways you don't intend. So now you have to prompt it to change and hope it gets it right. Oh, it didn't. Prompt it again and maybe this time it will work. And so on.

It's so good at a lot of things, but writing out whole features or apps, in my experience, seems good at first but then turns out to be a time sink of praying it will figure it out on the next prompt.

Maybe it's a skill issue for me, but I've gotten the most efficiency out of having it review code, pair with it on ideas and problems, etc. rather than actually writing the majority of code.


Until you've actually done it yourself, it will probably sound like vaporware. The only question is how much energy you are willing to spend, in terms of actual energy (because you are making more calls to the AI) and, yes, setting up your development pipeline with N LLM calls.

It is really like micro-managing a very junior, very forgetful dev who can read really fast (and who mostly remembers what they read, for a few minutes at least; they actually know more about something than you do if they have a manual about it on hand). Of course, if it's just writing the code once, you don't bother with the junior dev and write the code yourself. But if you want long-term efficiency, you put the time into your team (and the team here is the AI).


I think that the main misunderstanding is that we used to think programming = coding, but this is not the case. LLMs allow people to use natural language as a programming language, but you still need to program. As with every programming language, it requires you to learn how to use it.

Not everyone needs to be excited about LLMs, in the same way that C++ developers don't need to be excited about Python.


Do you really think the creative or intellectual element of programming is the tapping of keys? I don't understand this at all. I enjoy solving problems and creating elegant solutions. I'm spending less time tapping keys and more time engineering solutions. If tapping keys is the most fun part for you, then that's fine! But let's not pretend THAT is the critical part of software engineering. Not to mention, it's not all or nothing. The options aren't writing code or not writing code. You can selectively not write any boring code and write 100% of the bits you find interesting or care about. If an LLM is failing to deliver what is in my mind's eye, then I simply step in and make sure the code is quality... I'm doing more and better software engineering; that's why I'm happy, that's the bit that scratches my itch.


I hate writing code, but love debugging. LLMs have been a godsend for banging out boilerplate and getting things 95% of the way there. Now I spend most of my time on the hard stuff (debugging, refactoring), while building things that would have taken weeks in days. It’s honestly made the act of building software more enjoyable and rewarding.


Some carpenters like to make cabinets. Some just like to hammer nails.


Perhaps consider that I still think coding by prompting is just another layer of abstraction on top of coding.

In my mind, writing the prompt that generates the code is somewhat analogous to writing the code that generates the assembly. (Albeit more stochastically, the way psychology research might be analogous to biochemistry research.)

Different experts are still required at different layers of abstraction, though. I don't find it depressing when people show preference for working at different levels of complexity / tooling, nor excitement about the emergence of new tools that can enable your creativity to build, automate, and research. I think scorn in any direction is vapid.


One important reason people like to write code is that it has well-defined semantics, allowing one to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.


It completely depends on the way you prompt the model. Nothing prevents you from telling it exactly what you want, down to specifying the files and lines to focus on. In my experience, anything other than that is a recipe for failure in sufficiently complex projects.


Several comments can be made here: (1) You only control what the LLM generates to the extent that you specify precisely what it should generate. You cannot reason about what it will generate for what you don't specify. (2) Even for what you specify precisely, you don't actually have full control, because the LLM is not reliable in a way you can reason about. (3) The more you (have to) specify precisely what it should generate, the less benefit using the LLM has. After all, regular coding is just specifying everything precisely.

The upshot is, you have to review everything the LLM generates, because you can't predict the qualities or failures of its output. (You cannot reason in advance about what qualities and failures it definitely will or will not exhibit.) This is different from, say, using a compiler, whose output you generally don't have to review, and whose input-to-output relation you can reason about with precision.

Note: I'm not saying that using an LLM for coding is not workable. I'm saying that it lacks what people generally like about regular coding, namely the ability to reason with absolute precision about the relation between the input and the behavior of the output.


You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself!


>> One important reason people like to write code is that it has well-defined semantics, allowing to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.

> You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself!

You missed the key point. You can't predict an LLM's "outcome with high precision."

Looking at the output and evaluating it after the fact (like you describe) is an entirely different thing.


For many things you can, though. If I ask an LLM to create an alert in Terraform that triggers when 10% of requests fail over a 5-minute period and sends an email to some address, with the HTML in the email looking a certain way, it will do exactly the same as if I had looked at the documentation and figured out all of the fields one by one. That's just how it works when there's one obvious way to do things. I know software devs love to romanticize our jobs, but I don't know a single dev who writes 90% meaningful code. There's always boilerplate. There's always fussing with syntax you're not quite familiar with. And I'm happy to have an AI do it.
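
To make that concrete, the alert condition itself is a tiny predicate. A minimal sketch with made-up names; the actual Terraform resource and fields depend on your monitoring provider:

    # Hypothetical illustration of the rule described above, not a real provider API.
    def should_alert(window: list[dict]) -> bool:
        """window: the last 5 minutes of request records, each with an 'ok' flag."""
        if not window:
            return False
        failed = sum(1 for r in window if not r["ok"])
        return failed / len(window) >= 0.10  # alert at a 10% failure rate

All the surrounding Terraform is just plumbing around that one condition, which is exactly the kind of look-up-the-fields boilerplate I'm happy to delegate.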


I think you're still missing the point. This cousin comment does a decent job of explaining it: https://news.ycombinator.com/item?id=46231510


I don’t think I am. To me, it doesn’t have to be precise. The code is precise and I am precise. If it gets me what I want most of the time, I’m ok with having to catch it.


Benchmarks are super impressive, as usual. Interesting to note in table 3 of the paper (p. 15): DS-Speciale is 1st or 2nd in accuracy in all tests, but has much higher token output (50% more, or 3.5x vs Gemini 3 in the Codeforces test!).


The higher token output is not an accident. Certain kinds of logical reasoning problems are solved by longer thinking output. Thinking-chain output is usually kept to a reasonable length to limit latency and cost, but if pure benchmark performance is the goal, you can crank it up to the point of diminishing returns. DeepSeek being 30x cheaper than Gemini means there's little downside to maxing out the thinking time. It's been shown that you can scale this further by running many solution attempts in parallel with max thinking, then using a model to choose a final answer, so increasing reasoning performance by increasing inference compute has a pretty high ceiling.
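
A minimal sketch of that parallel-attempts idea; `generate` and `judge` here are placeholders for whatever model API you're using, not a real library:

    from concurrent.futures import ThreadPoolExecutor

    def generate(problem: str, thinking_budget: int) -> str:
        ...  # one full reasoning attempt with a large thinking-token budget

    def judge(problem: str, candidates: list[str]) -> str:
        ...  # a final model call that picks (or votes on) the best candidate

    def solve(problem: str, n: int = 8) -> str:
        # Run n independent max-thinking attempts in parallel, then select one.
        with ThreadPoolExecutor(max_workers=n) as pool:
            attempts = list(pool.map(
                lambda _: generate(problem, thinking_budget=64_000), range(n)))
        return judge(problem, attempts)

The cost scales roughly linearly with n, which is why a 30x cheaper model can afford a lot more of this than an expensive one.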


Honestly, I use ReVanced on my Android phone, which lets me disable all Shorts content from appearing. And in the browser, if I stick to the Subscriptions tab and maybe the sidebar on videos, there are no Shorts.


Before Reddit banned a ton of subreddits for having no moderation, I believe r/assistedsuicide was the place for discussions like this.


Yep, DDR5 prices have nearly doubled in less than 2 months. https://pcpartpicker.com/trends/price/memory/#ram.ddr5.5200....


Are those graphs specifically for the US? When I change the country in the top right, it doesn't seem like the graphs are changing, and considering they're in USD, I'm assuming it's US-only?

Is the same doubling happening world-wide or is this US-specific, I guess is my question?

Edit: one data point, I last bought 128GB of RAM in March 2024 for ~€536; similar ones right now cost ~€500, but maybe the time range is too long.


They are US-specific, yes. Thanks for asking that - I'll look into updating those graphs to show for the appropriate region/country depending on what country you've selected (on the top right of the page).


It just means RAM isn't sold in volume in your area, if you're not feeling it... [1]

[1]: https://kakaku.com/item/K0001448114/pricehistory/ (archive: https://archive.is/CHLs2)


I'm not finding any way of figuring out if that's true or not. I live near the second-largest city in Spain, and I kind of feel like people probably buy as much RAM here as elsewhere in the country/continent, but maybe that's incorrect. I've tried searching for graphs/statistics about the last 1-2 years in Spain but without much success.


I can add Spain price trends to PCPartPicker. Quick question though - do you want the price trends to cover just Spanish retailers, or should it trend the prices across all of the EU?


That would be incredible! Personally, I only buy the stuff I can find inside the country. But then some stuff I have to order from Germany/France/Italy when it's only available outside our borders.

So I don't know the right approach here, I can see value for both price trends for multiple reasons, unfortunately :) Wish I could give a simpler answer!


Ok that should be in - if you view the price trend pages now there are different currency grouping options (with EUR being one of them). Hope this helps!


Not parent, but logically the EU is a single market, so EU-wide prices are better, IMO.


In the UK I was looking at DDR4-3200 SODIMMs last week for some mini-PCs... and decided to pass after looking at the price graphs. The price has spiked in the last few weeks.


Which graph did you use for UK-specific prices? It seems the earlier graphs referenced here are US-only.


camelcamelcamel is the best for Amazon items: choose a stick, look at the graph.

There is a bit of annoyance as items come in and out of stock (i.e. out of stock often means an inaccurate price), so it's often better to find a product on Amazon and then look it up here.

8GB SODIMM stick https://uk.camelcamelcamel.com/product/B0C7Z4HJ8L?context=se...

regular 2x16GB pair https://uk.camelcamelcamel.com/product/B07RW6Z692?context=se...

I have a script watching some items on Overclockers and Crucial, including RAM. So for those, by "graph" I really meant "eyeballed an email search".
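
For anyone curious, that kind of watcher is only a few lines. A rough sketch; the URL and the price regex are placeholders, and a real version wants a proper HTML parser and polite request rates:

    import re
    import urllib.request

    WATCHES = {
        "https://example.com/some-ram-kit": 80.00,  # alert below this price (GBP)
    }

    for url, threshold in WATCHES.items():
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        m = re.search(r"£\s*([\d,]+\.\d{2})", html)  # naive price scrape
        if m:
            price = float(m.group(1).replace(",", ""))
            if price < threshold:
                print(f"price drop: {url} at £{price:.2f}")

Run it from cron and pipe the output to email, and you get the "eyeballed an email search" workflow.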


Maybe it’s time to sell my unused DDR4s! I was thinking it’d be not worth anything at this point


€536 seems expensive for March 2024, but either way, the price dropped a lot over the last one and a half years, only to surge in the last two months.


> the price dropped a lot over the last one and a half years, only to surge in the last two months.

Yeah, that was my hunch, that something like that was going on. Thanks for clarifying.


I was able to get a bundle deal from Microcenter here in SoCal with the Ryzen 9950x, motherboard and 32GB of RAM for $699. They have since removed the RAM from all the bundles.


While that's a sweet upgrade for people with an older desktop that can support a motherboard swap, it's worth pointing out that the RAM is probably insufficient.

RAM usage for a lot of workloads scales with core/thread count, and my general rule of thumb is that 1G/thread is not enough, 2G/thread will mostly work, and 4G/thread is probably too much, but your disk cache will be happy. Also, the same applies to VMs, so if you're hosting a VM and give it 16 threads, you probably want at least 16G for the VM. The 4G/thread then starts to look pretty reasonable.

Just building a lot of open-source projects with `make -j32`, you're going to be swapping if you only have 1G/thread. This rule becomes super noticeable when you're on a machine with 512G of RAM and 300+ threads, because your builds will OOM.
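
As a quick calculator, here's the rule of thumb in code (the GB/thread tiers are heuristics, not hard requirements):

    def ram_tiers(threads: int) -> str:
        # 1G/thread is tight, 2G/thread mostly works, 4G/thread leaves room for disk cache
        return (f"{threads} threads: {threads}G tight, "
                f"{threads * 2}G mostly ok, {threads * 4}G comfortable")

    print(ram_tiers(32))   # make -j32: 32G tight, 64G mostly ok, 128G comfortable
    print(ram_tiers(320))  # a 300+ thread box: 512G lands between tight and mostly ok

So 32GB on a 32-thread 9950X sits right at the "tight" tier, which is why the bundle RAM feels undersized.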


Ha, I was going to purchase a 96GB kit, but that's when I first noticed that RAM prices were getting crazy.


Yeah it's so annoying. Hitting back and stopping loading (or just hitting back) usually gets me to the page.


Yeah, it's pretty funny. I wonder if they prompted the LLM to put in as many emojis as possible:

Edit: forgot HN doesn't show emojis, so I'll just link to the paragraph: https://github.com/st-tech/ppf-contact-solver?tab=readme-ov-...

8 emojis in 2 sentences, lol


Rough math: 1000 kWh/mo/house, ~30 kWh/kg for hydrogen, so ~30 kg of H2 per month per house. I don't know how long winters would be, but 8 months is ~240 kg of hydrogen, which if compressed to 10 bar is roughly 300 cubic meters of storage. Kind of a lot of space. Compressed to 100 bar it's like 10 kg/m3, which sounds more manageable.
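
Sanity-checking that with the ideal gas law (real H2 at 100 bar is slightly less dense than ideal because of compressibility, so treat these as rough numbers):

    R, M_H2, T = 8.314, 0.002016, 273.0   # J/(mol*K), kg/mol, K (cold climate)
    kg_total = 8 * 1000 / 30              # 8 months x 1000 kWh/mo at ~30 kWh/kg -> ~267 kg

    for bar in (10, 100):
        rho = bar * 1e5 * M_H2 / (R * T)  # ideal-gas density, kg/m^3
        print(f"{bar} bar: {rho:.1f} kg/m^3 -> {kg_total / rho:.0f} m^3")
    # -> 10 bar: 0.9 kg/m^3 -> 300 m^3; 100 bar: 8.9 kg/m^3 -> 30 m^3

So ~300 m^3 at 10 bar vs ~30 m^3 at 100 bar, which matches the eyeball numbers above.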


Round trip efficiency on hydrogen is horrible. Local hydrogen production could make sense because importing fuel into remote off grid communities is extremely expensive.

Rather than building 10x as much solar in the north + battery systems + winter hydrogen storage etc., long-distance HVDC to cities and the surrounding grid just makes so much more sense. Even better, because the state is huge and the population is tiny, they can go nearly 100% hydro.

Where batteries could be useful is operating those long-distance power lines at nearly 100% 24/7, then load-shifting via batteries to match local demand.


> Round trip efficiency on hydrogen is horrible.

For seasonal storage, round trip efficiency is mostly irrelevant; the relevant metric is capex per unit of stored energy.


That could be generally true, but it’s not true in this specific instance.

A panel in Alaska only collects so much sunlight over the summer, before even considering efficiency losses from hydrogen. It would require buying panels that effectively get ~1 month of use over the entire year due to efficiency losses + a limited gathering period, and solar isn't that cheap.

So in Alaska you're just better off only using panels directly in the summer, which at least provide several months of electricity per year. In, say, Texas, on the other hand, you get energy from a panel year round, so a marginal panel purchased to generate hydrogen at say 30% round-trip efficiency gets 30% * 9 months + say 70% of average production for the 3 winter months = 4.8 months of winter electricity per year. Of course you also need to pay for the hydrogen-generating machine and the hydrogen-burning device, but that's not necessarily problematic.
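
Spelling that arithmetic out (the 30% round-trip efficiency is just the figure the sum assumes; real H2 round-trip numbers vary a lot):

    rte = 0.30            # assumed hydrogen round-trip efficiency
    winter_output = 0.70  # winter production as a fraction of average
    effective = rte * 9 + winter_output * 3   # 2.7 + 2.1
    print(effective)      # -> 4.8 effective months per marginal panel

In Alaska the 9-month term collapses toward zero because the panel barely produces outside summer, which is the whole argument.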


If the alternative were, say, diesel, I think a RTE of 40% (which you might get with hydrogen) would be fine in this case, if the capex of the storage system is low enough.

It is certainly the case that hydrogen would be better than batteries for this storage use case in Alaska.


If the baseline is ~2c/kWh solar in a good location and they are forced to buy 10x as many solar panels to cover winter use, they are now spending ~20c/kWh on solar panels. But on top of that they also need to pay for hydrogen-generating equipment which only gets used for a few months a year, hydrogen storage equipment, and hydrogen-specific generators, plus presumably a backup diesel- or gas-based generator + storage system.

In 20 years it might make sense but today green hydrogen is several times more expensive than gas even when you can use cheaper electricity, can make use of the equipment year round, and have the benefit of larger economies of scale. Even if the goal is completely about climate change locating that same equipment in the lower 48 states is just a much better idea.


Yeah, they introduced the Vanguard anti-cheat on all Riot games, and it isn't supported on Linux.


Woah so Riot broke League on Linux? I guess they probably did the math but that seems like a bold move.


Yeah, and Mac too: you can't run League or Valorant. Vanguard is their kernel-level anti-cheat, and Windows is like 95% of their market, so I guess the difficulty of implementing it on another kernel isn't worth the <5%.


League works on macOS just fine, I played yesterday. Vanguard is buggy (it occasionally quits the client after I finish a game), but the game generally works and has for at least several years.


Why would it be a "bold move"? Linux gaming population is damn near zero, they do not provide a higher profit margin like mac gamers would, and the documented evidence is that supporting Linux users is obnoxious because they are rude and entitled but not actually that much better at providing feedback.

Epic Games bought out Rocket League and turned off the native Linux build, and faced no repercussions. Instead they made plenty of money.

That's the bar.


Not sure that's fair, given most Linux gamers look like Windows gamers to the metrics.

That said, devices like the Steam Deck run games on Linux (and that's without considering that every Android game ever is technically running on Linux too).

Let's face it though, PC gaming is already small enough these days vs the consoles that further splitting the market isn't going to make sense for a lot of companies.


>Not sure that's fair, given most Linux gamers look like Windows gamers to the metrics.

No. All the articles and testimony of game devs abandoning native Linux versions are from well before Proton was a thing, including Epic Games buying Rocket League and preventing you from playing the native Linux build they had.

It also was not related to anti-cheat or underlying engine limitations or anything. Developers were clear that the problem was the massive lack of uptake mixed with a weirdly entitled community.


Personally I don't think gamers are entitled. Ultimately games are anywhere between $60 and $120 and often barely work on their target platforms. With kernel-level anti-cheat, you're literally being asked to pay them to rootkit your computer with software you cannot audit.

The last 10 years of AAA gaming have been an absolute shit show. The only people who seem to be even trying are Nintendo. Everyone else releases stuff that's buggy as hell and about as fun as a dental cleaning.


It's bold because it's breaking stuff that already works and will continue to work even if you do nothing.

It's one thing to choose not to develop a new game for Linux. It's another to take a game that already runs on Linux and intentionally break it. You're guaranteed to alienate SOME people who are already fans of the game.


Important to note that the article was written 9 years ago and NMS has received numerous content updates since. There's a lot more to the game now.


There is, but the procedural generation part is not what makes it fun to me. It's what you create and how you choose to "live" in the game. It really is like the real universe - isotropic, the same in all directions - it only takes a few hours to be overwhelmed by how pointless it all seems, knowing there's an infinity of anything you discover elsewhere.

Once you build a base or create some goal for yourself, it becomes interesting.

