Hacker News | brokencode's comments

AI is bad at figuring out what to do, but fantastic at actually doing it.

I’ve totally transformed how I write code, from writing it myself to writing detailed instructions and having the AI do it.

It’s so much faster and less cognitively demanding. It frees me up to focus on the business logic or the next change I want to make. Or to go grab a coffee.


Username checks out.

Couldn't help yourself, could you?

Lay off him, he did the right thing

> AI is bad at figuring out what to do, but fantastic at actually doing it.

I've found AI is pretty good at figuring out what to do, but hit or miss at actually doing it.


I think it is like protein folding.

It will make a mess, but if it has spent 3 hours failing to help you understand and debug the problem, and you then drop a console.log into the browser debug console to show it what it should be looking for, it will do a week of work in 2 hours.


I've noticed this too. The latest Cursor version has a @browser command which will launch a browser with Playwright, and call tools to inspect the HTML and inject JavaScript to debug in real time.

When it has access to the right tools, it does a decent job, especially for fixing CSS issues.
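For reference, here's roughly what that workflow looks like under the hood. A minimal sketch using Playwright's Python API; the dev-server URL and the `.header` selector are hypothetical stand-ins, and Cursor's actual tooling isn't public:

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:3000")  # hypothetical dev server

        # Inspect the rendered HTML, as an agent's "read the page" tool would
        markup = page.content()

        # Inject JavaScript to probe live state, e.g. a computed style
        color = page.evaluate(
            "getComputedStyle(document.querySelector('.header')).color"
        )
        print(len(markup), color)

        browser.close()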

But when it can't see the artifacts it's debugging, it starts guessing, confident that it knows the root cause.

A recent example: I was building up an HTML element in the DOM and exporting it to PNG using html2canvas. The element was being rendered correctly in the DOM, but the exported image was incorrect, and it spent 2 hours spinning its wheels and repeating the same fixes over and over.


I would say from my experience there's a high variability in AI's ability to actually write code unless you're just writing a lot of scripts and basic UI components.

The AI version of that Kent Beck mantra is probably "Make the change tedious but trivial (warning: this may be hard). Then make the AI do the tedious and trivial change."

AI's advantage is that it has infinite stamina, so if you can make your hard problem a marathon of easy problems, it becomes doable.


I would say this does not work in any nontrivial way from what I've seen.

Even basic scripts and UI components are fucked up all the time.


You have to learn how and where to use it. If you give it bad instructions and inadequate context, it will do a bad job.

This is the ‘you’re holding it wrong’ of LLMs.

What tool can’t you hold wrong?

Literally every tool worth using in software engineering from the IDE to the debugger to the profiler takes practice and skill to use correctly.

Don’t confuse AI with AGI. Treat it like the tool it is.


You might want to ask ChatGPT what that is referencing. Specifically, Steve Jobs telling everyone it was their fault they had bad reception, when Apple had put the antenna right where people hold their phones.

The issue is really that LLMs are impossible to deterministically control, and no one has any real advice on how to deterministically get what you want from them.


I recognized the reference. I just don’t think it applies here.

The iPhone antenna issue was a design flaw. It’s not reasonable to tell people to hold a phone in a certain way. Most phones are built without a similar flaw.

LLMs are of course nondeterministic. That doesn’t mean they can’t be useful tools. And there isn’t a clear solution similar to how there was a clear solution to the iPhone problem.


Humans are not deterministic.

Ironically, AI models are, IIRC.

Come up with a real argument.


"All the time"?

This always feels like you're just holding it wrong and blaming the tool.


It can still fuck it up. And you need to actually read the code. But still a time saver for certain trivial tasks. Like if I'm going to scrape a web page as a cron job I can pretty much just tell it here's the URL, here's the XPath for the elements I want, and it'll take it from there. Read over the few dozen lines of code, run it and we're done in a few minutes.
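As a concrete illustration, a script in that spirit might look like the sketch below, using requests and lxml; the URL and XPath are hypothetical stand-ins for whatever you'd actually hand the AI:

    import requests
    from lxml import html

    # Hypothetical inputs -- the kind of thing you'd paste into the prompt
    URL = "https://example.com/listings"
    XPATH = "//div[@class='listing']/h2/text()"

    def scrape():
        resp = requests.get(URL, timeout=30)
        resp.raise_for_status()
        tree = html.fromstring(resp.content)
        for title in tree.xpath(XPATH):
            print(title.strip())

    if __name__ == "__main__":
        scrape()  # invoked from a cron entry, e.g. hourly

Reading over a few dozen lines like these takes minutes, which is what makes the trade worthwhile.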

> AI is bad at figuring out what to do, but fantastic at actually doing it.

AI is so smart, one day it might even figure out how to subtract... https://news.ycombinator.com/item?id=45821635


When you need to take the square root of 37282613, do you do it in your head or pull out the calculator?

Why does the AI have to be good at math when it can just use a calculator? AI tool usage is getting better all the time.
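A toy sketch of that division of labor: the model only has to emit the tool call, and plain code does the arithmetic. The calculator tool and its expression format here are made up for illustration:

    import math

    def calculator(expression: str) -> float:
        # Hypothetical allow-listed evaluator; a real agent would sandbox this
        return eval(expression, {"__builtins__": {}, "sqrt": math.sqrt})

    print(calculator("sqrt(37282613)"))  # ~6105.95, no mental math needed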


I think people generally think AI should be good at math because it runs on a very complex and very fast calculator to begin with.

Your brain runs on physics and biology, yet here we are…

That is not the point... It's about not understanding subtraction...

We're all architects now...

Hasn’t he been incorrectly predicting massive crashes every few years ever since he was right that one time?

How many bad predictions does he need to make before people stop caring what he has to say?


He's predicted 20 of the last 2 recessions, so there's that.

Has he or are you just repeating a funny joke?

He has. He’s notably been burned a few times, one of the worst being 2021 TSLA. IIRC he was early but not wrong, with all things ARKK-related crashing and burning in 2022.

He was in early for the 2008 recession as well, but that is generally better than being in late (as most of us were).

He's been bearish for the last n years. His Twitter handle is Cassandra because he tries to warn people about impending doom, but nobody listens. He gave up at one point because he tweeted SELL and then everything was fine.

tl;dr: he's a permabear.

I actually don't think he's wrong, but one thing I've learned is that it's not enough to recognize a bubble. Almost everyone sees the markets are, as they say, frothy. But you need to see if there's a needle nearby. Without that you're just trying to get lucky.

Puts especially are really hard because they expire. They limit your loss compared to shorts, but you need to time it perfectly.


Both.

Well, his 10-year performance is 255%, which does not include his bet in 2005-08. Buffett has only done 154% in that time. So rumours of his demise are greatly exaggerated.

That doesn’t change the fact that he keeps making these doomsday predictions that don’t come true.

Also, his returns aren’t really impressive compared to VTI, which has had an annualized return of 14.04% over the past 10 years according to Vanguard. That works out to 372% total returns in that period.

It’s easy to make money when even the broadest US index fund is up by that much.
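For anyone checking the arithmetic, 372% is just ten years of compounding at 14.04%, reading “total returns” as the final value relative to the start:

    annualized = 0.1404             # VTI's 10-year annualized return, per Vanguard
    total = (1 + annualized) ** 10
    print(f"{total:.2%}")           # ~372%: each dollar grows to about $3.72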


That could be part of his strategy, no?

"He was born on third base, and is proud he's made it to second."

A joke said of Trump, of whom it is much more fair, but still...


Yep. A lot of these guys who made a profit in 2008 are still chasing that dragon.

Ask Nostradamus. Big Short has a nice ring to it in future history books.

Yeah, except you can keep on squeezing these lemons for a long time before they run out of juice.

Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.

The models are already useful for many applications, and they are being integrated into more business and consumer products every day.

Adoption is what will turn the flywheel into a rocket.


Well, the thing is that this kind of hardware quickly decreases in value. It's not like the billions spent in past bubbles, like the 2000s, when internet infrastructure (copper, fibre) was built, or even the 1950s, when transport infrastructure (roads) was built.

Data centers are massive infrastructural investments similar to roads and rails. They are not just a bunch of chips duct taped together, but large buildings with huge power and networking requirements.

Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.

All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.

They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.


Sure, but most of the country’s AI talent is concentrated there. Not to mention venture capital.

What’s the advantage of moving? Maybe lower taxes and a cheaper rent.

That seems like a small price to pay compared to the hundreds of billions they’re putting into data centers.


They're concentrated there because they've been asked to concentrate there. That can change on a dime.

It's not like data centers are mainly in SF.


You've got it backwards.

Well paid engineers congregate in California because it's a nice place to live if you can afford it.

Therefore if you want to hire the best engineers, and want an in-office work culture, you need to go to California.


Well paid engineers congregate in CA because that's where the companies that hire well paid engineers congregate, and they (mostly) want those well paid engineers to come to the office every couple of days. I don't know how you could get the causality so completely backwards on this.

Depends on the type of engineer - NYC for fintech is also a top spot, Boston for robotics, etc...

Anecdotal evidence: I moved to CA twice as an engineer.

Once to get my masters after college. Stayed for 13 years. Left during COVID.

Second time to raise kids.

Our reasons include weather, intellectual atmosphere, safety (in many regards), schools, and job opportunities.

The geo area sandwiched between Berkeley and Stanford is only rivaled by Boston. You think Stanford and Berkeley are in the Bay because they’re told to?

And I would also question: what’s the point of living in the US if you’re not in California? Once you decide not to live in CA, a bunch of other countries, such as Canada, Australia, and New Zealand, rank better than other US states.

If I were to not live in CA, even the imperial units would quickly become annoying.


How many other places, inside and outside the U.S., have you lived?

I think you're wrong. The concentration is for a host of reasons. Witness the large number of cities and countries that have tried to create a local Silicon Valley competitor unsuccessfully over the last 25 years.

The data centers I think prove this point, and disprove yours -- huge spend has gone into data centers, but places like Wenatchee remain stubbornly not Silicon Valley.

Intel has not made Portland into SV. Austin, while a tech hub and one of the US supply chain centers for hardware, is multiple orders of magnitude less productive than SV for tech startups. Productive as in numbers of unicorns, total value creation, however you want to spin it.


> That can change on a dime.

People tried very hard to change it between 2020-2023 and utterly failed.


Cries in the number of "next Silicon Valley" in the last 30 years.

Silicon Alley, Beach, Hills, Slopes, Forest, Prairie, Bayou, Desert, Roundabout, Docks, Glen, Fen, Cape, Oasis... and that's not the complete list!

This is what I want to do when I retire. Maybe not OnlyFans fixes specifically, but just go around fixing random stuff.

Like if Batman turned out to be bad at fighting criminals, so he had to fight null pointer exceptions instead.


Maybe hack into facilities, optimize their scripts and deployment, then leave without a trace, confusing the IT department.

bugman

"Fear not the bugs citizen! For in my utility belt, I have REGEX and VIM!"

I'd actually prefer to communicate to ChatGPT via Microsoft Paint. Much more efficient than typing.


Leading scientists claim interpretative dance is the AI breakthrough the world has been waiting for!


In all seriousness, I found those sorting dance videos to be really educationally effective (when coupled with going over the pseudocode) - e.g. https://youtu.be/3San3uKKHgg?si=09EQYJNIkRqvQgWG


I think most people who buy Steam Decks don’t care whatsoever about Linux and would be perfectly fine with not having control over it as long as all their games worked.


I think Steam Decks wouldn't ever have existed without Linux enthusiasts as early adopters, both of the Deck itself and of the previous iterations of Steam + Linux, whether playing games on their own machines or on the previous iteration of a Steam Linux computer. If at any point it had all been tied up with DRM and the complete loss of control required for anti-cheat, it would have just died and not been seen again.

The only way it changes course is an enormous rug pull that removes most of the differentiation between PC and Console gaming and you end up with Steam as a dying product unable to compete with either other modes of PC gaming or the dominant console players. (Sadly that's basically what I expect when gaben retires)


The differentiation of the Steam Deck is the game ecosystem, ability to play your existing PC game library on the go, and low game costs compared to consoles during the frequent sales.

I don't think Linux is a differentiator for the Steam Deck. It's obviously essential as a technical foundation though, similar to how it’s essential to Android phones.

But locking it down with DRM won't affect gamer interest in the platform as long as the games are still cheap, plentiful, and run well.


I can imagine a world where you still have full control most of the time, but when you open a multiplayer game the system reboots into a clean, verified OS image. Then when you quit, it can reboot into the OS with all of your mods and customisation on.


Targeting ads better. Better sentiment analysis. Custom ads written for each user based on their preferences. Features for their AR glasses. Probably try to take a piece of the Google search pie. Use this AI search to serve ads.

Ads are their product mostly, though they are also trying to get into consumer hardware.


It’s still a ways off, but I’m excited for the possibility of something like Tauri using Servo natively instead of needing host browsers. A pure Rust desktop app stack with only a single browser to target sounds fantastic.


But then we have the same complaint against Electron, namely large deployment sizes and no shared memory, no?


This part is important:

> A pure Rust desktop app stack

I think the parent is imagining a desktop with Servo available as a standard lib, in which case you're left with the same complaints as with Tauri, not Electron: that the system version of Servo might be out of date.


Yeah, multiple Tauri apps could theoretically share a Servo library.

Though I’d also be interested to see how slim it could be with static linking.

Presumably a lot of code could be compiled out with dead code analysis? Or compile flags could remove old compatibility cruft and unneeded features?


For rust desktop apps, why target a web engine, when we have much more lightweight native GUI frameworks? We don't need yet another bloated Electron.


One nice thing about targeting a web engine is that your application could potentially run in browsers too. Lots of Electron applications do this.

Also you get to take advantage of the massive amount of effort that goes into web engines for accessibility, cross-platform consistency, and performance.

Electron is a memory hog, but actually very powerful and productive. There’s a reason why VSCode, Discord, and many others use it.

But yeah, I wouldn’t say no to a native Rust desktop stack similar to Qt. I know there are options at various levels of sophistication that I’m curious about but haven’t explored in depth.


If it runs in a web browser, why bother with Electron when you can just install a standalone web app in Chromium-based browsers (or Firefox with a PWA extension)? I do this with Slack, Teams, Discord, and Gmail, and they use less RAM since they reuse a shared web engine.


Some applications benefit from the host integration. VSCode in particular, since it interacts with the terminal and other applications. I'm also assuming 1Password benefits from it as well for full OS integration.


But then they don't need to be made as Electron apps, but rather native apps, which use a fraction of resources. Compare e.g. Sublime Text or Notepad++ with VS Code.


But then they wouldn’t work in the browser.

There’s certainly a place for truly native apps, but there are also a lot of reasons companies keep picking Electron. Efficiency is just one consideration.


They could, using Wasm, like Qt, Blazor Hybrid, Uno Platform, or Avalonia.FuncUI. Electron is efficient for devs but inefficient for users, being a memory hog, especially on low-end devices.


They could do that today, but do they? I can’t name one app that uses one of those to run in a browser. I can name multiple highly successful apps that use Electron.

I seriously doubt the approach of running a native desktop application in the browser would give you performance or usability as good as running an actual web app.


Plenty of people must find this worthwhile, otherwise sellers wouldn’t be able to find buyers and they’d be forced to reduce price if they want to sell.

Plus, the listed price is often aspirational. Savvy buyers will typically negotiate the price down.

I have to admit that I’m not a savvy buyer and always buy new though. But I know people who do this very successfully.

