logicprog's comments

Whether or not he's right, Zitron just keeps repeating the same points over and over again at greater and greater length. This newsletter is 18,500 words long (with no sections or organization), and none of it is new.


As someone who generally agrees with the thesis, I still find the length of the article quite frustrating, since the text here is definitely quantity over quality.

The core issue is that OpenAI is committing to spending hundreds of billions of dollars on AI data center expansion, money it doesn't have and doesn't appear able to acquire, and this basic fact is being obscured by circular money flows and the extreme murkiness of AI finances [1]. But Zitron muddies this message with excessive detail in trying to provide receipts, and buries all of it under what seems to be a more general "AI doesn't work" argument that he wants to make but isn't sufficiently well-equipped to make.

[1] The fact that the Oracle and Nvidia deals with OpenAI may actually be the same thing is the one thing new to me in this article.


Agreed on all fronts.


He should use AI for that


It's all so regurgitated and unoriginal, even between him and other anti-AI critics, that it truly feels like he does. Not that the AI hypers are any better; the people with more nuanced, middle-of-the-road views are the ones producing original work (such as Simon Willison's writing on the lethal trifecta, the recent "AI Coding Trap" article, etc.), and I think that's interesting. I also feel like he cherry-picks his statistics (on both model performance and economics) as much as his enemies do, so it can be exhausting to read.


But at least there are _some_ critics that try to apply critical thought against the hype machine and all the stochastic bullshit we deal with every day.


Agreed. And I mean, I think the nuanced investigations of what AI is and is not good for or capable of, and how it might or might not be made sustainable going forward both economically and environmentally, are a much more meaningful, interesting, and worthwhile check on the hype than dogmatic rejection, because just as hype won't convince anyone sane, neither will dogmatic rejection built on a biased accounting of what's going on with no vision of the possible futures available to us. No one who isn't already convinced will be swayed by that, and many will be repelled. Especially given how wrong or incomplete their accounts often are (like Gary Marcus talking about how AI can't run searches to retrieve information lol).

To be clear, I started out as a fan of Gary Marcus and Ed Zitron, and a rabid anti-AI hater, because I tried GPT-3.5 soon after it was released and was extremely unimpressed with its capabilities. But after a while, I started to get uncomfortable with my closed-mindedness, so I decided to give the tools a fair shake; by the time I did, the capabilities had expanded so much that I was genuinely impressed. The more I stress-tested them, the more nuanced my understanding became: there are serious traps and limits, and serious problems with how the industry is going, but just because a tool is not perfectly reliable does not mean it isn't very useful sometimes.


"Agreed. And I mean, I think the nuanced investigations of what AI is and is not good for or capable of, and how it might or might not be made sustainable going forward both economically and environmentally, are a much more meaningful, interesting, and worthwhile check on the hype than dogmatic rejection, because just as hype won't convince anyone sane, neither will dogmatic rejection built on a biased accounting of what's going on with no vision of the possible futures available to us."

Totally, but I don't think the average layperson, journalist, or financial analyst will understand any of that nuance (nor pass that info on, because what gets clicks is outrage, and of course, Zitron sells clicks).


I guess that's fair enough, he does sort of serve a meaningful position in the ecosystem, same as Gary Marcus. I just get tired of the smug outrage that seems to almost get him off.


This sounds like the plot of Accelerando


So is writing an ultimately empty essay mostly composed of other people's quotes, thoughts, and concerns


The design of that study is pretty bad, and as a result it doesn't end up actually showing what it claims to show / what people claim it does.

https://www.fightforthehuman.com/are-developers-slowed-down-...


I don't think there is anything factually wrong with this criticism, but it largely rehashes caveats that are already well explored in the original paper, which goes to unusual lengths to clearly explain the many ways the study is flawed.

The study gets so much attention since it's one of the few studies on the topic with this level of rigor in real-world scenarios, and it explains why previous studies or anecdotes may have claimed perceived increases in productivity even when there was no actual increase. It clearly sets a standard that we can't just ask people if they felt more productive (or they need to feel massively more productive to clearly overcome this bias).


> it largely rehashes caveats that are already well explored in the original paper, which goes to unusual lengths to clearly explain the many ways the study is flawed. ... The study gets so much attention since it's one of the few studies on the topic with this level of rigor in real-world scenarios,

Yes, but most people don't seem aware of those caveats, this is a good summary of them, and I think they do undercut the "level of rigor" of the study. Additionally, some of what the article points out is not explicitly acknowledged and connected by the study itself.

For instance, if you actually split the tasks up by type, some types show a speedup and some show a slowdown, and the developers' qualitative comments about where they thought AI was good or bad aligned very well with which types saw which results.

Or (IIRC) the fact that task timing was measured per task, but developers' post hoc assessments were predictions of how much they thought they were sped up on average across all tasks, so the comparison of how developers felt vs. how things actually went isn't comparing like with like.

Or the fact that developers were actually no less accurate overall at predicting time to task completion on AI tasks than on non-AI tasks.

> and it explains why previous studies or anecdotes may have claimed perceived increases in productivity even when there was no actual increase.

Framing it that way treats it as an already established fact, in need of explanation, that AI does not provide more productivity. Which actually demonstrates, inadvertently, why the study is so popular! People want it to be true, so even though the study is so chock-full of caveats that it can't really prove that fact, let alone explain it, people appeal to it anyway.

> It clearly sets a standard that we can't just ask people if they felt more productive

Like we do for literally every other technological tool we use in software?

> (or they need to feel massively more productive to clearly overcome this bias).

All of this assumes a definition of productivity based on time per unit of work done, instead of, say, the amount of effort required to get a unit of work done, or the extra testing, documentation, edge-case hardening, and feature polish that better tools make time for, or the ability to overcome the dread and procrastination that come with rote, boilerplate tasks. AI makes me so much more productive that friends and my wife have commented on it explicitly without needing to be prompted, for a lot of reasons.


> They were not against technology; they were against technology that destroyed jobs.

They were not against technology; they were against technology that destroyed their jobs. If we had followed what they wanted, we'd still be in a semi-preindustrial, artisanal economy, and the worse off for it.


So you didn't read about them.

> In North West England, textile workers lacked these long-standing trade institutions and their letters composed an attempt to achieve recognition as a united body of tradespeople. As such, they were more likely to include petitions for governmental reforms, such as increased minimum wages and the cessation of child labor.

Sounds pretty modern, doesn't it? Unions, wages, no child exploitation...

And the government response?

> Mill and factory owners took to shooting protesters and eventually the movement was suppressed by legal and military force, which included execution and penal transportation of accused and convicted Luddites.


It didn't really show that if you break down the data, and its methodology was pretty bad

https://www.fightforthehuman.com/are-developers-slowed-down-...


It's really cool how, despite the core chip at the heart of the Framework Desktop not being that extensible, Framework went out of their way to make the FD as extensible and modular as possible, and are fostering a community of 3D printing stuff around it.


I really don't think Accelerando is a good example of positive AI superintelligence LMAO.


Indeed, I am definitely misremembering key parts of that story - the actions of the definite 'AI' SIs, at least.

I wanted to - and still do - side-step spoilers, so I'll concede an error, and bump it back up my TB(r)R list.


The lobsters were relatively benign ;D


There are reasonable ethical concerns one may have with AI (around data center impacts on communities, and the labor used to SFT and RLHF them), but these aren't:

> Commercial AI projects are frequently indulging in blatant copyright violations to train their models.

I thought we (FOSS) were anti copyright?

> Their operations are causing concerns about the huge use of energy and water.

This is massively overblown. If they'd specifically said their concerns were about the concentrated impact of energy and water usage on specific communities, fine, though then you'd have to have ethical concerns about a lot of other tech, including video streaming. But the overall energy and water usage attributable to an individual use of AI, for instance generating a PR, is completely negligible on the scale of tech products.

> The advertising and use of AI models has caused a significant harm to employees and reduction of service quality.

Is this talking about automation? You know what else automates employees away and can often reduce service quality? Software.

> LLMs have been empowering all kinds of spam and scam efforts.

So did email.


I get why water use is the sort of nonsense that spreads around mainstream social media, but it baffles me how a whole council of nerds would pass a vote on a policy that includes that line.


To be completely fair, AI really does use more water than other typical compute tasks, because AI takes A LOT of compute.

No, it's not like email, or a web server. I can run an email server or apache on my rinky dink computer and get hundreds of requests per second.

I can't run chatgpt, that requires a super computer. And of the stuff I can run, like deepseek, I'm getting very few tokens/s. Not requests! Tokens!

Yes, inference has an energy cost that is significantly more than other compute tasks.
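
To put rough numbers on "significantly more," here's a back-of-envelope sketch in Python; every constant in it is an assumption I'm plugging in for illustration (box wattage, request throughput, token speed, reply length), not a measurement:

  # Back-of-envelope: energy per request, classic server vs. local LLM inference.
  # Every constant below is an assumed, illustrative figure, not a measurement.
  SERVER_WATTS = 100.0        # assumed whole-box draw of a small home server
  SERVER_REQ_PER_SEC = 200.0  # assumed Apache/mail throughput on that box
  GPU_WATTS = 350.0           # assumed draw of a consumer GPU running a local model
  TOKENS_PER_SEC = 15.0       # assumed local generation speed for a mid-size model
  TOKENS_PER_REPLY = 400.0    # assumed length of one reply

  joules_per_http_req = SERVER_WATTS / SERVER_REQ_PER_SEC             # ~0.5 J
  joules_per_reply = (TOKENS_PER_REPLY / TOKENS_PER_SEC) * GPU_WATTS  # ~9,300 J
  print(f"{joules_per_reply / joules_per_http_req:,.0f}x energy per LLM reply vs. one request")
  # -> ~18,667x under these assumptions

Even if each of those assumptions is off by 10x, the per-request energy gap stays in the thousands.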


The energy use claims are questionable, but I at least get where they're coming from. The water use is the confusing part. Who looks at a server rack and goes ‘darn, look at how water intensive this is’? People use water as a coolant in large part because it's really hard to boil, plus it's typically cheap because it regularly gets delivered to your front door for free.

As to actual numbers, they're not that hard to crunch, but we have a few good sources that have done so for us.

Simple first-principles estimate: https://epoch.ai/gradient-updates/how-much-energy-does-chatg...

Google report: https://arxiv.org/abs/2508.15734

Altman claim inside a blog post: https://blog.samaltman.com/the-gentle-singularity


Because it is ideologically motivated.


Being ideologically motivated is not necessarily bad (understanding ideology as a worldview associated with a set of values and priorities). FOSS as a whole is deeply ideologically motivated from its origins. The issue is that there seems to have been a change in the nature of the ideology, leading to some amount of conflict between the older and newer guard.


What change do you mean between older and newer guard?


That's not something that fits in a comment. The point is, ideology as such is not something new in this space.


Gentoo should be able to be ideological without being stupid.


>> Commercial AI projects are frequently indulging in blatant copyright violations to train their models.

> I thought we (FOSS) were anti copyright?

Absolutely not! Every major FOSS license has copyright as its enforcement method -- "if you don't do X (share code with customers, etc depending on license) you lose the right to copy the code"


> I thought we (FOSS) were anti copyright?

For Free Software, copyright creates the ability to use licenses (like the GPL) to ensure source code availability.


>> Commercial AI projects are frequently indulging in blatant copyright violations to train their models.

> I thought we (FOSS) were anti copyright?

No free and open source software (FOSS) distribution model is "anti-copyright." Quite to the contrary, FOSS licenses are well defined[0] and either address copyright directly or rely on copyright being retained by the original author.

0 - https://opensource.org/licenses


Some of the ideas behind the GPL could be anti-copyright, insofar as the concept they’d love to see is software being uncopyrightable.



>I thought we (FOSS) were anti copyright?

FOSS still has to exist within the rules of the system the planet operates under. You can't just say "I downloaded that movie, but I'm a Linux user so I don't believe in copyright" and get away with it

>the overall energy and water usage attributable to an individual use of AI, for instance generating a PR, is completely negligible on the scale of tech products.

[citation needed]

>Is this talking about automation? You know what else automates employees away and can often reduce service quality? Software.

Disingenuous strawman. Tech CEOs and the like have been exuberant at the idea that "AI" will replace human labor. The entire end-goal of companies like OpenAI is to create a "super-intelligence" that will then generate a return. By definition the AI would be performing labor (services) for capital, outcompeting humans to do so. Unless OpenAI wants it to just hack every bank account on Earth and transfer it all to them instead? Or something equally farcical.

>So did email.

"We should improve society somewhat"

"Ah, but you participate in society! Curious!"


> the overall energy and water usage attributable to an individual use of AI, for instance generating a PR, is completely negligible on the scale of tech products.

10 GPT prompts take the same energy as a wifi router operating for 30 minutes.

If Gentoo were so concerned for the environment, they would get more mileage from forbidding PRs from people who took a 10-hour flight. Such a flight, per person, emits as much carbon as a million prompts.
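
A minimal sketch of that scale comparison, assuming Epoch's ~0.3 Wh-per-prompt figure (the Epoch link appears elsewhere in the thread), a ~6 W router, ~1 tonne of CO2 per passenger for a 10-hour flight, and ~0.4 kg CO2/kWh grid intensity; with the older ~3 Wh-per-prompt estimates the flight works out to roughly a million prompts, with the newer figure closer to ten million:

  # Back-of-envelope scale check; every constant here is an assumption.
  WH_PER_PROMPT = 0.3     # Epoch AI's ChatGPT estimate; older estimates run ~10x higher
  ROUTER_WATTS = 6.0      # assumed draw of a home wifi router
  FLIGHT_KG_CO2 = 1000.0  # rough per-passenger emissions for a ~10 hour flight
  GRID_KG_PER_KWH = 0.4   # assumed average grid carbon intensity

  print(10 * WH_PER_PROMPT, "Wh for 10 prompts")        # 3.0 Wh
  print(ROUTER_WATTS * 0.5, "Wh for 30 min of router")  # 3.0 Wh

  kg_per_prompt = (WH_PER_PROMPT / 1000) * GRID_KG_PER_KWH
  print(f"{FLIGHT_KG_CO2 / kg_per_prompt:,.0f} prompts = one flight")  # ~8.3 million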


> [citation needed]

Sure, here ya go:

https://andymasley.substack.com/p/individual-ai-use-is-not-b...

https://blog.giovanh.com/blog/2024/08/18/is-ai-eating-all-th...

https://blog.giovanh.com/blog/2024/09/09/is-ai-eating-all-th...

The first comprehensive environmental audit and analysis, performed in conjunction with French environmental agencies and environmental audit consultants, which includes every stage of the supply chain, including usually hidden upstream costs: https://mistral.ai/news/our-contribution-to-a-global-environ...

https://andymasley.substack.com/p/for-the-climate-little-thi...

> Disingenuous strawman. Tech CEOs and the like have been exuberant at the idea that "AI" will replace human labor. The entire end-goal of companies like OpenAI is to create a "super-intelligence" that will then generate a return. By definition the AI would be performing labor (services) for capital, outcompeting humans to do so

Isn't that literally the selling point of software: performing work that would otherwise have to be done by humans (bookkeeping, calculating, research, locating things, transferring information, and so on) using capital instead of labor, transforming labor into capital, and providing more profits as a result?

> Unless OpenAI wants it to just hack every bank account on Earth and transfer it all to them instead? Or something equally farcical

It's extremely funny that you pull this out of thin air and say it's the only way I could be justified in saying what I'm saying, while accusing me of making a disingenuous strawman. Consider the beam in your own eye before you concern yourself with the speck in mine.

> >So did email.

> "We should improve society somewhat"

> "Ah, but you participate in society! Curious!"

Disingenuous strawman. That comic is used to respond to people who claim that you can't be against something if you also participate in it out of necessity. That's not what I'm doing. I would be fine with it if they blanket-condemned all things that enable spam on a massive level, including email, social media, automated phone calls, mail, and so on, while still using those technologies because they have to in order to live in present society and get the word out; there are people who have done that with rigorous intellectual consistency for as long as those things have existed. My argument is that by condemning one but not the other, irrespective of whether they use them, they are being ethically inconsistent; it shows a double standard and a bias towards the technologies they're used to over the technologies they aren't. It shows a fundamentally reactionary conservatism rather than an actually well-thought-through ethical position.


This would honestly be really awesome. I barely type when I can voice dictate at this point, even though I'm a very fast and comfortable typist, especially with my ergonomic keyboard, but sometimes voice dictation just really is awkward or not an option, and this would completely solve that problem. Not to mention, it would solve the problem of background noise. Funnily enough, in all of the science fiction novels that I write, the main interface characters have with their computers — which I represent as augmented reality glasses similar to Meta's Orion Project — is subvocalization.

