Seems obvious.

AI is useful. But it's not trillion-dollars useful, and it probably won't be.



Why is that obvious? Even with effectively complete stagnation and just existing technology + limited RLVR, I can see how this could be trillion-dollars level useful.


It is the financial risk that is obvious. The big players are struggling to show meaningful revenue from the investment. Because the investment is so high, the revenue numbers need to be equally high, and growing fast. The 'correction' is when (ok, if) the markets realise that the returns aren't there. The worldwide risk is that AI-led growth has been a large chunk of the US stock market's growth. If it 'corrects', US growth disappears overnight and takes everyone down with it. It is not an issue about the usefulness of AI, but about the returns on investment and the market shocks caused by such large sums of money sloshing around one market.


I think we have only scratched the surface of what we can do with the existing technology. A much more pressing risk, IMO, is that if we stagnate, it is almost certain that the value of the tech will not be enclosed/captured by its creators.


Imho it will take off in animation/illustration as soon as Adobe (or some competitor) figures out how to make good tooling for artists. Not for idiot wantrepreneurs who want to dump fully-generated slop onto Amazon, but so that a person can draw rough pencil sketches and storyboards and reference character sheets and get back proper illustrations. Basically, don't replace the penciler but replace the inker and the colourist (and, in animation, the in-betweener).

That's more of a UI problem than a limitation in Diffusion tech.

That's a customer who'll pay, and it might be worth a lot. But a $trillion per year?


There's a free add-on for Krita (itself free) that did pretty much that when I tried it last year.

The glaring issue with it back then was that, unlike an LLM that can understand what you're trying to explain and be a bit more consistent, the diffusion model's ability to read and follow your prompt wasn't really there yet; you were more shotgunning keywords and hoping the seed lottery gave you something nice.

But recent image generation models are significantly better at producing stable output. Something like Qwen Image will care a lot more about your prompt and won't entirely redraw the scene into something else just because you change the seed.

Meaning that the UI experiments already exist, but the models are still a bit away from maturity.
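
To make that concrete, here's roughly what the "keep the artist's sketch, let the model ink and colour it" loop looks like with open tooling today. This is just a sketch assuming the Hugging Face diffusers library; the model name, file names, prompt, strength, and seed are illustrative, not a recommendation:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Sketch-in, illustration-out: img2img keeps the artist's composition and
    # only "inks and colours" on top of it. Model name and values are illustrative.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    sketch = Image.open("pencil_sketch.png").convert("RGB")      # rough pencil drawing
    generator = torch.Generator(device="cuda").manual_seed(42)   # fixed seed = repeatable output

    result = pipe(
        prompt="clean ink lines, flat colours, character sheet style",
        image=sketch,
        strength=0.5,        # lower = stay closer to the input sketch
        guidance_scale=7.5,  # how strongly to follow the prompt
        generator=generator,
    ).images[0]
    result.save("inked.png")

The strength knob is the interesting part for artists: it's the dial between "keep my composition" and "let the model reinterpret it", and the fixed seed is what makes iterating on the prompt feel less like a lottery.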

On the other hand, when looking at how models are actually evolving I'm not entirely convinced we'll need particularly many classically trained artists in roles where they draw static images with some AI acceleration. I expect people to talk to an LLM interface that can take the dumbest of instructions and carefully adjust a picture, sound, music or an entire two hour movie. Where the artist would benefit more by knowing the terminology and the granular abilities of the system than by being able to hold a pencil.

The entertainment and media industry is worth trillions on an annual basis; if AI can eat a fraction of that, in addition to some other work roles, it will easily be worth the current valuations.


> big players are struggling to show meaningful revenue from the investment

ChatGPT's $10b per year is not insignificant tho.


It is when compared with their capex, and where is that revenue coming from? It’s predominantly coming from other AI hopefuls incinerating capital.


> where is that revenue coming from?

800M active users, aka 10% of the world's population.


Of which a small minority are paying subscribers.


Math says about 10%


Complete stagnation would mean that hundreds of billions earmarked for datacenter and chip production in the next few years would have to be cancelled.

The promise of this future demand is what is driving the inflation of the stock market, with investors happy to ignore the deep losses accruing to every AI software player...for now. Pulling the plug on the capacity-building deals is effectively an admission that demand was overestimated, and the market will tank accordingly.

It says it all about the current market mania that Nvidia (which sells most of the future chip capacity) is valued at $4 trillion, more than all publicly traded pharmaceutical companies (which have decades of predictable future cash flows) combined.


At least Nvidia is making money now. Just look at Tesla: also valued at over a trillion, doing exactly what and promising exactly what again? If the moment they visibly fail to deliver coincides with an AI correction, the hole gets even deeper...


Considering the market cap of Tesla in light of what the company produces is the same mistake as anthropomorphizing Larry Ellison. It's like how the price of Dogecoin isn't related to how in vogue doge memes are.


The existing technology can’t even replace customer support systems, which seems like the lowest bar for a role that’s perfectly well suited to LLMs. How are you justifying the trillion dollar value?


I disagree that customer support is the lowest bar for LLMs. Companies have been trying to reduce customer support spend for decades, and yet it still exists. Why? Because the types of questions and types of callers that make up the remaining customer support requests are not easy to automate. Either the question itself is a complex edge case that requires human intervention, or the person calling wants to talk to a human and good documentation was never going to change that.


I think with a bit of engineering, the existing tech can replace customer support systems, especially as the boomers are going away. But I realize this is an uphill battle on HN.


> I think with a bit of engineering, the existing tech can replace customer support system

That's the lowest of the low, and even you accept it doesn't work (yet). How can LLMs be worth 50% of the last few years of GDP growth if it's that bad? Do you think customer support represents 50% of newly created value? I bet it isn't even 0.5%.


But the point is the tech obviously isn't there yet. LLMs are still too prone to giving falsehoods, and in that case a raw text search of the support DB would be more useful anyway.

Maybe if companies would wire their "oh, a customer is complaining, try to talk them out of canceling their account, offer them a mild discount in exchange for locking into a one-year contract" API up to the LLM? Okay, but that's not a trillion-dollar service.
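
For what it's worth, the raw text-search baseline really is tiny. The sketch below is hypothetical (made-up articles, made-up scoring), but it shows why a hallucination-prone model has a hard time beating "just return the relevant article verbatim":

    # Hypothetical baseline: plain keyword search over a support "DB", no LLM involved.
    # Article texts and the scoring scheme are made up for illustration.
    from collections import Counter
    import re

    ARTICLES = {
        "cancel-subscription": "How to cancel your subscription: open Settings, choose Billing...",
        "reset-password": "To reset your password, click 'Forgot password' on the login page...",
        "refund-policy": "Refunds are available within 30 days of purchase if...",
    }

    def tokenize(text):
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def search(query, top_k=3):
        """Rank articles by how many query words they share with the article text."""
        q = tokenize(query)
        docs = {name: tokenize(text) for name, text in ARTICLES.items()}
        scores = {name: sum(min(q[w], doc[w]) for w in q) for name, doc in docs.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Returns ranked article names; the matched text can be shown verbatim,
    # so there is nothing for a model to make up.
    print(search("how do I cancel my account and get a refund?"))

If you do bolt an LLM on top, constraining it to rephrase only what the search returned is the "bit of engineering" that keeps the falsehoods out; left to answer from its weights, it will happily invent a refund policy.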


I can't think of any tech with this kind of crazy yearly investment in infrastructure with no success stories.

Maybe it's because I find writing easy, but I find the text generation broadly useless except for scamming. The search capabilities are interesting but the falsehoods that come from LLM questions undermine it.

The programming and visual art capabilities are most impressive to me... but where's the companies making killings on those? Where's the animation studio cranking out Pixar-quality movies as weekly episodes?


The animation stuff is about to happen, but it's not there yet.

I work in the industry and I know that ad agencies are already moving onto AI gen for social ads.

For VFX and films the tech is not there yet. Meanwhile, OpenAI believes they can build the next TikTok on AI (a proposition being tested now), and Google is just being Google: building amazing tools but with little understanding (so far) of how to bring them to market.

Still, Google is likely ahead in building tools that are actually being used (Nano Banana and Veo 3), while the Chinese open-source labs are delivering impressive stuff that you can run locally or, increasingly, on a rented H100 in the cloud.


> Where's the animation studio cranking out Pixar-quality movies as weekly episodes?

Check out Neural Viz. Unthinkable for one guy without AI. And we're still in the Geocities stage of this stuff.

A non-Pixar animation studio, with presumably a >10x lower budget than Pixar itself, cranking out Pixar-quality movies weekly would be something like a >1000x acceleration. And indeed, that's not a thing yet for animated movies. The example I gave shows that it's already quite a big X, though.


You can easily google "generative AI success stories" and read about them.

There are always a few comments that make it seem like LLMs have done nothing valuable despite massive levels of adoption.


I realize this is a cheap shot but

> You can easily google "generative AI success stories" and read about them.

notice you suggested asking Google and not ChatGPT.


I don't understand why you think this is important to mention.

Search engines are better at certain tasks than others.

If I said you should FLY to Spain, is that a cheap shot against sailing because I didn't mention it?


The monetization behind AI is on shaky ground. Nobody is actually making any money off of it, and when they propose how to make money, we all get very scared.

It's either world-ending, hard-to-believe conjecture, like the death of scarcity, or it's... ads. Ads. You know, the thing we're already doing?

So, it's not looking great. Maybe we will find monetization strategies, but they're certainly not present now, even by the largest players with the most to lose.


Because it primarily replaces existing value rather than creating new value worth $1T.


What creates more value - 1 developer or 1 developer working at 10x pace?


First of all, 1 dev _producing_ at 10x pace is a myth.

But second of all, companies do not need to 10x their software production output. Rather the goal, if the 10x productivity is achieved, is to _reduce_ human labor while retaining desired levels of output.

If ultimately you're replacing humans with AI agents, you're exchanging one value for another.


I’m sorry, what? Companies do not need features? Next thing you’ll tell me they need to improve code quality?


Where is all the productivity? Everyone says they became a 100x employee thanks to LLMs, yet not one company has seen any out-of-the-ordinary growth or profit besides the AI-hyped companies.

What if the amount of slop generated counteracts the productivity gained? For every line of code it writes, it also writes some BS paragraph in a business plan, a report, &c.


How are you evaluating the phrase "yet not one company?"


I’m a heavy LLM user, and that has probably made me a 1.05x employee on a good day.


> But it's not trillion-dollars useful, and it probably won't be.

The market disagrees.

But if you are sure of this, please show your positions. Then we can see how deeply you believe it.

My guess is you’re short the most AI-exposed companies if you think they’re overvalued? Hedged maybe? You’ve found a clever way to invest in bankruptcy law firms that handle tech liquidations?


Have you ever heard that "the market can stay irrational longer than you can stay solvent"?

The thing about bubbles is, you can often easily spot them, but can't so easily say when they'll pop.


No. Then you haven’t spotted a bubble.

You’ve just made a comment that “wow, things are going up!” That’s not spotting a bubble; that’s my non-technical uncle commenting at a dinner party, “wow, this bitcoin thing sure is crazy, huh?”

Talk is cheap. You learn what someone really believes by what they put their money in. If you really believe we’re in a bubble, truly believe it based on your deep understanding of the market, then you surely have invested that way.

If not, it’s just idle talk.


I truly believe we are in a bubble. I truly believe that AI will exist on the other side of that bubble, just as internet companies and banks existed on the other side of the dotcom crash and the housing crisis.

I don't know how to invest to avoid this bubble. My money is where my mouth is. My investments are conservative and long-term. Most in equity index funds, some bonds, Vanguard mutual funds, a few hand-picked stocks.

No interest in shorting the market or trying to time the crash. I would say I'm 90% confident a correction of 25% or more will happen in the next 12 months. No idea where my money might be safe. Palantir? Northrop Grumman?


Surely you can spot a bubble if you see that it is rapidly expanding and ultimately unsustainable. Being able to predict when it finally pops would be equivalent to winning the lottery, and people would make a lot of money from that; but ultimately no one can reliably predict when a bubble will pop. That doesn't mean they weren't bubbles.


That’s silly. I can spot a crashing plane (engines on fire, wings torn half off) without being able to predict where and exactly when it’ll crash.

We can spot a bubble without being able to predict when it’ll pop.


One can be skeptical about the overall value of various technologies while also being conservative about specific bets in specific timeframes against them.


I think you’re making my point without realizing it.

If you are skeptical but also not willing to place a bet, you shouldn’t say “AI is overvalued” because you don’t actually believe it. You should say, “I think it might be overvalued, but I’m not really sure? And I don’t have enough experience in markets or confidence to make a bet on it, so I will go with everyone else’s sentiment and make the ‘safe’ bet of being long the market. But like… something feels weird to me about how much money is being poured into this? But I can’t say for sure whether it is overvalued or not.”

Those are two wildly different things.


Not at all. I may think $TECH is overvalued, but some companies may well make it out the other side, some aspects of $TECH may play out (or not), and the bubble may pop in 1 year or 5. So the sensible approach may be to invest in broader indexes and let things play out at the more micro level (which may not be possible to invest in anyway).

I certainly had unease about the dot-com market and should have shifted more investments to the conservative side. But I made the "‘safe’ bet of being long the market" even after things started going south.

FWIW, I do think AI is overvalued for the relatively near term. But I'm not sure what to do about that other than being fairly conservatively invested which makes sense for me at this point anyway.


I generally buy index funds, but I put some into AMD a while back as the "less-AI part of tech". Will probably get out of that, as they've been sucked into that vortex, and shift more into global indexes instead of CAN/USA.

I'll leave shorting to the pros. The whole "double-your-money-or-infinite-losses" aspect of shorting is not a game I'm into.


You could just buy put options.
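
The asymmetry the parent is worried about, in back-of-the-envelope numbers (all prices made up):

    # Illustrative only: a short position has unbounded downside, while a bought
    # put's worst case is losing the premium paid. Prices below are made up.
    def short_pnl(entry, price_now):
        return entry - price_now                      # loss grows without limit as the price rises

    def long_put_pnl(strike, premium, price_now):
        return max(strike - price_now, 0) - premium   # can never lose more than the premium

    for price_now in (50, 100, 300):
        print(price_now, short_pnl(100, price_now), long_put_pnl(100, 5, price_now))
    # At 300: the short loses 200 per share, the put just loses its 5 premium.

The trade-off is that the premium bleeds away if the pop takes years to arrive, which is the "longer than you can stay solvent" problem in another form.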



