How many times can we read the same article and have the same discussion? If AI is not useful, people will not use the product or feature and it will die. If it is useful, people will use it.
You're missing the possibility of AI having gotten too big to fail.
If too many careers are tied to AI succeeding, accepting its failure is no longer an option for the company. It is far more attractive to keep shoveling it into more and more places in a desperate attempt to find a use case than to accept you've wasted hundreds of millions of dollars on hype.
Combine this with AI being used in places where quality is highly subjective and not directly tied to the KPIs business people care about (like Google Search summaries, where the actual product is eyeballs on ads), and we might be stuck with it despite a lack of usefulness.
If AI features are costing hundreds of millions of dollars and not providing any value, then it is a great opportunity to begin a competing company and sell a cleaner, cheaper, better product.
You can't opt out. Or when you can, it's increasingly difficult. The AI features on most products activate themselves automatically and shove themselves onto the screen. AI slop books, art, and music are flooding platforms, making them incredibly difficult to filter. There was a case where a physical book on mushroom identification was AI generated and filled with hallucinated, highly dangerous advice.
A lot of these tools are also profitable for the user while being a net negative to society. Flooding platforms with slop can make you a quick dollar while ruining the platform for everyone else.
Oh, please. The market for email apps, social media, or whatever else people are worried about AI being added to is more than sufficiently free to allow people to switch to non-AI apps.
Social media is some of the least free of all markets, because the whole point of it is to be where the people you want to interact with are.
If your entire family is on Facebook, unless you want to deliberately cut yourself off from them, you're going to be on Facebook.
If we had a dozen different fully interoperable social networks (using atproto or some other federating protocol), then you might have a point. But that is not remotely the world we live in.
And if Google is putting LLM features in Gmail...they're not just the "email app" you use, they hold your email address. You'd need to completely change over your email address with everyone who uses it in order to fully get away from them. (Similarly with Microsoft's email...I don't know if they're putting LLM features in it yet, but if not I'd bet they will soon...)
Those are two different definitions of "free market".
The definition you are using is a very strict one, and is only really useful for comparing "market economies" to "command economies". That is clearly not what we are talking about here.
The definition I am using is of an "ideal free market", which is what is required for any of the "free market theory" consequences to come into play. That includes things like revealed preferences theory. An "ideal free market" requires no friction, perfect information, perfect elasticity, etc.
Without that, the idea that you can tell what people "really want" in social media based on which networks they use is completely false.
AI is currently in a bubble, propped up by a snake eating its own tail in funding - the market has no control until the bubble pops. Bagholders beware; my 2026 puts are getting warm.
I think it's pretty much indisputable that AI is currently in a bubble and our economy will, eventually, crash. I mean, the S&P 500 is, what, 10% Nvidia? And all these companies are just... buying from each other and then giving the money back. It's some of the most blatant stock engineering I've ever seen.
What we don't know, of course, is when that will happen. Because a lot of people have a real monetary incentive to pretend it isn't happening.