I think you may be suffering from AI psychosis. There have been a ton of cases of this, especially with ChatGPT, recently, and this bears all the hallmarks. Here are some resources:
CapacitorJS makes for a REALLY awesome app dev experience compared with React Native. It's basically just a really well integrated system for building exactly what you describe. The company I work at made the switch from an RN app to a CJS one and it was night and day in so many ways, performance included!
Well, not surprising. National security/national infra is involved. Private companies will not fund/build nukes, missiles, aircraft carriers, fighter jets, etc. if there aren't certain guarantees from the State. Not just in the US, all over the world.
Think about what would have happened if the Manhattan Project had been run by the private sector. As soon as national security/infra becomes a risk, the state will start getting more involved whether the private sector wants it or not.
The banking system/telcos/utilities in most countries have the State providing some kind of guarantee to keep the lights on in case something unpredictable happens.
I can't read your hyperbolically titled, paywalled Medium post, so idk if it has data I'm not aware of or is just rehashing the same stats about OpenAI & co currently losing money (mostly due to training and free users), but here's a non-paywalled blog post that I personally found convincing: https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...
Nothing on infra costs, hardware throughput + capacity (accounting for hidden tokens) & depreciation, just blind faith that the providers' pricing "covers all costs and more". It uses a naive estimate of 1000 tokens per search from some simplistic queries, exactly the kind of usage you don't need or want an LLM for; LLMs excel at complex queries with complex, long output. And it doesn't account at all for chain-of-thought (hidden tokens), which providers bill as output tokens even though they never appear in the output (surprise).
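To make the hidden-token point concrete, here's a rough back-of-envelope sketch. All prices and token counts are assumptions I've made up for illustration, not figures from the post or from any provider:

```python
# Back-of-envelope sketch: how hidden chain-of-thought inflates the billed cost
# of a single query. All prices and token counts below are invented assumptions.

PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000    # assumed $3 per 1M input tokens
PRICE_PER_OUTPUT_TOKEN = 15.00 / 1_000_000  # assumed $15 per 1M output tokens

def query_cost(input_tokens: int, visible_output: int, hidden_reasoning: int) -> float:
    # Reasoning tokens are billed as output even though the user never sees them.
    billed_output = visible_output + hidden_reasoning
    return input_tokens * PRICE_PER_INPUT_TOKEN + billed_output * PRICE_PER_OUTPUT_TOKEN

naive = query_cost(200, 800, 0)        # the "1000 tokens per search" style estimate
with_cot = query_cost(200, 800, 4000)  # same query if the model "thinks" for 4k tokens

print(f"naive estimate:  ${naive:.4f} per query")
print(f"with hidden CoT: ${with_cot:.4f} per query ({with_cot / naive:.1f}x)")
```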
It also completely skips the fact that the vast majority of paid LLM users are on fixed subscription pricing precisely because the pay-per-use API version would be multiples more expensive and therefore not economical.
> Nothing on infra costs, hardware throughput + capacity (accounting for hidden tokens) & depreciation
That's because it's coming at things from the other end: since we can't be sure exactly what companies are doing internally, we're just going to look at the actual market incentives and pricing available and try to work backwards from there. And to be fair, it also cites, for instance, DeepSeek's paper where they talk about what their profit margins are on inference.
> just a blind faith that pricing by providers "covers all costs and more".
It's not blind faith. I think they make a really good argument for why the pricing by providers almost certainly does cover all the costs and more. Again, including citing white papers by some of those providers.
> Naive estimate of 1000 tokens per search using some simplistic queries, exactly the kind of usage you don't need or want an LLM for.
Those token estimates were for comparing against search pricing, to establish whether LLMs are expensive relative to other things on the market, so obviously they wanted to pick a task whose domain is similar to search. That wasn't for determining whether inference is profitable in itself, and it has absolutely no bearing on that.
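For what it's worth, that like-for-like comparison reduces to a couple of lines of arithmetic. The prices here are placeholders I've assumed, not figures from the post:

```python
# Toy comparison at assumed prices: is a ~1000-token LLM answer expensive
# relative to a paid search API call? Both prices are invented for illustration.

SEARCH_API_PER_1K_QUERIES = 5.00   # assumed price of a paid search API tier
LLM_PRICE_PER_M_OUTPUT = 10.00     # assumed flagship-model output pricing
TOKENS_PER_ANSWER = 1_000          # the blog post's naive per-query estimate

llm_per_query = TOKENS_PER_ANSWER * LLM_PRICE_PER_M_OUTPUT / 1_000_000
search_per_query = SEARCH_API_PER_1K_QUERIES / 1_000

print(f"LLM answer:       ${llm_per_query:.4f}")    # $0.0100
print(f"Search API query: ${search_per_query:.4f}") # $0.0050
# Same order of magnitude at these assumed prices, which is the point of the
# comparison: it's about relative expense, not provider profitability.
```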
> Doesn't account at all for chain-of-thought (hidden tokens) that count as output tokens by the providers but are not present in the output (surprise).
Most open-source providers do include thinking tokens in the output, just delimited by special tokens so that UI and agent software can separate them out if they want to. I believe the number of thinking tokens that Claude and GPT-5 use can be known as well: https://www.augmentcode.com/blog/developers-are-choosing-old... Typically, chain-of-thought tokens are also factored into API pricing in terms of what tokens you're charged for. So I have no idea what this point is supposed to mean.
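As a concrete illustration of the "delimited by special tokens" part, here's a minimal sketch assuming a DeepSeek-R1-style convention where reasoning is wrapped in `<think>...</think>` markers; other providers use different delimiters or return reasoning in a separate response field:

```python
import re

# Minimal sketch assuming a DeepSeek-R1-style convention: reasoning is wrapped
# in <think>...</think> markers inside the raw completion. Other providers use
# different delimiters or a separate response field entirely.

def split_reasoning(raw_completion: str) -> tuple[str, str]:
    """Return (reasoning, visible_answer) from a raw model completion."""
    match = re.search(r"<think>(.*?)</think>", raw_completion, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", raw_completion, flags=re.DOTALL).strip()
    return reasoning, answer

reasoning, answer = split_reasoning("<think>User wants 2+2, trivial.</think>The answer is 4.")
print(reasoning)  # User wants 2+2, trivial.
print(answer)     # The answer is 4.
```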
> Completely skips the fact the vast majority of paid LLM users use fixed subscription pricing precisely because the API pay-per-use version would be multiples more expensive and therefore not economical.
That doesn't mean that selling inference by subscription isn't profitable either! This is a common misunderstanding of how subscriptions work. With these AI inference subscriptions, your usage is capped to ensure the company doesn't lose too much money on you. The goal is that most subscribers will, on average, use less inference than they paid for, subsidizing the ones who use more, so it evens out. And even that assumes the usage cap actually costs more to serve than the subscription price itself, which is a pretty big assumption.
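Here's a toy version of that averaging argument with invented numbers (the price, cost, and usage distribution are all assumptions, not anyone's real figures):

```python
# Toy sketch with invented numbers: a capped subscription can be profitable on
# average even though the heaviest users consume more inference than they pay for.

SUB_PRICE = 20.00         # assumed monthly subscription price
COST_PER_M_TOKENS = 2.00  # assumed blended inference cost to the provider

# Hypothetical subscriber mix: (share of users, millions of tokens used per month)
cohorts = [
    (0.60, 1),   # most people use far less than they pay for
    (0.30, 5),
    (0.10, 15),  # heavy users who hit the usage cap
]

avg_cost = sum(share * tokens * COST_PER_M_TOKENS for share, tokens in cohorts)
print(f"blended cost to serve: ${avg_cost:.2f} vs ${SUB_PRICE:.2f} collected")
# The capped heavy user costs 15 * $2 = $30, more than their $20 subscription,
# but the blended average (1.2 + 3.0 + 3.0 = $7.20) is well under it.
```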
If you want something that factors in subscriptions and also does the sort of first principles analysis you want, this is a good article:
And in my opinion it seems pretty clear that basically everyone who does any kind of analysis on this, whether black box or first principles, comes to the conclusion that you can very easily make money on inference. The only people reaching any other conclusion are the ones who just look at the finances of US AI companies and draw conclusions from that without doing any more detailed breakdown. That's exactly what the article you linked me does (which I've now finally been able to read, thanks to someone posting the archive link): it doesn't actually make any case about the subscription or unit economics of token inference whatsoever, but instead bases its case on OpenAI's massive overinvestment in gigantic hyperscale data centers, which is a separate question from the economics of inference itself.
Ooh, that looks very cool. A concrete definition of AGI, along with a scientifically backed (in the correct domains) operationalization of that definition that allows direct comparisons between humans and current AIs, one that isn't impossible for humans and isn't easy for AIs to saturate, is much needed.
> This isn't like the early days of the web, or Amazon, or any of those other big winners that lost money before becoming profitable. Those were all propositions with excellent "unit economics" – they got cheaper with every successive technological generation, and the more customers they added, the more profitable they became. AI companies have – in the memorable phraseology of Ed Zitron – "dogshit unit-economics." Each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money...
See, I think this is wrong. The unit economics of LLMs are great, and more than that, they have a fuckton of users with obvious paths to funding for the users who aren't paying per unit (https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...). The problem is the ludicrous up-front overinvestment, none of which was actually necessary to get to useful foundation models, as we saw with DeepSeek.