It seems like a solution in search of a problem to begin with. Sure, maybe someday that makes sense. But with everything being built here on Earth already coming at such massive expense, sending it into space seems like it would multiply the costs of something that is already incinerating enormous amounts of cash, and there doesn't seem to be a capacity need for it.
The article is actually about all the debt: GPU-backed loans whose collateral is highly questionable.
“Nvidia has plowed plenty of money into the AI space, with more than 70 investments in AI companies just this year, according to PitchBook data. Among the billions it’s splashed out, there’s one important category: neoclouds, as exemplified by CoreWeave, the publicly traded, debt-laden company premised on the bet that we will continue building data centers forever.
CoreWeave and its ilk have turned around and taken out debt to buy Nvidia chips to put in their data centers, putting up the chips themselves as loan collateral — and in the process effectively turning $1 in Nvidia investment into $5 in Nvidia purchases. This is great for Nvidia. I’m not convinced it’s great for anyone else.”
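The "$1 in Nvidia investment into $5 in Nvidia purchases" claim in the quote is just collateral leverage arithmetic. Here is a minimal sketch of how the multiplier falls out, assuming (my assumption, not a figure from the article) that lenders accept the chips at roughly an 80% loan-to-value ratio:

```python
def chip_purchasing_power(equity: float, ltv: float) -> float:
    """Max chip spend when each chip bought can itself be pledged as
    collateral for a loan covering `ltv` of its purchase price.

    Total spend S satisfies S = equity + ltv * S, so S = equity / (1 - ltv).
    The 0.8 LTV below is an illustrative assumption.
    """
    if not 0 <= ltv < 1:
        raise ValueError("loan-to-value must be in [0, 1)")
    return equity / (1 - ltv)

# $1 of equity at 80% LTV supports ~$5 of chip purchases:
print(round(chip_purchasing_power(1.0, 0.8), 2))
```

Note what the multiplier depends on: the lender's willingness to count rapidly depreciating chips as collateral at a high LTV. If that assumption slips, the whole chain deleverages.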
It has to do with how long GPUs stay useful when newer, more efficient ones come out.
If the newer GPUs make the older GPUs not profitable anymore, wouldn't companies rush to order new GPUs, benefiting Nvidia?
I don't see how Nvidia loses in this scenario.
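The depreciation argument can be made concrete. An old GPU stays in service as long as what it earns per hour beats its marginal operating cost, but newer, more efficient chips drag rental rates down, and the card's resale (i.e. collateral) value falls with them. A minimal sketch, with entirely made-up numbers:

```python
def still_profitable(hourly_rate: float, power_kw: float,
                     price_per_kwh: float,
                     other_opex_per_hour: float = 0.0) -> bool:
    """An already-purchased GPU is worth keeping online only while rental
    income exceeds its marginal operating cost (electricity plus other
    per-hour opex). Purchase price is sunk and doesn't enter the decision."""
    return hourly_rate > power_kw * price_per_kwh + other_opex_per_hour

# Illustrative figures (assumptions, not data): a ~0.7 kW card at $0.10/kWh.
print(still_profitable(2.00, 0.7, 0.10))  # rents at $2/hr: keep running
print(still_profitable(0.05, 0.7, 0.10))  # rates crushed to $0.05/hr: shut it off
```

This is why Nvidia itself can win either way: whether operators keep old cards running or rush to replace them, chips get bought. The losers, if any, are the lenders whose loans are secured by hardware whose market value just moved down a generation.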
The only way this can all "collapse" is overbuilt capacity and/or AI not being as useful. Right now, everyone is saying they don't have enough compute, including Google, OpenAI, Anthropic, etc. China is desperate for Blackwell GPUs but can't get any. Every Chinese frontier lab is saying they have the software architecture to match the US but not enough compute.
As a result, the completion date for the huge data-center cluster, consisting of about 260 megawatts of computing power that CoreWeave plans to lease to OpenAI, has been pushed back several months. There were additional delays caused by revisions to design plans for some of the data centers a partner is building for CoreWeave in Texas and elsewhere, according to filings.
H200 is the previous-generation expansion card (PCIe). The current B200 is ~2x faster in many respects. Note there is no mention of GH200 (integrated GPU-CPU) and all those fancy NVIDIA servers.
OTOH, DeepSeek squeezed a lot out of their bandwidth-impaired H800 cluster. I hope they get a lot of H200s.