Perhaps related? My main fiber WAN went out a few hours ago, failing over to the Starlink backup. Discovered it's a Cloudflare issue, as my multi-WAN setup tests against 1.1.1.1, which suddenly stopped responding (but only from my fiber ISP). Switched the health check to 8.8.8.8 to restore service.
If it weren’t for recent cloudflare outages, never would have considered this was the problem.
Until I saw this, I assumed it was an ISP issue, since Starlink still worked using 1.1.1.1. Now I'm thinking it's a Cloudflare routing problem?
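One way to avoid this failure mode is to never key the health check on a single provider's anycast IP. A minimal sketch (target IPs/ports and the TCP-connect probe are my illustrative choices, not any particular router's config):

```python
# Multi-target WAN health check: declare the link down only when NONE of
# several independent anycast targets respond, so an outage at a single
# provider (e.g. 1.1.1.1) doesn't trigger a false failover.
import socket

TARGETS = [("1.1.1.1", 443), ("8.8.8.8", 53), ("9.9.9.9", 53)]

def tcp_probe(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wan_healthy(targets=TARGETS, probe=tcp_probe):
    """Healthy if at least one independent target answers.

    The probe function is injectable so the decision logic can be
    tested without network access.
    """
    return any(probe(host, port) for host, port in targets)
```

With this, a Cloudflare-only outage leaves the link marked healthy because 8.8.8.8 or 9.9.9.9 still answers.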
Steam Machine is so close to perfect, but a single USB-C port and 1Gb Ethernet are huge misses for a 2026 device. It also needs more VRAM. May be better to just do a custom SFF build.
I'm confused about language, as "loans" to me do not equal "bailout". The equating of the two seems odd, as many government incentives use loans that pay back with high interest, so governments MAKE money on those kinds of deals.
It's also clear that the 1.4T figure includes some accounting for spend that does not come directly from OpenAI (grid/power/data infra, for example). Obviously some government involvement is needed, but more at the EPA/state/local level to fast-track construction permits, more so than financial help from the Treasury.
I'm confused why this generates such sensational headlines.
I'm with you on that - people use the wrong terms. Bailouts are supporting things like GM or failing banks because the government is worried about GM workers losing jobs or bank depositors losing money.
Altman's 1.4T isn't like that - it's a proposed new investment in stuff that doesn't exist yet and there would be no job losses or the like if it fails to exist. They have been talking about potential government support for the new ventures, partly to keep up with China which uses similar government support. I'm not sure if it's a good idea but it would not be a bailout, more a subsidy.
This is the same kind of bullshit rationalization they used to say that the bank bailouts of 2009 weren't really bank bailouts.
They were bank bailouts.
Unsecured government loans are either bailouts, entitlements in disguise, or (usually misguided) attempts at broad economic stimulus. This definitely isn't either of the latter two.
How can Tesla advertise a “more accurate” number if they are required by regulation to use EPA estimate?
EPA range estimates being inaccurate is a real problem. They do not, and are not designed to, give actual expected range. It’s meant to be an “average” of “mixed” driving.
Take the latest Model Y as an example. If you compare the EPA range vs WLTP (commonly used in the EU):
327 mi EPA est. (526 km), US version (Long Range)
586 km WLTP est. (364 mi), EU version (Long Range)
The WLTP is “average” as well, so which of these is more accurate?
This problem is not unique to Teslas, and actually not unique to EVs either. It's just more noticeable because ICE vehicles usually advertise MPG and tank size, not total range. So EVs suffer from their own advertising highlighting numbers that will never be accurate.
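The gap between the two "averages" quoted above is easy to quantify. A quick arithmetic check on the Model Y Long Range figures (327 mi EPA, 586 km WLTP):

```python
# Compare the EPA and WLTP ratings for the same car in the same units
# and compute how much more optimistic WLTP is.
KM_PER_MI = 1.60934

epa_mi = 327
wltp_km = 586

epa_km = epa_mi * KM_PER_MI    # EPA figure converted to km (~526)
wltp_mi = wltp_km / KM_PER_MI  # WLTP figure converted to miles (~364)

gap_pct = (wltp_mi - epa_mi) / epa_mi * 100
print(round(epa_km), round(wltp_mi), round(gap_pct, 1))
# WLTP rates the same vehicle roughly 11% higher than EPA
```

So neither is "wrong": they are different standardized cycles, and the same car gets a number ~11% apart depending on which one you quote.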
They are allowed to advertise lower numbers than the EPA, and are also allowed to use different tests. Tesla typically uses the test that is most favorable to their own range rating.
Some other manufacturers go to a lot of effort to make sure that they aren't overstating things (eg, Porsche), but you are right that this isn't the norm.
> How can Tesla advertise a “more accurate” number if they are required by regulation to use EPA estimate?
By also providing worst-case numbers in addition to the EPA numbers. Tesla could simply do a highway range test at 70mph, ideally in winter:
There's nothing stopping Tesla showing these things.
The one time Tesla did a towing demonstration those numbers turned out to be lies. Tesla never ran the quarter mile that they claimed to. When even your engineers lack basic honesty you've got a sick company culture:
The worst case scenario is pretty much unbounded. 80mph range will be worse than 70mph. But it's still better than range at 90mph or 100mph.
I guess you could use the highest legal speed limit in the US alongside the lowest temp and fastest headwinds ever recorded in Texas. In conjunction with the heaviest, least aerodynamic thing that the vehicle can physically tow.
But that may be annoying to replicate in a controlled setting and will be even less relevant to most people than the EPA distance.
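The "pretty much unbounded" point follows from basic physics: aerodynamic drag grows with the square of speed, so energy per mile keeps climbing the faster you go. A toy constant-speed range model makes this concrete (all parameters here are illustrative guesses, not measured values for any specific vehicle):

```python
# Toy range-vs-speed model: constant-speed range from rolling resistance,
# aerodynamic drag (grows with v^2), and a fixed accessory load.
# Every parameter below is an illustrative guess, not a Tesla spec.
def range_km(speed_mph,
             usable_kwh=75.0,      # usable battery energy
             drivetrain_eff=0.9,   # battery-to-wheels efficiency
             mass_kg=2000.0,
             crr=0.009,            # rolling resistance coefficient
             cd_a=0.6,             # drag coefficient x frontal area (m^2)
             rho=1.2,              # air density (kg/m^3)
             accessory_w=500.0):   # HVAC, electronics, etc.
    v = speed_mph * 0.44704                 # mph -> m/s
    f_roll = crr * mass_kg * 9.81           # N, speed-independent
    f_aero = 0.5 * rho * cd_a * v ** 2      # N, quadratic in speed
    f_acc = accessory_w / v                 # fixed power expressed as force
    energy_j = usable_kwh * 3.6e6 * drivetrain_eff
    return energy_j / (f_roll + f_aero + f_acc) / 1000.0

for mph in (55, 70, 80):
    print(mph, round(range_km(mph)), "km")
```

With these (made-up) numbers, range drops by well over 100 km going from 55 to 70 mph, and keeps falling at 80 mph, which is why any single "worst case" speed is an arbitrary cutoff.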
> EPA range estimates being inaccurate is a real problem. They do not, and are not designed to, give actual expected range. It’s meant to be an “average” of “mixed” driving.
Also, EPA ranges assume mostly constant speed and driving within the speed limit, neither of which matches real-world driving.
We clearly have very different definitions of corruption. Actual aeronautics engineers with hands-on expertise offering advice on how to modernize air safety systems does not fall within my definition of corruption.
That technical nitpick aside, this is a classic business move whereby a preferred and privileged private operator gets exclusive ground-floor access, attaching itself remora-like to a spigot of public funds for decades.
That's the corruption part, not the touchy-feely first phase of "just some experts having a look".
You seem to be assuming that the full cost of the cluster is recouped by Grok 3. The real value will be in grok 5, 6, etc…
xAI also announced a few days ago they are starting an internal video game studio. How long before AI companies take over Hollywood and Disney? The value available to be captured is massive.
The cluster they’ve built is impressive compared to the competition, and grok 3 barely scratches what it’s capable of.
Yes. Why do we get these replies on HN that seem to only consider the most shallow, surface-level details? It could well be that xAI wins the AI race by betting on hardware first and foremost: new ideas are quickly copied by everyone, but a compute edge is hard to match.
The compute edge belongs to those like Google (TPU) and Amazon/Anthropic (Trainium) building their own accelerators and not paying NVIDIA's 1000% cost markups. Microsoft just announced it is experimenting with Cerebras wafer-scale chips for LLM inference, which is also a cost savings.
Microsoft is in the process of building optical links between existing datacenters to create meta-clusters, and I'd expect that others like Amazon and Meta may be doing the same.
Of course, for Musk this is an irrational, ego-driven pursuit, so he can throw as much money at it as he has available, but trying to sell AI when you're paying 10x the competition for FLOPs seems problematic, even if you are capable of building a competitive product.
DeepSeek just showed the compute edge is not that hard to match. They could have chosen to keep the gains proprietary but probably made good money playing the market instead, quants as they are.
If you’re using your compute capacity at 1.25% efficiency, you are not going to win because your iteration time is just going to be too long to stay competitive.
Software and algorithmic improvements diffuse faster than hardware, even with attempts to keep them secret. Maybe a company doubles the efficiency, but in 3 months, it's leaked and everyone is using it. And then the compute edge becomes that much more durable.
They achieved the same results for 1.25% of the computation cost... If they actually had that computation capacity, it would be game over in the AGI race by the same logic.
xAI bought hardware off the open market. Their compute edge could disappear in a month if Google or Amazon wanted to raise their compute by a whole xAI.
There seems to be a coordinated effort to control the narrative. Grok3's release is pretty important, no matter what you think of it, and initially this story quickly fell off the front page, likely from malicious mass flagging.
One thing that's taken over Reddit and unfortunately has spread to the rest of the internet is people thinking of themselves as online activists, who are saving the world by controlling what people can talk about and steering the conversation in the direction they want it to go. It's becoming harder and harder to have a normal conversation without someone trying to derail it with their own personal crusade.
How? After an enormous investment, the latest version of some software is a bit better than the previous versions of its competitors' software and will likely be worse than the future versions from its competitors. There's nothing novel about this.
NVIDIA's CEO Jensen Huang: “Building a massive [supercomputer] factory in the short time that was done, that is superhuman. There's only one person in the world who could do that. What Elon and the xAI team did is singular. Never been done before.”
Building the largest supercluster in the world in such a short time frame is pretty important. What typically takes 4 years was cut down to 19 days. That's an incredible achievement and I, along with many others, think it's important.
Okay but that's obviously a nonsense claim. Find me a computer on the https://en.wikipedia.org/wiki/TOP500 that was built 4 years after the chips it uses debuted.
> There seems to be a coordinated effort to control the narrative.
Do you have any evidence for this?
Who would want to coordinate such an effort, and how would they manipulate HN users to comment/vote in a certain way?
I think it is far more plausible that some people on here have similar views.
> [people] controlling what people can talk about
That's called 'moderation' and protects communities against trolls and timewasters, no?
> and steering the conversation in the direction they want it to go
That's exactly what conversation is about, I'd say. Of course I want to talk about stuff that I am interested in, and convince others of my arguments. How is this unfortunate?
Is it? It's Yet Another LLM, barely pipping competitors at cherry-picked comparisons. DeepSeek R1 was news entirely because of the minuscule resources it was trained on (with an innovative new approach), and this "pretty important" Grok release beats it in Chatbot Arena by a whole 3%.
We're at the point where this stuff isn't that big of news unless something really jumps ahead. Like all of the new Gemini models and approaches got zero attention on here. Which is fair because it's basically "Company with big money puts out slightly better model".
I'd say Grok 3 is getting exactly the normal attention, but there is a "Leave Britney Alone" contingent who need to run to the defence.
We have no clue how all this is going to play out, what value is capturable and what parts of a lead are likely to stay protected. This race is essentially the collective belief in a generationally big prize, with no idea how it unlocks.
The problem with that for a comment section is it reduces ALL comments to gossip and guessing, which makes people feel stupid.
Reddit today feels like it's absolutely overrun by bots. So much of the comment content is so superficial and cookie-cutter I find it hard to believe it's all produced by human beings. A lot of it reads like the output of small cheap LLMs of the sort that would be used for spam bots.
Of course we know X, Facebook, and probably most other social media is also overrun by bots. I don't think you can assume that humans are on the other end anymore.
The point is that it is inefficient. Others achieved similar results much cheaper, meaning they can go much further. Compute is important, but model architecture and compute methods still outweigh it.
How quickly will Grok 4/5/6 be released? Of course you can choose to keep running older GPUs for years, but if you want bleeding edge performance then you need to upgrade, so I'm not sure how many model generations the cost can really be spread over.
Also, what isn't clear is how RL-based reasoning model training compute requirements compare to those of earlier models. OpenAI has announced that GPT 4.5 will be their last non-reasoning model, so it seems we're definitely at a transition point now.