You're not far off the mark. To add a bit of detail: xAI is using portable gas turbines that are meant to provide emergency backup power in case of a catastrophic loss of power, as in a natural disaster. Being portable, they lack the systems needed to avoid polluting the surrounding air with oxides of nitrogen and formaldehyde - really nasty stuff. That shouldn't normally be a serious issue, since the turbines are meant for temporary backup only. But at Memphis, xAI is stretching the meaning of 'temporary'.
What are you implying? That the civilians of Germany were also involved in the Holocaust under Nazism? Sure, they hated the 'other' groups. But the Nazis had to suspend the earlier Aktion T4 after it provoked a fierce public backlash. Learning from that experience, the Nazis went to enormous lengths to keep the Holocaust out of public sight. If German civilians had known about it, would the Allied armies have been so surprised and shocked when they discovered the concentration camps?
Don't get me wrong. The Nazis were evil to the core. What they did to the victims is unforgivable. But grouping the civilians with them is a convenient and nefarious justification for massacring them. How many of the thousands of kids among them were Nazis, according to you?
Now, talking about targeting German civilians, look at the massive Allied firebombings of Hamburg (Operation Gomorrah) and Dresden. The attacks claimed the lives of roughly 34,000 and 25,000 civilians respectively, in a dreadful sequence of events. Horrific accounts and photos of those raids survive to this day. They were controversial enough that even Churchill came to question the bombing policy. See if you can stomach those accounts.
War is inherently immoral. You just don't fight one if you can avoid it. But if avoiding it is not an option, then both sides may end up committing horrible atrocities. All you can hope for is the least bad outcome. And once it's over, you should reflect on what went wrong and how to avoid it in the future. That requires an honest acceptance of the barbarity of those atrocities. If you glorify them instead, you aren't much better than your enemies, and you're just setting the stage for a repeat of that horrible past. So yes, all civilians should be protected.
Somebody should also collect the statistics on how many of these aspiring founder dropouts actually succeed. I'm sure that it won't be a flattering figure.
Here are some quotes from an article [1] that directly addresses your point:
> The turbines spew nitrogen oxides, also known as NOx, at an estimated rate of 1,200 to 2,000 tons a year — far more than the gas-fired power plant across the street or the oil refinery down the road.
> The turbines are only temporary and don’t require federal permits for their emissions of NOx and other hazardous air pollutants like formaldehyde, xAI’s environmental consultant, Shannon Lynn, said during a webinar hosted by the Memphis Chamber of Commerce. The argument appears to rely on a loophole in federal regulations that environmental groups and former EPA officials say shouldn’t apply to the situation.
> Mayo and Lynn didn’t respond to calls and texts from POLITICO’s E&E News requesting comment and have not said publicly how much longer the “temporary” turbines will remain onsite. Musk did not respond to a request for comment.
As you can see, xAI is being deliberately deceptive here, and this has been known but unaddressed for a while now. Remember that we are talking about a grave threat to the health and lives of the entire population of a town - and that in a country where healthcare is deliberately unaffordable to ordinary folks. I don't know if you know how nasty formaldehyde and NOx smell.
How do you so casually trivialize and vilify such concerns as 'agenda pushing'? It's very sad that HN has too many apologists for these greedy serial violators and abusers. At the same time, the sheer lack of empathy towards the unprivileged is appalling! They're humans too!
2. What causes it (the issues that make it such a challenge)
3. How it has changed over the years, and its current state
4. Any serious attempts to resolve it
I've been on Linux for maybe two decades at this point. I haven't noticed any issues with ABI so far, perhaps because I use everything from the distro repo or build and install things using the package manager. If I don't understand it, there are surely others who want to know too. (Not trying to brag here; I'm just referring to the time I've spent on it.)
I know this is a big ask. The best course for me is of course to research it myself. But those who know the whole history tend to have a well-organized perspective on it, as well as invaluable insights that are not recorded anywhere else. So if this describes you, please consider writing it down for others. A blog post is probably the best format for this.
The kernel is stable, but all the system libraries needed to make a graphical application are not. Over the last 20 years, we've gone from GTK 2 to 4, X11 to Wayland, and Qt 4 to 6, with compatibility breakage at each step. Building an unmodified 20-year-old application from source is very unlikely to work; running a 20-year-old binary is even less likely to.
There is no ABI problem. The problem is a lack of standardization for important APIs and infrastructure. There once was a serious effort to solve this: the Linux Standard Base: https://en.wikipedia.org/wiki/Linux_Standard_Base Standardization would of course be the only way to fix this, instead of inventing even more packaging formats which fragment the ecosystem further. The LSB died due to lack of interest, and, I assume, also because various industrial stakeholders are more interested in gaining a little bit of control over the ecosystem than in the overall success of Linux on the desktop.

The other major problem is that it is no fun to maintain software, which leads to what was described as CADT: https://www.jwz.org/doc/cadt.html As you can see with Wayland and the Rust rewrites, CADT continues today, always justified with some bullshit arguments about why the rewrites are really necessary.
Together this means that basically nobody implements applications anymore. For commercial applications the market is too fragmented and it is too much effort. Open-source applications need time to grow, and if all the underpinnings keep changing, it is too frustrating. Only a few projects survive this, and even those struggle. For example, GIMP took more than a decade to be ported from GTK 2 to 3.
The Linux API/ABI doesn't cover the entire spectrum that the Windows API covers - everything from the lowest-level kernel interfaces to the desktop environment and beyond. In Linux deployments, that spectrum is covered by a mix of libraries from different developers, and these change over time.
> Unfortunately you can't really statically link a GUI app.
But is there any fundamental reason why not?
> Also, if you happened to have linked that image to a.out it wouldn't work if
> you're using a kernel from this year, but that's probably not the case ;)
I assume you refer to the retirement of a.out support (in favor of ELF).
I would argue that how long this obsolete format remained supported was actually quite impressive.
The model of patching+recompiling the world for every OS release is a terrible hack that devs hate and that users hate. 99% of all people hate it because it's a crap model. Devs hate middlemen who silently fuck up their software and leave upstream with the mess; users hate being restricted to whatever software was cool and current two years ago. If they use a rolling distro, they hate the constant breakage that comes with it. Of the 1% of people who don't hate this situation, 99% merely tolerate it, and the rest are Debian developers who are blinded by ideology and sunk costs.
Good operating systems should:
1. Allow users to obtain software from anywhere.
2. Execute all programs that were written for previous versions reliably.
3. Not insert themselves as middlemen into user/developer transactions.
Judged from this perspective, Windows is a good OS. It doesn't nail all three all the time, but it gets the closest. Linux is a bad OS.
The answers to your questions are:
(1) It isn't backwards compatible for sophisticated GUI apps. Core APIs like the widget toolkits change all the time (GTK 1->2->3->4; Qt does the same). It's also not forwards compatible: compiling the same program on a new release may yield binaries that don't run on an old release. Linux library authors don't consider this a problem; Microsoft, Apple and everyone else do. This is the origin of the glibc symbol versioning errors everyone runs into occasionally. (A small illustration follows after these answers.)
(2) Maintaining a stable API/ABI is not fun and requires a capitalist who says "keep app X working or else I'll fire you". The capitalist Fights For The User. Linux is a socialist/collectivist project with nobody playing this role. Distros like Red Hat clone the software ecosystem into a private space that's semi-capitalist again, and do offer stable ABIs, but their releases are just ecosystem forks and the wider issue remains.
(3) It hasn't changed and it's still bad.
(4) Docker: it "solves" the problem on servers by shipping the entire userspace with every app, and is itself developed by a for-profit company. It only works because servers don't need any shared services from the computer beyond opening sockets and reading/writing files, so the kernel is enough, and the kernel does maintain a stable ABI. Docker obviously doesn't help the moment you move outside the server space and the coordination requirements grow.
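To make the forwards-compatibility point in (1) concrete, here's a minimal sketch. The file name and the example symbol version are just for illustration, but gnu_get_libc_version() is a real glibc call, and the mechanism described in the comments is how the dynamic loader behaves:

```c
/* glibc_compat.c - tiny illustration of the symbol-versioning problem.
 * Build: gcc glibc_compat.c -o glibc_compat
 *
 * At link time the binary records versioned references such as
 * printf@GLIBC_2.2.5 in its dynamic symbol table. If it is built on a
 * machine whose glibc is newer than the target's, some references may name
 * versions the target glibc doesn't export, and ld.so will then refuse to
 * start the program with a "version `GLIBC_X.YZ' not found" style error.
 */
#include <stdio.h>
#include <gnu/libc-version.h>   /* declares gnu_get_libc_version() */

int main(void) {
    /* Report which glibc the binary is actually running against. */
    printf("running against glibc %s\n", gnu_get_libc_version());
    return 0;
}
```

Inspecting the resulting binary with objdump -T (or readelf -V) shows exactly which versioned symbols it depends on.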
It seems like Linux's ethos is also its biggest problem. It's a bunch of free software people reinventing, not just the wheel, but every part of the bus. When someone shows up and wants to install a standard cup holder, it's hard when none of your bus is standard.
Maybe you are running a desktop environment which never changes, but GNOME has been constantly broken in many different ways for the last 5+ years. At times it felt more like a developer playground than a usable desktop environment. KDE is more stable nowadays, but it still breaks in mysterious ways from time to time.
I also had major issues for some time when Qt6 started rolling out.
And Arch itself also needs manual interventions on package updates every so often; just a few weeks ago there was a major change to the NVidia driver packaging.
I've been running GNOME. I've never had breakage from upgrading. Of course there's the fact that GNOME neutered itself, removing many of its own features, but that's a different story and has nothing to do with ABIs or upgrading.
> And Arch itself also needs manual interventions on package updates every so often; just a few weeks ago there was a major change to the NVidia driver packaging.
If you're running a proprietary driver on a 12-year-old GPU architecture that can't handle modern games or AI, yeah... I actually haven't needed to care about many of these interventions myself. Maybe 2 or 3 ever...
Considering that these CEOs are talking about replacing all the skilled and unskilled labor under them with LLMs, I don't see why they can't be replaced too. In reality, LLMs are overhyped. Even Grok says it straight: LLMs are probability models over condensed human knowledge that decide what the next word/token should be. Original thought isn't their forte.
(Surprisingly though, that's enough for them to recognize that you're a human. Their models can identify your complex thought progression in your prompts - no matter how robotic your language is.)
The REAL problem here is the hideous narrative some of these CEOs spin. They swing LLMs around to convince everyone that their workers are replaceable, thereby crashing the value of the job market and increasing their own profits. At the same time, they project themselves as super-intelligent, almost divine beings with special abilities without which the world will not progress, while in reality they maintain an exclusive club of wealthy connections that they guard jealously by ruining opportunities for others (the proverbial 'burning the ladder behind them'). They use their PR resources to paint a larger-than-life image that hides the extreme destruction they leave behind in the pursuit of wealth - like masking a hideous odor with bucketfuls of perfume. These two problems are two sides of the same coin, and together they expose the duplicity and deception at play.
PS: I have to say that this doesn't apply to all CEOs. There are plenty of skilled CEOs, especially founders, who play a huge role in setting their companies up. Here I'm talking about the stereotypical cosmopolitan bunch that comes to mind when we hear that word - the ones who have no qualms about destroying the world for their enjoyment and who look down on normal people as if they're just fodder.
> The first part works because otherwise reusable rockets wouldn't have been invented (or maybe they'd have been invented 20 years later).
I do not want to take credit away from SpaceX for what they achieved. It sure is complex. But it's also possible to give someone excess credit by denying others their due. I don't know which part of 'reusable rockets' you're talking about, whether it's the reusable engines and hardware or the VTOL technology, but none of that was 'invented' by SpaceX. NASA had been doing it for decades, but never had enough funding to put it all together.

Talking about reusable hardware and engines, the Space Shuttle Orbiter is an obvious example: the manned upper stage of a rocket that entered orbit and was reused repeatedly for decades. SpaceX doesn't yet have an upper stage that has done that; the only Starship among the nine to even survive reentry never entered orbit in the first place. As for the 'reusable engine', do you need a better example than the RS-25/SSME on that same orbiter?

Now let's talk about VTOL rockets. Weren't the Apollo LMs able to land and take off vertically back in the 1960s? NASA also had the 'Delta Clipper' experiment in the 1990s, which did more or less the same thing as SpaceX's Grasshopper and Starship SN15: 'propulsive hops', multiple times. Another innovation at SpaceX is the full-flow staged combustion cycle used in the Raptor engine. To date, it is the only full-flow staged combustion engine to have operated in space. But both NASA and the USSR had tested such engines on the ground. Similarly, Starship's silica heat tiles are entirely of NASA heritage - something they never seem to mention in their live telecasts.
I see people berating NASA while comparing them with SpaceX. How much of a coincidence is it that the technologies used by SpaceX fall squarely under NASA's expertise? The real engineers at SpaceX wouldn't deny those links; many of them were veterans who worked with NASA to develop them. And that's fine. But it's very uncharitable to not credit NASA at all. The really important question right now is: how many of those veterans are left at SpaceX, improving these things? Meanwhile, unlike SpaceX, which kept getting government contracts no matter how many times it failed, NASA would find its funding cut every time it looked like it was achieving something.
> It's the same as Steve Jobs, the Android guys were still making prototypes with keyboards until they saw the all screen interface of the iPhone.
Two things that cannot be denied about Steve Jobs are that he had an impeccable aesthetic sense and the larger-than-life image needed to market his products. But nothing in the iPhone was new even in 2007. Full capacitive touch screens, multi-touch technology, etc. were already on the market in niche devices like PDAs. The technology just wasn't advanced enough back then to bring it all together. Steve Jobs had the team and the resources needed to do that for the first time, but he didn't invent any of those. Again, this is not to take away credit from Jobs for his leadership.
> Sometimes it requires a single individual pushing their will through an organization to get things done, and sometimes that requires lying.
This is the part I have a problem with. All the work done by others is just neglected. All the damage done by these people is also neglected. You have no idea how many new ideas from their rivals they drive into oblivion so as to protect their image. Leaders are a cog in the machine, just like everyone else working with them to generate the value. But this sort of hero worship, which ignores everyone else as well as the leaders' transgressions, is a net negative for the human race. They aren't some sort of divine magical beings.
I understand the issue with all the devices. But what about the rest of the things that depend on these electronics, especially DRAM? Automotive, aircraft, marine vessels, ATC, shipping coordination, traffic signalling, rail signalling, industrial control systems, public utility (power, water, sewage, etc.) control systems, transmission grid control systems, HVAC and environment control systems, weather monitoring networks, disaster alerting and management systems, ticketing systems, e-commerce backbones, scheduling and rostering systems, network backbones, entertainment media distribution systems, defense systems, and I don't know what else. Don't they all require DRAM? What will happen to all of them?
Industrial microcontrollers and power electronics use older process nodes, mostly >=45nm. These customers aren’t competing for wafers from the same fabs as bleeding edge memory and TPUs.
Okay, but what about the rest? The ones that aren't embedded in some way and use industrial-grade PCs/control stations? Or ones with large buffers, like network routers? I'm also wondering about the supply of the alternate nodes and older technologies. Will the manufacturers keep those lines running? Was it Micron that abandoned the entire retail market in favor of supplying the hyperscalers?
> The ones that aren't embedded in some way and use industrial-grade PCs/control stations? Or ones with large buffers, like network routers?
Not sure if they require DDR5, but the AI crisis just caused DDR5 prices to rise, and that pushed the market toward DDR4, which is why DDR4 got more expensive too.
> I'm also wondering about the supply of the alternate nodes and older technologies.
I suppose these might be Chinese companies, though there might be some European/American ones too (not sure). But if things continue, demand is going to strain them as well, and they might increase their prices too.
> Was it Micron that abandoned the entire retail market in favor of supplying the hyperscalers?
That might be the case only for the infotainment system, but there are usually many other ECUs in an EV. The ADAS ECUs carry similar amounts of memory to an iPhone or the infotainment system. Telematics is usually also a relatively complex one, but more towards the lower end in memory size.
Then you have around 3-5 other midsized ECUs with relatively high memory sizes, or at least enough to require MMUs and to run more complex operating systems supporting typical AUTOSAR stacks.
And then you have all the small size ECUs controlling all small individual actuators.
And all the complex sensors like radars, cameras and lidars also carry relevant amounts of memory.
I still think your point is valid, though. There's no order-of-magnitude difference in expensive RAM compared to an iPhone. But cars also carry lots of low-speed, automotive-grade memory in all the ECUs distributed throughout the vehicle.
I can't say; it depends on the vehicle. But easily in the 50-64GB range.
If there’s three main ECUs at 16GB each, you’re already hitting 50GB. Add 2-4GB for mid size ecus, and anything in between KBs and some MB for small ECUs.
Okay, accepted. But are you sure that supply won't be a problem as well? I mean, even if these products use different process nodes than the hyperscalers, will the DRAM manufacturers even keep those nodes running for these industries?
What will probably happen is that the resale/second-hand market for these will grow.
> will the DRAM manufacturers even keep those nodes running for these industries?
Some will, some might not. In my opinion, the longevity of these brands will depend on whether they keep selling RAM to average consumers and consumer brands, so I guess we might see new competition, or more market share going to the fab companies beyond the main three in this industry.
I'm sure some company will align 100% with consumers, but the problem, as I see it, is that they wouldn't be able to produce enough for consumers in the first place, so prices might still rise.
And those prices will most likely be paid by you in one form or another. But it will be interesting to see how long the companies that buy DRAM from these providers, build datacenters, or do anything RAM-intensive can hold their prices steady; perhaps they'll eat the loss in the short term, similar to what we saw some companies do during the Trump tariffs.
Self-hosted FOSS apps are probably the best push towards computing freedom and privacy today. But I wish the self-hosting community moved towards a truly distributed architecture, instead of trying to mimic the paradigms of corporate centralized software. This is not meant as criticism of the current self-hosted architecture or the apps. But I wish the community focused on a different set of features that suit home computing conditions more closely:
1. A peer-to-peer model of decentralization like BitTorrent, instead of the client-server model. Local web UIs (like Transmission's web UI) may be served locally (either host-only or LAN-only) as frontends for these apps. Consider this the 'last-mile connectivity', if you will.
2. Applications that are resilient to outages. Obviously, home servers can't be expected to be always online; they may even be running on your regular desktop. But you shouldn't lose the utility of the service just because it goes offline. Email is a great example: servers can wait up to two days for the destination server to show up before declaring a delivery failure, and even temporary rejections are handled with retries minutes later.
3. The applications should be able to deal with dynamic IPs and NATs. We will probably need a cryptographic identity mechanism and a way to translate that into a connection to the correct end node, but most of these technologies exist today. (A rough sketch of such an identity appears after this list.)
4. E2E encrypted and redundant storage and distribution servers for data that must absolutely be online all the time. Nostr relays seem like a good example.
The Solid and Nostr projects embody many of these ideas already. It just needs a bit more polish to feel natural and intuitive. One way to do it is to have a local daemon that acts as a gateway, cache and web-ui to external data.
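To give a feel for point 3 above, here's a rough sketch of a node identity built on signatures rather than IP addresses, using libsodium's Ed25519 API. The 'announcement' message and the flow around it are made up for illustration; Nostr does something broadly similar with secp256k1 keys.

```c
/* identity_sketch.c - hedged sketch of a transport-independent node identity
 * for a peer-to-peer self-hosted service. Build: gcc identity_sketch.c -lsodium
 *
 * The key pair *is* the node's stable address: peers verify signatures
 * instead of trusting whatever IP a message happened to arrive from, so
 * dynamic IPs and NAT traversal become a routing detail rather than an
 * identity problem.
 */
#include <sodium.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (sodium_init() < 0) return 1;              /* library must initialize */

    unsigned char pk[crypto_sign_PUBLICKEYBYTES]; /* public key = node identity */
    unsigned char sk[crypto_sign_SECRETKEYBYTES];
    crypto_sign_keypair(pk, sk);                  /* generate an Ed25519 pair */

    /* A hypothetical service announcement this node wants to publish,
     * e.g. via an always-on relay while the home server itself is offline. */
    const char *msg = "media-server: reachable again at new endpoint";
    unsigned char sig[crypto_sign_BYTES];
    crypto_sign_detached(sig, NULL, (const unsigned char *)msg,
                         strlen(msg), sk);        /* sign with the node's key */

    /* Any peer or relay holding only the public key can verify that the
     * announcement really came from this node, regardless of its current IP. */
    int ok = crypto_sign_verify_detached(sig, (const unsigned char *)msg,
                                         strlen(msg), pk);
    printf("signature %s\n", ok == 0 ? "verified" : "rejected");
    return 0;
}
```

The missing piece is then a lookup layer that maps the public key to the node's current endpoint, which is roughly what DHTs, Nostr relays, or projects like Iroh provide.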
Yeah, I have been planning to try out Iroh sometime soon. However, what I explained will take a whole lot of planning on top of Iroh. I also don't want to replicate what others have already achieved. It would be best if something could be built on top of those. Let's see how it goes.
> Sounds like you want a k3s based homelab and then connect it all with Tailscale or Netbird.
I apologize if it was confusing. I was suggesting the exact opposite. It's not about how to build a mini enterprise cluster; it's about how to change the service infrastructure to suit the small computers we usually find in homes, without any modifications. I'm suggesting a more fundamental change.
> I have reliable electricity and internet at home, though.
It isn't too bad where I'm at, either. But sadly, that isn't the practical situation elsewhere. We need to treat power and connectivity as random and intermittent.
You can do this now. It would likely require packaging your services up in Windows installers that deploy Windows services. They would run across most computers you find in homes.
> You could argue that Plex, MinIO or Mattermost is being enshittified, but definitely not self hosting as a whole.
That's probably not how you should interpret it. Self hosting as a whole is still a vastly better option. But if there is a significant enough public movement towards it, you can expect it to be targeted for enshittification too. The incidents related to Plex, MinIO and Mattermost should be taken as warning signals about what this may escalate into in the future. Here are the possible problems I foresee.
1. The situation with Plex, MinIO and Mattermost can be expected to happen more frequently. Beyond a point, the pain of frequent migration will become untenable. MinIO is a great example: even the crowd on HN hadn't considered an alternative until then. Some of us learned about Garage, RustFS and Ceph S3 for the first time and were debating their pros and cons. It's telling how lengthy that discussion was.
2. There is a gradual nudge to move everything to the cloud and then monetize it. The mandatory online account for Win11, the monetization of the GH self-hosted runner (now suspended after backlash, I think) and the cloudification of MS Office are good examples. You can expect a similar attempt on self-hosted applications. Of course, most of our self-hosted software is currently open source. But if these big companies decide to embrace, extend and extinguish it, I'm not sure the market will be prudent enough to stick with the FOSS options. Half of HN was fighting me a few days back when I suggested that we should strive to push the market towards serviceable, modular hardware.
3. FOSS projects developed under companies are always at a higher risk of being hijacked or going rogue. To be clear, I'm not against that model. For example, I'm happy with Zulip's development and monetization model - ethical, generous and not too pushy. But Mattermost shows where that can go wrong. Sure, they are open source, but there are practical difficulties in easily working around such issues.
4. At one time, we were expecting small form-factor headless computers (plug computers [1]) like the SheevaPlug and FreedomBox to become ubiquitous. That should still be an option, though I'm not sure where it's headed given the current RAM situation. But even if they make a comeback, it's very likely that OEMs will lock them down like smartphones today and make it difficult for you to exercise your choice of servers, if not outright restrict it. (If anybody wants to argue that normal people will never consider it, remember what smartphones were like before the iPhone: we had the BlackBerry, which was used only by a niche crowd.)