
> In AT, data is tied to identity (DID), not handles or hosting.

But there are exactly two types of identity: one requires you to have a domain (did:web:), and the other relies on a centralized registry owned by a third party (did:plc:). Still the exact same problem.
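
For illustration, the two forms look like this (the plc suffix below is made up):

  did:web:alice.example.com    resolved via HTTPS on a domain you control
  did:plc:ab12cd34ef56gh78ij   resolved via the centralized plc.directory registry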


I am assuming that plc involves keys which are in the user's control. If that's not true, all bets are off.

So I wonder if the protocol could be extended to allow migrating from a did:plc to a did:web. Or maybe to did:plcalt (an alternative plc server; I just made up the name, but I think you get the idea).

If I understand correctly, it should be trivial to prove you own a plc identity, even if the plc is rejecting further changes to your id, so long as you can somehow tell the network an alternative place to look.

Edit: as per https://updates.microcosm.blue/3lz7nwvh4zc2u it seems like there are some theoretical mitigations to an adversarial plc, but overall it seems like if a plc was adversarial starting today it would damage the ATmosphere quite a lot.



There's no incentive. Passengers rarely know what type they'll be flying on when they book, and prioritize it over price even less. For airlines, a bigger airplane is a distinct disadvantage though, as it's operationally more expensive (increased cross section equals increased drag equals increased fuel burn).


A fully loaded 747 is extremely profitable; the large size has economies of scale. That's why the 747 was enormously popular with the airlines.

So, yes, a 747 burns more fuel. But the fuel burn per paying passenger is less.


The real issue with the 747 is that people will take a point-to-point route if at all possible. Worse, flying a small plane point to point is cheaper for the passenger than flying two 747s. If you live in Lincoln, NE, sorry: your city is too small to get direct flights to anything but close major hubs (and even then, odds are you drive to nearby Omaha, further reducing demand). However, if you live in a larger non-hub city, airlines can undercut each other by just doing direct flights to other large non-hub cities.


The 747 was at its most efficient when flying long haul routes, like overseas. The 747 was immensely profitable for Boeing for several decades. Every sale was a giant chunk of cash dumped on the company. But none of that would have happened if the 747 wasn't also immensely profitable for the airlines.


True, but small planes are profitable too, even on long flights. They have to compete against the more profitable large ones, and they do that by emptying them: I want to get to a destination, and if a direct flight on a small plane isn't much more money, it's cheaper for me than transferring at a hub and paying for a second plane to where I want to be. More frequent smaller planes can also fit my schedule better, which saves money.

There is a reason nobody flies the 747 anymore: it isn't profitable enough against the 777 and smaller planes, which are cheaper to run.


> there is a reason nobody flies the 747 anymore

The reason is the aerodynamics of it are 60 years old making it no longer competitive with modern aerodynamics.

Compare your car with a 1965 Chevy Impala, for example.


Mostly it's about engine tech. A 777/787 or whatever can fit almost as many passengers as a 747, but has only two engines, burning less fuel and requiring less maintenance.

Back when the 747 was designed, engine tech wasn't there yet to build really big two-engine airplanes. There was also the issue of ETOPS limits: the regulations on how far from the nearest airport you could fly a two-engine aircraft were stricter than today, so many routes over oceans needed more than two engines.


There's also the issue of cargo space: the 777-300 actually has a larger hold than the 747, about 11% more. Cargo is pretty lucrative, so even passenger airlines like being able to devote some of their hold space to it.


Modern wings made a huge difference. Take a close look at a modern wing vs a 747 wing.


Modern wings could be retrofitted to the 747. Maybe not completely, but the more important features could. However, so many other parts of the 747 no longer make sense that not enough buyers (if any!) would exist even for a re-winged version.


The 757 started out as a re-winged and re-engined 727. It turned out to be cheaper to design a new airplane.


It was an old airplane too and not as optimized as newer airplanes in terms of engines, aerodynamic design, weight and so forth.

The A380 was a step up in size and had additional problems such as there not being that many airports which had upgraded their gates and other facilities to support an airplane that big.


As aviation technology improved, the 747 could not improve its aerodynamics and so became relatively costlier to fly. Its longevity, however, was due to its cost-effectiveness over several decades.


That only works if you make the airplane sufficiently bigger to fit more seats and thus more paying passengers. The parent comment was arguing to make the plane only a little bigger, so that each passenger has more space but no extra seats can be fitted.


> Passengers rarely know what type they'll be flying on when they book, and prioritize it over price even less.

Virtually every booking page gives you that information during booking, and I (and several of my friends) actively avoid any flight that has a MAX operating it, to the point that we'd rather fly longer and/or more expensive alternative routes operated with other models.


bcachefs in the upstream kernel was explicitly marked as being experimental, you can't consider it a stable release.


> Both should be supported.

At least in KDE they are: you can pick whether you want natural or alphabetical sorting (with case-sensitive and case-insensitive variants).


> Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal?

You start the ssh client in the terminal, it opens a browser to authenticate, and once you're logged in you go back to the terminal. The usual trick to exfiltrate the authentication token from the browser is that the ssh client runs an HTTP server on localhost to which you get redirected after authenticating.
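
A bare-bones sketch of that trick, assuming the client registered http://127.0.0.1:8400/callback as its redirect URI (port, path, and response are made up; a real client also validates the state parameter and uses a proper HTTP parser):

  /* listen on localhost, catch the OAuth redirect, extract ?code=... */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  int main(void) {
      int srv = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr = {0};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* localhost only */
      addr.sin_port = htons(8400);
      bind(srv, (struct sockaddr *)&addr, sizeof addr);
      listen(srv, 1);

      /* the browser lands here after login:
       * GET /callback?code=AUTH_CODE HTTP/1.1 */
      int conn = accept(srv, NULL, NULL);
      char buf[4096] = {0};
      read(conn, buf, sizeof buf - 1);

      char *code = strstr(buf, "code=");
      if (code)
          printf("authorization code: %.*s\n",
                 (int)strcspn(code + 5, " &\r\n"), code + 5);

      const char *resp = "HTTP/1.1 200 OK\r\nContent-Length: 26\r\n\r\n"
                         "You can close this window.";
      write(conn, resp, strlen(resp));
      close(conn);
      close(srv);
      return 0;
  }

The client then exchanges that code for the actual token over HTTPS.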


That, or the SSH client opens a separate connection to the authorization server and polls for the session state until the user has completed the process; that would be the device code grant, which would solve this scenario just fine.
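
A rough sketch of that polling loop using libcurl (compile with -lcurl; the endpoint, device_code, and client_id are placeholders, and a real client first calls the device authorization endpoint to obtain them, parses the JSON responses, and honors the interval the server returns):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <curl/curl.h>

  static size_t collect(char *p, size_t sz, size_t n, void *userdata) {
      strncat(userdata, p, sz * n);   /* naive: assumes a small response */
      return sz * n;
  }

  int main(void) {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      char body[8192];

      for (;;) {
          body[0] = '\0';
          CURL *h = curl_easy_init();
          curl_easy_setopt(h, CURLOPT_URL, "https://auth.example/token");
          curl_easy_setopt(h, CURLOPT_POSTFIELDS,
              "grant_type=urn:ietf:params:oauth:grant-type:device_code"
              "&device_code=DEVICE_CODE&client_id=CLIENT_ID");
          curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, collect);
          curl_easy_setopt(h, CURLOPT_WRITEDATA, body);
          curl_easy_perform(h);
          curl_easy_cleanup(h);

          if (strstr(body, "access_token")) {   /* user finished in the browser */
              printf("token response: %s\n", body);
              break;
          }
          if (!strstr(body, "authorization_pending"))
              break;                            /* real error: give up */
          sleep(5);                             /* use the server's interval */
      }
      curl_global_cleanup();
      return 0;
  }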


You're both talking about web authentication, not HTTP authentication. cf. https://news.ycombinator.com/item?id=45399594


Only to obtain the token, the actual connection itself uses HTTP authentication (Bearer scheme).
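
i.e. once the token is obtained, every request on the connection just carries it in the standard header (token value is a placeholder):

  GET /resource HTTP/1.1
  Host: api.example.com
  Authorization: Bearer eyJhbGciOiJFUzI1NiJ9...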


To be clear, this is about the internal implementation in the kernel, the mmap() system call is not going anywhere.


I'm relieved, but also somewhat befuddled that someone would write such a shocking headline. It immediately had me reaching for the lkml archives to find out what's really going on.


In its defence, the headline says "file operation" rather than "syscall", which makes it slightly less egregious: it's referring to `mmap` as a member of `struct file_operations`.
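
For reference, the hook in question, abridged from include/linux/fs.h as it looked before this work:

  struct file_operations {
          /* ... */
          int (*mmap) (struct file *, struct vm_area_struct *);
          /* ... */
  };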


The mmap syscall operates on files, so it's still very easily misinterpreted.


Which worked as intended: I first had a shock, did a double take, and realised there was nuance in "file operation"; I read a little bit of the article and confirmed my suspicion that it didn't have anything to do with the syscall.


mmap is POSIX, so it's not going anywhere and you can rely on it until POSIX systems are phased out or the heat death of the universe, whichever comes sooner.
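
For the avoidance of doubt, a minimal sketch of the interface that's staying put (file path is arbitrary):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <sys/stat.h>

  int main(void) {
      int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
      struct stat st;
      if (fd < 0 || fstat(fd, &st) < 0) return 1;
      char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (p == MAP_FAILED) return 1;
      printf("first byte: %c\n", p[0]);           /* read through the mapping */
      munmap(p, st.st_size);
      close(fd);
      return 0;
  }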


Indeed. But even so, it's mildly shocking, as struct file_operations has been one of the most deeply (and most promiscuously) integrated and most conservative bits of the kernel API. This stuff dates back decades and almost never changes. And there are a lot of raw file drivers[1] still there from eras most people have forgotten about.

This is a big, big reorg even for Linux.

[1] To be fair, most of which probably don't support mapping.


Yes, that's true. However, it's a kernel-internal API, and those have never been considered stable, unlike the system call ABIs, which are mostly sacrosanct. Except for, like, uselib(). This is because pretty much all the code that calls the kernel-internal APIs is in a monorepo, so you can fix it all when you make the change.

Also, it's not that the core kernel is ceasing to provide a facility that drivers depended on; rather, it's ceasing to depend on a facility that drivers provided. But doing so involves adding this new mmap_prepare() thing, which is making the kernel depend on a new facility that drivers now must provide, I guess?


Thank you, that was the first thing I had to check.


"We do NOT break userspace"


_shifty eyes over at cgroups_


Or the numerous syscall breakages (2.4 to 2.6 was most notable, but there have been plenty before/since).

Or all sorts of things in /proc/ and /sys/.

And the sheer nastiness of PPID 0.

And ...


If you want a POSIX OS, nommu Linux already isn't it: it doesn't have fork().


Just reading about this... it turns out nommu Linux can use vfork(), which, unlike fork(), shares the parent's address space. Another drawback is that vfork's parent process gets suspended until the child exits or calls execve().


Typically you always call vfork() + execve(); vfork is pretty useless on its own.

Think about it like CreateProcess() on Windows. Windows is another operating system which doesn't support fork(). (Cygwin did unholy things to make it work anyway, IIRC.)
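
A minimal sketch of that pattern (the program and arguments are arbitrary):

  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/wait.h>

  int main(void) {
      pid_t pid = vfork();
      if (pid == 0) {
          /* child: borrows the parent's address space until execve();
           * doing anything here other than exec or _exit is undefined */
          execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
          _exit(127);   /* only reached if execl() failed */
      }
      /* parent: was suspended until the child exec'd or exited */
      int status;
      waitpid(pid, &status, 0);
      return 0;
  }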


The common answer here is that they should destroy them instead.


Yes, but if they're ever sent over an HTTPS connection that was established using ECDHE key exchange, anyone who recorded that traffic can decrypt it in the future if quantum computers materialize.

On the other hand, we already give our passport information to every single airline and hotel we use. There must be hundreds if not thousands of random entities across the globe that already have mine. As long as certain key information is rotated occasionally (e.g. by making passports expire), maybe it doesn't really matter.


> What is the cap of throughput is due to these speed limitations is an exercise left for the author of the article.

They already did that exercise:

> 3-car trains running at 30-40 trains per hour (a normal peak frequency for automated or even some human-driven metro lines) reach a capacity of about 18,000 passengers per hour per direction, well above the expected demand of any American line that doesn’t go through Manhattan.
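
(The arithmetic works out if you assume roughly 150 passengers per car: 150 × 3 cars × 40 trains/hour = 18,000 passengers per hour per direction.)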


40 trains per hour is in fact not "normal", but extremely difficult. Only a few systems in the entire world operate more than 30 per hour.

The fundamental constraint is not technology, but people and physics: you need to decelerate and stop, let people disembark and get on, accelerate and clear the platform. This cycle requires a bare minimum of 90 seconds, although IIRC a few lines in a few places like Paris and Moscow do 85 secs.


SEPTA's T [1] gets up to 70 TPH and used to handle 150 TPH. You can do this with multiple trolleys loading/unloading on a platform simultaneously.

(But this strategy is orthogonal to the article, because it requires long platforms.)

[1] https://en.wikipedia.org/wiki/T_(SEPTA_Metro)


Indeed, the Victoria line in London manages 36 TPH and we've not bothered beating it since. It's much easier to run 26-30 TPH with slightly more carriages.


> the Victoria line in London manages 36 TPH and we've not bothered beating it since

That was a world record for a line following modern safety standards, set less than 10 years ago. It's hardly a case of "not bothered", it's just hard.


90 seconds is very possible on new-build lines, which is what the author is talking about. You can buy a turnkey Innovia (e.g. Vancouver SkyTrain) or AnsaldoBreda (e.g. Copenhagen) system that does this out of the box. Retrofitting 90-second operation is basically impossible, but that's not the point of this exercise.


Yes, they are assuming a best-case scenario. Driverless systems are very expensive for reasons that have little to do with the cost of the driverless trains; if you're not going to consider those variables, this kind of armchair speculation is a waste of everyone's time.


They aren't though? If you're building a new line, fully driverless is pretty much the default these days, especially if the line is fully underground or elevated.

What is incredibly expensive, though, is retrofitting a line designed for manual operation to run automatically instead.


Well, a lot of systems were initially designed for automatic operation but ended up being operated manually or partially manually due to safety concerns or politics. Washington DC Metro and BART are the two big systems I can think of that had this issue.


Both are examples of Great Society metros that were on the bleeding edge of what was possible in the early '70s. Automatic train control advanced rapidly, with both the Vancouver SkyTrain and the London Docklands Light Railway being built in the '80s and operating driverless for their entire existence.

DC Metro just recently re-enabled full automatic train operation across all the lines in June.


I see


  > They already did that exercise
No, they didn't.

They took "30-40 trains per hour" out of thin air, and the exercise was to calculate whether it is even possible to run more frequent, shorter trains.


> They have broken the userspace ABI for lots of libraries again.

If the old ABI used a 32-bit time_t, breaking the ABI was inevitable. Changing the package name prevents problems by signaling the incompatibility proactively, instead of resulting in hard-to-debug crashes due to structure/parameter mismatches.
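
A tiny sketch of why such mismatches are nasty (hypothetical struct; sizes are for a 32-bit build):

  #include <stdio.h>
  #include <time.h>

  struct log_entry {
      time_t when;    /* 4 bytes with 32-bit time_t, 8 with 64-bit */
      int    level;
  };

  int main(void) {
      /* On an affected 32-bit port this prints 8 with the old ABI and 16
       * when built with -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 (including
       * alignment padding). A library and its caller disagreeing on this
       * is silent memory corruption, not a compile error -- hence the
       * package rename. */
      printf("sizeof(struct log_entry) = %zu\n", sizeof(struct log_entry));
      return 0;
  }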


It isn't inevitable. It's only inevitable if you care about timestamps being correct, which for many users of those ABIs doesn't matter too much: they, e.g., only care about relative time.

It also isn't strictly necessary until 2038 (depending on your needs for future timestamps), so you'd be creating problems now for people who might have migrated to something else in the 13 years that the current solution will still work for.


Inevitable... for Linux. Other platforms found better solutions. Windows doesn't have any issues like this: the Win32 API doesn't have the epoch bug, 64-bit apps don't have it, and the UNIX-style C library (not used much except by ported software) makes it easy to get a 64-bit time without an ABI break.


> Other platforms find better solutions.

Other platforms make different trade-offs. Most of the pain is because on Debian, it's customary for applications to use system copies of almost all libraries. On Windows, each application generally ships its own copies of the libraries it uses. That prevents these incompatibility issues, at the cost of making it much harder to patch those libraries (and a little bit of disk space).

There's nothing technical preventing you from taking the same approach as Windows on Debian: as you pointed out, the libc ABI didn't change, so if you ship your own libraries with your application, you're not impacted by this transition at all.
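
A sketch of what that looks like in practice (library name and layout are illustrative): put the bundled copies next to the binary and point the loader at them with an $ORIGIN-relative rpath, e.g.

  gcc app.c -o app -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs'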


Personally, I only really consider glibc to be the system library of Linux, and it supports both variants depending on compiler flags. Both functions are compiled into glibc, I guess with the 32-bit one just wrapping the 64-bit one.

However, other libraries (Qt, Gtk, ...) don't do that compatibility stuff. If you consider those to also be system libraries, then yeah, it's breaking the ABI of system libraries. Though a pre-compiled program under Linux could just bundle all* of its dependencies and either use glibc (probably a good idea), statically link musl, or even do system calls on its own (probably not a good idea). Linux has a stable system call interface!

(*) One can certainly argue about that point. Not sure about it myself anymore when thinking about it, since there are things like libpcap, libselinux, libbpf, libmount, libudev etc., and I don't know if any of them use time_t anywhere, and if they do, whether they support the -D_FILE_OFFSET_BITS=64 and -D_TIME_BITS=64 stuff.
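
For what it's worth, checking which variant a build selects is trivial (compile on a 32-bit target with and without -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64):

  #include <stdio.h>
  #include <time.h>

  int main(void) {
      printf("sizeof(time_t) = %zu\n", sizeof(time_t));   /* 4 old, 8 new */
      return 0;
  }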


All true, but qcnguy's point is valid: if you are distributing .deb files outside of Debian's repos, on the affected architectures you need both a pre-Trixie version and a Trixie-onward version.


Shipping separate debs is usually the easiest, but not the only solution. It's totally possible to build something that's compatible with both ABIs.


How?

I suppose in theory, if there's one simple library that differs in ABI, you could have code that tries to dlopen() both names and uses the appropriate ABI. But that seems totally impractical for complex ABIs, and forget about it when glibc is one of the ones involved.
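
Something like this for the single-library case (library names and sonames are hypothetical; link with -ldl on older glibc):

  #include <stdio.h>
  #include <dlfcn.h>

  int main(void) {
      /* try the new ABI first, fall back to the old one */
      void *h = dlopen("libfoo.so.2", RTLD_NOW);
      int new_abi = (h != NULL);
      if (!h)
          h = dlopen("libfoo.so.1", RTLD_NOW);
      if (!h) {
          fprintf(stderr, "no usable libfoo: %s\n", dlerror());
          return 1;
      }
      /* dlsym() the entry points, then use 64- or 32-bit time_t layouts
       * depending on new_abi -- which is exactly the impractical part */
      printf("loaded the %s ABI\n", new_abi ? "new" : "old");
      dlclose(h);
      return 0;
  }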

There's no ABI breakage anyway if you do static linkage (+ musl), but that's not practical for GUI stuff for example.

I suppose you could bundle a wrapper .so for each that essentially converts one ABI to the other and include it in your rpath. But again, that doesn't seem easy given the number and complexity of libraries affected.

