> 1) git will not accept the push because it's not on top of the current master branch; person B needs to fetch and merge/rebase before pushing again.
But is this not the right thing to do? A kernel is a complex piece of software. Changes in one place can have very non-obvious consequences in other places (think of changes that cause a deadlock because locks are acquired in the wrong order). Of course, it is theoretically nice if I know that a change to e.g. documentation, or fixing a typo in a comment, does not affect the Ethernet driver or the virtual file system layer, but this comes down to the architecture of the project - it is not something that a version control system can prove.
Given that, it seems desirable to me that the source tree has as few different variations, and permutations of how to get there, as possible, since this makes testing, and things like bisecting for a broken lock or another violated invariant, much easier.
I worked for some time at an industrial/embedded company, where in order to build all the software, you had to select "build all" in a menu, and it built everything - more than four million lines of code.
It was a build system which was a pure pleasure to work with, not least, I think, because it did not try to solve problems which turn out to be intractable in the general case.
They are not going to have these supply-chain issues.
> Regulators ultimately approved the plane to return to the air nearly two years after the 2019 crash, but Pierson still doesn’t trust the MAX line — the modernized, *more fuel-efficient version* of Boeing’s predecessor planes.
I think we are witnessing the beginning of the end of the fossil age - and its end will take down more than a few companies that are way too attached to the massive use of fossil fuels.
> Not to mention multiple orders of magnitude safer than driving to the airport.
I guess that's counting accidents per distance. Which is a weird metric if you compare a vehicle that is going some 560 miles per hour with one that is going not even 55 mph on average.
By that metric, using a space shuttle is safer than commuting by bicycle in Europe (which is not only extremely safe if you wear a helmet - statistically, cycling will make your life longer rather than shorter in spite of the accident risk, so large are the health benefits).
If I need to go to a place 1600 km away and can choose between a car and a plane, then fatal accidents per km is the metric I would use to estimate the probability of a fatal accident in either case.
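A rough sketch of why the per-km rate answers that question directly (my own notation: r_car and r_plane are the fatal accidents per km for each mode, assumed roughly constant over the route and small enough that the trip risk simply scales with distance):

```latex
P_{\text{car}} \approx r_{\text{car}} \cdot 1600\,\text{km}, \qquad
P_{\text{plane}} \approx r_{\text{plane}} \cdot 1600\,\text{km}
```

Whichever mode has the lower per-km rate gives the lower trip risk, no matter how long each trip takes.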
Why does this remind me of that big, extremely profitable company which makes something every American needs every once in a while, and which seems to have abandoned all sanity in its processes? Looks like Intel and Boeing are on a similar path...
I know some people browse the web while gaming, but I don't. For the gaming use case, I legit want a toggle that says "yes, all the code I'm running is trusted, now please prioritize maximum performance at all costs." For all I care this mode can cut the network connection since I don't do multiplayer.
I imagine people doing e.g. heavy number crunching might want something similar.
I think that a lot of the heat in the debate between different error-handling strategies, and whether Exceptions, functional return types, or whatever are the best way, boils down to the fact that there are very, very different requirements.
In some sectors and domains, it is not a problem if a program fails and crashes. And it is also not a problem if a change in error handling makes an internal library backwards-incompatible (*). In such environments, errors are not costly, and it is more important to write code quickly than for it to always be correct. This is even more valid in distributed computing environments which have redundancy.
In other domains, a crash could be deadly (like in an autonomous car) or could cost millions of dollars (like with the Ariane rocket) - and backwards-incompatible changes in libraries might not be tolerable either (for example, in industrial automation, where core libraries are used for 20+ years).
(*) One point to consider which might not be obvious to most people: when extending a function which has, say, an enumeration as one argument, it is always possible to extend it by adding new enumeration values to its input parameters, just like one can add keyword arguments with a default value. This modification is always backwards compatible.
But backwards compatibility breaks if one adds enumeration values, or new kinds of errors, or new Exception types to the *output parameters* of a function - because the code of its clients must now be modified to handle all possible cases. If one takes semantic versioning seriously, this is a breaking change. And one can argue that in a statically-typed language used for safety-critical systems, the compiler should always catch that.
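A minimal Rust sketch of that asymmetry (the names `Format`, `ParseError`, and `parse` are hypothetical, purely for illustration): adding a variant to the input enumeration leaves existing callers untouched, while adding a variant to the error enumeration in the return type makes every exhaustive `match` in client code fail to compile until it is updated - which is precisely the breaking change, and precisely what a static type checker can catch.

```rust
// Hypothetical library types, purely for illustration.

// Input enumeration: adding a new variant later (e.g. `Yaml`) is backwards
// compatible - existing callers still compile unchanged, they simply never
// pass the new value.
pub enum Format {
    Json,
    Toml,
    // Yaml,   // <- could be added in a minor release
}

// Output enumeration: adding a new variant here is a breaking change.
pub enum ParseError {
    Syntax { line: usize },
    UnknownKey(String),
    // Io(std::io::Error),   // <- adding this would break the client below
}

pub fn parse(input: &str, _format: Format) -> Result<(), ParseError> {
    // A stand-in body; real parsing is beside the point here.
    if input.is_empty() {
        return Err(ParseError::Syntax { line: 1 });
    }
    Ok(())
}

fn client() {
    // An exhaustive match over the *output* type: if the library adds a new
    // ParseError variant, this match no longer covers all cases and the
    // compiler rejects it ("non-exhaustive patterns") until it is updated.
    match parse("x = 1", Format::Toml) {
        Ok(()) => println!("ok"),
        Err(ParseError::Syntax { line }) => eprintln!("syntax error on line {line}"),
        Err(ParseError::UnknownKey(key)) => eprintln!("unknown key {key}"),
    }
}

fn main() {
    client();
}
```

Rust's `#[non_exhaustive]` attribute is the usual escape hatch here: it forces clients to write a wildcard arm up front, which turns the later addition of an error variant back into a compatible change - at the cost of losing the exhaustiveness guarantee this argument relies on.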
> This is the major problem with most comparisons of config file formats: the actual semantic domain of a config file format is extremely limited [ ... ]
Scheme lacks most syntactic affordances that imply semantics. Even if some of those implications are dead wrong, they're still useful.
Personally I think the right answer for configuration files is to define them in terms of a generic object model. A program could even support multiple formats (TOML+JSON+YAML). If a user dislikes all the supported formats or the file is generated with something like NixOS, it can be handled with straightforward conversion.
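A minimal Rust sketch of that idea (assuming the serde, serde_json, toml, and serde_yaml crates; the `Config` fields and the `load_config` helper are made up for illustration): the program defines one object model, and each supported format is just a different parser that populates it.

```rust
// A sketch, assuming these dependencies in Cargo.toml:
//   serde = { version = "1", features = ["derive"] }
//   serde_json = "1"
//   toml = "0.8"
//   serde_yaml = "0.9"
use serde::Deserialize;
use std::path::Path;

// The generic object model: one struct, independent of any on-disk syntax.
#[derive(Debug, Deserialize)]
struct Config {
    listen_addr: String,
    worker_threads: usize,
    #[serde(default)]
    verbose: bool,
}

// Pick a parser by file extension; every branch yields the same Config.
fn load_config(path: &Path) -> Result<Config, Box<dyn std::error::Error>> {
    let text = std::fs::read_to_string(path)?;
    let config: Config = match path.extension().and_then(|e| e.to_str()) {
        Some("json") => serde_json::from_str(&text)?,
        Some("toml") => toml::from_str(&text)?,
        Some("yaml") | Some("yml") => serde_yaml::from_str(&text)?,
        other => return Err(format!("unsupported config format: {:?}", other).into()),
    };
    Ok(config)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Whichever of app.json / app.toml / app.yaml the user provides, the rest
    // of the program only ever sees a Config value.
    let config = load_config(Path::new("app.toml"))?;
    println!("{config:?}");
    Ok(())
}
```

Nothing format-specific leaks past `load_config`, so a file generated by something like NixOS only has to target any one of the supported syntaxes.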
Point taken, but it would seem the problem there is probably due to the arbitrarily placed mixins? My proposal was more for a single configuration object, just in whatever actual syntax you (or your team) prefer. If someone runs away with that and writes JSON that includes YAML that includes Python that generates configuration from what it found on the filesystem, responsibility for that needless complexity rests squarely on the shoulders of the new programmer.
Having syntactic affordances for every nuance of semantics is what led to the current zoo of formats. What is wrong with having trivial syntax and distinguishing semantics by labeling parts of the syntax tree with symbols?
Specifically with Scheme, I believe the main problem is that the symbols carry no distinction that would let you tell a function call from a syntactic form.
(Don't shoot the messenger, I've done my fair share of Scheme. I've also done a lot of thinking about why some people are so turned off by the syntax, and it's certainly not that the opening parenthesis is in a slightly different place in the prefix function calls.)