> I sympathize with the situation that Zed developers are in. They are thinking of the user experience first and foremost, and when trying to distribute on Linux, faced with an overgrown, chaotic landscape that utterly fails to provide the basic needs of application developers, such as the ability to distribute a binary that has no dependencies on any one particular distribution and can open a window and interact with the graphics driver, or the ability to require permissions from the user to do certain things.
But Linux does provide a very simple and easy way to do this — Flatpaks. They're completely distro-independent, allow you to package up and distribute exactly the dependencies and environment your program needs to run with no distro maintainers fucking with it, allow you to request permission to talk to the graphics drivers and anything else you need, and you can build it and distribute it directly yourself without having to go through a million middlemen. It's pretty widely used and popular, and has made the silent majority of Linux users' lives much better, although there's a minority of grognards that complain endlessly about increased disk usage.
Maybe I'm just old-fashioned, but I don't like Flatpak (or Snap or AppImage). They still don't seem to have solved all the desktop integration issues. I do not like running apps that bundle their own dependencies, because I don't trust the app developers to be on top of security issues. I trust Debian maintainers (despite mistakes in the past) to keep my system's base libraries up to date and patched. Why would I trust some random developers of some random app to do the same?
> Maybe I'm just old-fashioned, but I don't like Flatpak (or Snap or AppImage).
That's certainly your prerogative, and I hope traditional distro packages stick around — I think they will, since they are the basis of so much fundamental infrastructure. And I'm sure there will be a cottage industry of converting Flatpaks to .debs or .rpms in the future if Flatpaks become the dominant way of distributing GUI software :)
> They still don't seem to have solved all the desktop integration issues.
They haven't solved all of the issues yet, but while snaps and appimages are still struggling mightily, flatpaks seem to be making pretty good progress on that front, at least if you stick with modern Electron (not the old version Discord has!), Qt, and GTK applications. And I think generally all of the issues are solvable, and not only that, but solving them will leave the Linux desktop in a much better place than it was before, because we can build in broker-based sandbox permissions and things like making each GUI toolkit automatically use the native file picker of the user's desktop environment (something GTK4 and Qt5 support via the relevant Flatpak portal).
> I don't trust the app developers to be on top of security issues. I trust Debian maintainers (despite mistakes in the past) to keep my system's base libraries up to date and patched. Why would I trust some random developers of some random app to do the same?
I understand where you're coming from here and this is a common objection to sandbox packaging solutions, but I think there are a few problems with it.
First of all, Dependabot exists: all maintainers of Flatpaks need to do to keep their dependencies up to date is enable it for their application repository, keep an eye out for emails from the bot, and approve the automated pull requests when those emails show up. You can do it all from your smartphone! I've done it. Importantly, there would be absolutely no need to manually patch system libraries or backport patches, or any of that nonsense, if we didn't adhere to the distribution model of packaging, because then there would be no delay in releasing libraries: you could just get them directly from upstream, and there would be no point releases or anything of the sort. So a lot of the very appreciated and difficult work that distribution maintainers have to do every day is work made necessary by the distribution model of packaging in the first place. So yes, we'd be expecting application maintainers to keep their dependencies up to date, but that job would itself become much easier.
You might say that part of the distribution maintainers' job is to actually inspect library updates from upstream to find vulnerabilities or whatever, but there are far too many packages and dependencies for them to actually do that. I very highly doubt they are actually trawling through all of the code to try to spot vulnerabilities, and that seems like a job best left to the far greater number of much more knowledgeable eyes directed at open source libraries upstream.
This model doesn't just eliminate a lot of unnecessary work, either — it distributes the workload: now, instead of one team having to break themselves to keep every system library up to date, everyone shares the burden of keeping the libraries they use up to date. This does open up the possibility of lazy application developers not pressing the "fix my dependencies" button, to be sure, but I think the amount of dependency hell and cross-distribution portability problems that packaging dependencies with applications solves outweighs that concern. Security isn't the only consideration here; there are other practical considerations too. Otherwise, we'd all be using Qubes xP
Furthermore, it should be noted that many of the larger dependencies of Flatpaks, at least, are handled through platforms and platform extensions and SDKs, where bundles of interrelated dependencies are actually separate packages from the application Flatpaks, and thus can be updated by upstream independently. The key with them is just that they, too, like regular applications, become independent of any one distribution and capable of being maintained by upstream as a result, and you can also install multiple versions of them if necessary.
In the end, I think it's a trade-off. But I seriously don't think dynamic linking is a sustainable and sensible model, especially because of how much work it foists on one single team. It means keeping every package on your operating system in perfect lockstep so they all use the same version of a dependency, tying your system library versions, app versions, and the OS version itself into one big tangled ball of interdependency, where you can't upgrade application B because it shares a dependency with application A and upgrading would require a newer version than application A knows how to use. And it means continually backporting security patches from newer versions of that dependency to the version your system is still in lockstep with.
I appreciate all your comments in this thread. I wasn't aware of how competitive Flatpak was and I still haven't played with the technology - but I am more interested in it now.
Also for the record, I wouldn't have complained about them primarily linking to a Flatpak. It seems like a perfectly reasonable alternative to a shell script installation.
It seems to me the most neutral one is AppImage.
Flatpak is the favorite of "not-Ubuntu" people, and Snap is preferred only by Ubuntu, but it still has a huge user base due to Ubuntu's enormous market share.
Incidentally, I stumbled upon this post a few months ago and was deeply frustrated by it. The depiction of anyone who doesn't use this author's preferred tool for text editing as stubborn, backwards idiots too set in their ways to change, who simply don't know what this author's favorite tools could offer, reeks of condescension and hubris. It also ignores that many people switch from IDEs to things like Neovim and Emacs, so it can't just be people who've been using those editors for years and are too set in their ways to change — although I'm sure this author would have a snide dismissal for those people too, like accusing them of being hipsters who "just want to seem cool and hacker-like."
Meanwhile, the reality is that many of us who use these text editors are fully aware of the power that IDEs can provide. For example, I regularly use Android Studio for Android app development precisely because, for that kind of extremely complex build system and heavyweight language and framework, with that much boilerplate and that many forms to fill out, IDEs are hard to beat.
But, and this is what this person ignores, IDEs do come with trade-offs: greater complexity, higher resource usage, more lag and longer startup times, and inferior pure text-editing tools (and no, a vim plugin won't help). Most importantly, with a full-on IDE, you typically lose a whole lot of flexibility — IDEs are typically much less configurable and adaptable to different workflows, new features, or even just the various customizations that a user used to a regular text editor might want to make. IDEs are also typically highly coupled with the language and the build system they are designed for, such that they usually only work for a specific language or two. If you regularly use multiple languages for your job or hobby projects, then to use a full-fat IDE you'd have to install multiple versions, or somehow wrangle the IDE into supporting all of those languages simultaneously, which is a difficult prospect. Not to mention that it would be awkward and burdensome to attempt to use a full-fat IDE for something like word processing or note-taking because of how targeted they are at a specific language and its build environment, whereas a classical text editor is great at those things, because that is part of what they were designed to do before WYSIWYG word processors came along.
Meanwhile, in comparison, a text editor like Neovim or Emacs can be equipped with 70% of a full IDE's capabilities through things like (using Emacs packages here just because it's what I know) vterm, magit, treesit, lsp-mode, and dap-mode, plus per-language modes for the build system and such if needed. In return, even a relatively slow and heavyweight text editor like Emacs will give you lower resource usage, better performance, and much greater flexibility, both in customization and in the ways you can use it. And you can see that this is something plenty of people actually want, not just some stubborn, dyed-in-the-wool people who've been using vim or Emacs since the 70s or hipsters who want to use something old-school for cool points, because there are plenty of brand-new IDE-lite text editors, like Visual Studio Code and Zed, that people flock to in huge numbers! I certainly think that anyone who uses vim or Emacs completely vanilla, with no completion and code actions and stuff like that, may in fact just be a stubborn old-timer, but that isn't the majority of people using either of those editors anymore; most use them for an IDE-lite experience just like VS Code.
But most importantly, I think what the author of this post is fundamentally missing is that not all languages require a heavyweight IDE: in some languages and build environments you can reach productivity equivalent to an IDE user's with just a standard text editor, if your build environment doesn't require the equivalent of filing and submitting a stack of tax forms to function properly and your language isn't the equivalent of bureaucratic legalese. The full power of IDEs is only really made necessary by a certain type of problem and environment that isn't universal. IDEs are like huge, powerful pieces of construction equipment; sometimes you just don't need something like that.
I don't really care for the social aspect of goodreads, I just like to find other people's reviews and have a meta-data rich way to track what I've read and my TBR, and for that OpenReads is incredible. Pretty damn featureful, fetches metadata and reviews from various sources really well, beautiful modern Material You UI. Highly recommended
Setting aside dogmatic complaints about failing to adhere to the purity tests of a hokey old religion from the seventies, in what way is Linux actually worse? Also, assuming you're referring to it getting worse in the usual ways *nix grognards like to complain about, how is any of that "in the same ways proprietary stuff is bad"? If anything, Windows, for instance, leans even more heavily on backwards compatibility than old Linux, not less.
I know I'm going to lose points for pushing back on the pervasive HN Red Hat-bashing, but this doesn't hold up. Wayland is being pushed by Red Hat to make GNOME the only stable option? Except that Wayland is perfectly fine (many would argue better than it is under GNOME!) on Sway, KDE, etc. KDE has more bugs than GNOME, but that's independent of Wayland; it's just because KDE's design philosophy is "hardcode every feature and option imaginable," and that leads to it being impossible to QA. Anyway, this is just conspiracy-theory bullshit. I swear to god Red Hat is the Soros of Linux for a certain type of guy.
Watching them a long time, too many coincidences. Looks like fire-and-motion, make yourself the standard then make it hard to deviate or to keep up. If it’s not intentional, it’s incredibly damn convenient.
Why do any of these standards make it harder for other distributions and desktop environments to keep up with them? wlroots exists, and in many people's minds is much better than GNOME with Wayland. This is really strange thinking.
Also, it isn't them making themselves the standard. It's independent distributions choosing to use what they produce (and not all of them do, either). Presumably, the maintainers and packagers who make those choices are aware of these technical considerations, and capable of rejecting Red Hat's tech in favor of whatever hoary stack you prefer if it made it harder for them to "keep up." That seems to be something that is perfectly possible to do while still producing a usable distro, and it seems like something Linux distributions are quite good at — ignoring what corporate operating systems are doing and forging their own path. Maybe it's because the technologies you are labeling as Red Hat technologies actually offer substantial improvements and push the cutting edge of the Linux desktop forward in a meaningful way, bringing it closer to the capabilities of a modern operating system?
So a [gigantic meta-analysis](https://www.sciencedirect.com/science/article/pii/S014976342...) of thirty years of studies on sexed structural and functional differences in human brains found zero evidence of any differences, a completely overlapping distribution, but as soon as "big data AI" is used, suddenly not only are there differences, there's literally zero overlap? Count me suspicious. I think I'm going to trust the meta-analysis of 30 years' worth of wide-ranging scientific study over the brand-new study that's just throwing whatever fad is currently in vogue at the problem to see what happens.
I have to assume you haven't bothered reading this, because section 2.3 points out flaws in the methodology of the studies looked at, which the methodology in this study kind of tries to address (whether or not it does a good job of that is left for everyone else to figure out). You shouldn't dismiss a result out of hand because it doesn't fit preconceived notions, but it's absolutely a reason to try to dig into the methodology of the new study and make sure it's not flawed.
That said, this meta-analysis is also filled with some crazy statements. It seems to imply that sexual dimorphism is only really visible in reproductive organs, yet women necessarily have wider hips to facilitate childbirth, among other differences.
This obvious point should have also been noted when comparing differences in organ mass, since mothers of babies with larger heads are more likely to die so this is selected against. Not an issue with lungs, heart etc., hence larger % differences in sexes there.
These aren’t egregious omissions in and of themselves, but it’s certainly useful context I’d like to have were I not familiar with sexual dimorphism.
The dismissiveness of a 1.6-fold increase in SDN size in human males compared to human females is bad. That's enormous! Not something I would prepend with "only" and repeatedly call "small", even when not comparing the differences between M/F humans and M/F rodents.
Bizarre that none of the authors objected to this phrasing, because it’s poisoned reading the rest of this paper for me. How am I meant to trust the authors’ opinion of what a “small” difference is?
Some of the points are a bit more compelling, like in section 5.1 where they point out that a difference attributed to M/F was replicated in much smaller size by concentrating on volume instead, or in 5.2 where they point out a few papers that missed crucial nuance.
But overall after reading a few thousand words of this, the nicest thing I can say about it is that I agree that it is indeed gigantic.
Update: had a friend with access send me a PDF of the study and looked through it. It seems that the big breakthrough is only half AI — the other half being looking directly at time series of fMRIs instead of static images with features in them manually selected for relevance, because how the various circuits in the brain operate and circulate over time is important information. Also they got this to replicate well with the same people at different times, and also generalize to two other cohorts, consistently, and also used XAI to check what the AI was keying off, to make sure it wasn't going off something nonsensical, and directly used those features with success as well. It seems like an extremely carefully controlled and designed study tbh.
Without making any claims about gender or non-binary people (not my wheelhouse, I simply don't know), there's ample evidence to suggest statistically significant population-level differences between males and females on many cognitive measures.
I don't see how it's surprising that a new generation of signal-detection tool finds population-level differences in the brains that produce these cognitions.
I think the linked news article is a little misleading, although I share your skepticism. I'd like to see these results replicated rigorously on further new sets of data by independent researchers; I wouldn't be surprised either way, whether the results did or did not replicate.
However, the news article seems to spin this as "male and female brains are totally different entities that bear no relationship to one another." Although I haven't reread it carefully, it seems like the article is saying something more like "you can identify gender-specific patterns, and those gender-specific components relate to things like cognitive ability in gender-specific ways." It's not that you can't find overlap — that just wasn't the focus of the study — it's that if you go looking for differences, you can find them.
It seems to me that in order for male and female brains to be functionally the same, they would need to be physically different to account for the extreme hormonal differences.
When you give a man a female dose of hormones or a woman a male dose of hormones, it has a very big effect on their mood, behavior, and mental wellbeing. This change is much, much bigger than the average differences we see between men and women. For example, an average man with an average woman's level of testosterone will experience a MUCH higher level of depression, listlessness, and sexual disinterest than the average woman experiences.
This strongly implies that human brains must correct for these huge hormonal differences. Basically, in order for male and female behavior to be similar, their brains must differ. If their brains are the same, then hormones will have a much, much bigger influence on male and female behavior than what we actually see in reality.
Hormone-correcting brain differences would also imply that it's possible for people to be born with some type of intersex brain condition, and that these individuals would benefit greatly from receiving hormone therapy to bring their hormone levels in line with their brains. And this, indeed, seems to be something we see occasionally.
(In case anyone cares or thinks it is relevant, I wish to note that I am a cisgender woman and I do not think that there are huge innate differences in men's and women's mentality — certainly nothing like on the level that testosterone/estrogen/etc levels would predict. I think most of the differences we do see are environmental, which is why these gaps have been closing in recent history — or widening in some cases/locations. Based on these trajectories, I suspect that men and women are actually FAR more similar than anyone natively groks, and that we exaggerate or invent small differences due to a biological hyperfixation with sex. Note that we don't obsess about the mental differences between, say, male and female cats, even though they have much greater sexual dimorphism than humans do.)
> For example, an average man with an average woman's level of testosterone will experience a MUCH higher level depression, listlessness, and sexual disinterest than the average woman experiences.
Is this true even if they were to have an average woman's level of estrogen? It may be that the brain needs either set of hormones to work effectively, and doesn't work well when lacking both (of course, gender transition HRT aims for this, but pretty much all undergoing it are trans and so aren't a good indication of average reaction to hormones for their chromosomal sex. And a cis person undergoing the same HRT isn't likely to enjoy the process)
When your study contradicts reality, your study is wrong. That there are major mental difference between men and women is obvious to anyone who interacts with both.
If your study can't find those differences it's your study that's wrong, not that the differences don't exist.
That's interesting. So if a study is counter intuitive to "common sense," it's the study that's wrong? Especially since they're talking about structural and functional neurological differences, whereas the "everyday common sense" differences you're gesturing at could be due to other things, so they aren't necessarily in contradiction.
> So if a study is counter intuitive to "common sense," it's the study that's wrong?
Possibly, yes. But "common sense" is a very weak way of describing things here, because common sense is usually used to describe intuition, not knowledge. We are talking here about knowledge - men and women are different mentally, this isn't something unknown, or something requiring a study.
> since they're talking about structural and functional neurological differences
Given that there are obvious mental differences, and that most people don't believe in dualism, that would imply there have to be neurological differences; if your study didn't find them, your study is faulty. Either that, or it's evidence for dualism.
> We are talking here about knowledge - men and women are different mentally, this isn't something unknown, or something requiring a study.
The whole point of science is to double-check this sort of thing, the stuff "everyone knows" that seems obvious to casual observation, like that the earth is flat. Plus you're simply asserting and re-asserting that this is capital-K Knowledge, without providing any reasons, just banging the podium. That's not very convincing. This is anti-intellectualism.
> Given that there are obvious mental differences, and that most people don't believe in duality, that would imply there have to be neurological differences, if your study didn't find them, your study is faulty. Either that, or it's evidence for dualism.
Or maybe the differences are very social and contextual — men and women's brains largely operate the same way and are structured the same, but give different outputs because they're given different inputs — and the impacts of those social and contextual inputs are simply too fine-grained to show up.
If you make a study that finds that plants don't need water to grow your study is wrong. It's like that here, this isn't one of those things that needs extra evidence or debate.
> This is anti-intellectualism.
And what epithet would you give to those who pretend there is no difference?
> Or maybe the differences are very social and contextual
This is certainly an interesting idea (and one I've heard before). But you have to prove it. You can't just declare "I found no differences, therefore it must be a social difference".
The standard of evidence to declare this is very high, it requires actual proof not just a well reasoned argument.
And you will have to somehow explain all the evidence against this. For example, in Nordic countries they found that if the household is given enough money, the wife prefers to stop working and stay home. This went against their idea that if men and women are paid the same and supported identically, women will want to work. Instead they found women actually preferred to be home, and only wanted to work if they had to (i.e. not enough money). Men, however, showed the opposite result.
So their goal of equal employment would require them to reduce support. I'm not sure what the government chose to do.
> are simply too fine-grained to show up.
Then your study is faulty and you should declare "invalid study", not "valid study we found nothing".
It’s not the first study to find differences. And that’s not the only meta study in existence. I don’t see why one would be surprised given other literature around the subject.
BTW, note how the article you cited doesn’t argue against differences between male and female brains but makes a rather pedantic point about “dimorphism”.
On the one hand, it is true that Emacs is more of a Lisp runtime with a text-oriented Lisp Machine GUI application development toolkit and a powerful shell (or Perl?)-level standard library for manipulating files, directories, text, and other processes built in, with an editor as a mere example application, and that as a result, its capabilities are far broader and more powerful than just a simple text editor, and that this is borne out in both how the community tends to use it, and in the ecosystem of tools and applications around it.
On the other, I think this meme of dismissing it out of hand for that is just... wrong. It's perfectly possible to use Emacs like it's just a very good text editor and nothing more, if that's how you want to use it; I used it that way for years, because the operating-system aspects don't have to intrude on your use of it at all if you don't want them to, unlike with IDEs. And it isn't bigger or more resource-hungry than many modern editors like Kate for those extra capabilities it offers: they aren't even loaded into memory unless you use them, and when they aren't loaded, they're stored as gzipped plain text. The installed size of the "emacs" meta-package on Debian is 51 kilobytes, the installed size of the "kate" meta-package is 8,000 kilobytes, the largest memory usage I've ever seen Emacs take up as a text editor on my computer is 230 megabytes (almost certainly smaller than Kate), and Emacs having copious capabilities outside of being a strictly focused text editor doesn't have to affect how you use it day to day if you don't want those features. So the only way in which this "bloat" could serve as a reason to dismiss Emacs is on the basis of dogma, since it "being an OS" doesn't make it use any more resources than something you're perfectly happy to accept. The people who dismiss Emacs on this basis can't really seem to outline how it affects them that someone else could use Emacs as an IRC client. They just seem to be motivated by an abstract, obsessive-compulsive need to adhere to a certain philosophy: using a program with many features, or the capability to have many features, that they don't use would bother them as a matter of principle for some reason.
And the kicker is that the direction in which this person is trying to force Kate is exactly the kind of use by Emacs users that gets Emacs accused of being an operating system: extending it to have functionality beyond that of a pure text editor, into the domain of note-taking and agenda management and reminders and so on. So clearly you do want an "OS," not a text editor, anyway! But for some reason you're doing it with a bunch of hacky Python and shell scripts that are fragile and difficult to set up, don't integrate very well with the editor, and offer a tiny fraction of the features that the real versions in Emacs do. (For instance, that agenda view is just a static printout of TODO items. You can't click on them to jump to where they are in their file, toggle their progress state in the agenda view, clock in and out of them from that view, or search through them based on tags or content or status or due date, and it'd be very difficult to integrate such functionality into Kate.)
I'm not saying that this person should switch to emacs or even that they are in any way obligated to check it out if they are happy with Kate and it allows them to get the work they want to do done. I'm just saying that despite how sort of technically true this saying is, when it's used as a reason to dismiss emacs, it's just kind of dogmatic and shortsighted and that makes me kind of sad.
My problem with your article is that it seems to operate on the misconception that someone must completely understand from top to bottom the entire stack of abstractions they operate atop in all of its gory and granular detail in order to get anything done, and so a larger tower of abstractions means a higher mountain to climb up in order to do something.
But that's simply the opposite of the case: the purpose of abstractions — even bad abstractions — is detail elision. Making it so that you don't have to worry about the unnecessary details and accidental complexity of a lower layer that don't bear on the task you have at hand and instead can just focus on the details and the ideas that are relevant to the task you are trying to achieve. Operating on top of all of these abstractions actually makes programming significantly easier. It makes getting things done faster and more efficient. It makes people more productive.
If someone wants to make a simple graphical game, for instance, instead of having to memorize the memory map and instruction set of a computer, and how to poke at memory to operate a floppy drive with no file system to load things, they can use the tower of abstractions that have been created on top of hardware (OS+FS+Python+pygame) to much more easily create a graphical game without having to worry about all that.
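To make that concrete, here's a minimal sketch of roughly what I mean (assuming pygame is installed; the window size, colors, and movement speed are just arbitrary values picked for illustration):

```python
# A window with a square you can move with the arrow keys, in a few dozen
# lines, with no memory maps or drive controllers in sight.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))   # OS, driver, and SDL handle the actual surface
pygame.display.set_caption("Abstraction demo")
clock = pygame.time.Clock()

x, y, speed = 100, 100, 4
running = True
while running:
    for event in pygame.event.get():           # the windowing system delivers input events
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    x += speed * (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT])
    y += speed * (keys[pygame.K_DOWN] - keys[pygame.K_UP])

    screen.fill((30, 30, 30))                  # clear the frame
    pygame.draw.rect(screen, (200, 120, 40), pygame.Rect(x, y, 40, 40))
    pygame.display.flip()                      # hand the finished frame off to be displayed
    clock.tick(60)                             # cap at roughly 60 FPS

pygame.quit()
```

Every line of that leans on the OS, the filesystem, the display stack, the Python runtime, and pygame's wrapping of SDL underneath, and the point is precisely that none of those details need to be in your head to get a square moving around on the screen.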
Yes, the machine and systems underneath the abstractions are far more complex, and so if you set out to try to completely fit all of them in your brain, it would be much more difficult than fitting the entirety of the Commodore 64 in your brain, but that greater complexity exists precisely to free the higher layers of abstraction from concerns about things like memory management and clock speeds and so on.
So it's all very well and good to want to understand completely the entire tower of abstractions that you operate atop, and that can make you a better programmer. But it's very silly to pretend that everyone must do this in order to be remotely productive, and that therefore this complexity is inherently a hindrance to productivity. It isn't. Have we chosen some bad abstractions in the past, and been forced to create more abstractions to paper over those bad abstractions and make them more usable, because the original ones didn't elide the right details? Yes, absolutely we have. I think a world in which we abandoned C Machines and the paradigm where everything is an opaque black-box binary that we control from a higher-level shell but have no insight into, and instead iterated on what the Lisp Machines at the MIT AI Lab or D-Machines at Xerox PARC were doing, would be far better, and would allow us to achieve similar levels of ease and productivity with fewer levels of abstraction. But you're still misunderstanding how abstractions work, IMO.
Also, while I really enjoy the handmade movement, I have a real bone to pick with permacomputing and other similar movements. Thanks to influence from the UNIX philosophy, they seem to forget that the purpose of computers should always be to serve humans, and not just serve them in the sense of "respecting users" and being controllable by technical people, but in the sense of providing rich feature sets for interrelated tasks and robust handling of errors and edge cases behind an easy-to-access and easy-to-understand interface. Instead they worship at the feet of simplicity and smallness for their own sake, as if what's most important isn't serving human users but exercising an obsessive-compulsive drive toward software anorexia. What people want when they use computers is a piece of software that will fulfill their needs and enable them to frictionlessly perform a general task like image editing or desktop publishing. That's what really brings joy to people and makes computers useful. I feel that those involved in permacomputing would prefer a world in which, instead of GIMP, we had a separate program for each GIMP tool (duplicating, of course, the selection tool in each as a result), and when you split software up into component pieces like that, you always, always, always necessarily introduce friction and bugs and fiddliness at the seams.
Maybe that offers more flexibility, but I don't really think it does. Our options are not "a thousand tiny inflexible black boxes we control from a higher layer" or "a single gigantic inflexible black box we control from a higher layer." And the Unix mindset fails to understand that if you make the individual components of a system simple and small, that just pushes all of the complexity into a meta layer, where things will never quite handle all the features right, never quite work reliably, and never quite achieve what they could have if the pieces you compose things out of were more complex. A meta layer (like the shell) always operates at a disadvantage: the amount of information that the small tools it is assembled out of can communicate with it and with each other is limited both by their being closed-off, separate things and by their very simplicity, and the adaptability and flexibility those small tools can present to the meta layer is also hamstrung by this drive toward simplicity, not to mention the inherent friction at the seams between everything. Yes, we need tools that are controllable programmatically and can communicate deeply with each other, to make them composable, but they don't need to be "simple."
> Operating on top of all of these abstractions actually makes programming significantly easier. It makes getting things done faster and more efficient. It makes people more productive.
Only if those abstractions are any good. In actual practice, many are fairly bad, and some are so terrible they not only fail to fulfill their stated goals, they are downright counterproductive.
And most importantly, this only works up to a point.
> Yes, the machine and systems underneath the abstractions are far more complex, and so if you set out to try to completely fit all of them in your brain, it would be much more difficult than fitting the entirety of the Commodore 64 in your brain, but that greater complexity exists precisely to free the higher layers of abstraction from concerns about things like memory management and clock speeds and so on.
A couple of things here. First, the internals of a CPU (to name but one) have become much more complex than before, but we are extremely well shielded from that through its ISA (instruction set architecture). Some micro-architectural details leak through (most notably the cache hierarchy), but overall, the complexity exposed to programmers is orders of magnitude lower than the actual complexity of the hardware. It's still much more complex than the programming manual of a Commodore 64, but it is not as unmanageable as one might think.
Second, the reason for that extra complexity is not to free our minds from mundane concerns, but to make software run faster. A good example is SIMD: one does not simply auto-vectorise, so to take advantage of those and make your software faster, there's no escaping assembly (or compiler intrinsics).
Third, a lot of the actual hardware complexity we do have to suffer, is magnified by non-technical concerns such as the lack of a fucking manual. Instead we have drivers for the most popular OSes. Those drivers are a band-aid over the absence of standard hardware interfaces and proper manuals. Personally, I'm very tempted to regulate this as follows: hardware companies are forbidden to ship software. That way they'd be forced to make it usable, and properly document it. (That's the intent anyway, I'm not clear on the actual effects.)
> I think a world in which we abandoned C Machines and the paradigm where everything is opaque black box binaries that we control from a higher level shell but have no insight into, and instead iterated on what the Lisp Machines at the MIT AI Lab or D-Machines at Xerox PARC were doing, would be far better, and would allow us to achieve similar levels of ease and productivity with fewer levels of abstraction.
I used to believe in this idea that current machines are optimised for C compilers and such. Initially we could say they were. RISC came about explicitly with the idea of running compiled programs faster, though possibly at the expense of hand-written assembly (because one would need more instructions).
Over time it has become more complicated though. The prime example would again be SIMD. Clearly that's not optimised for C. And cryptographic instructions, and carry-less multiplications (to work in binary fields)… all kinds of specific instructions that make stuff much faster, if you're willing to not use C for a moment.
Then there are fundamental limitations such as the speed of light. That's the real reason behind cache hierarchies that favour arrays over trees and pointer chasing, not the desire to optimise specifically for low(ish)-level procedural languages.
Also, you will note that we have developed techniques to implement things like Lisp and Haskell on stock hardware fairly efficiently. It is not clear how much faster a reduction machine would be on specialised hardware, compared to a regular CPU. I suspect not close enough to the speed of a C-like language on current hardware to justify producing it.