> iOS-type (GNOME, Pantheon).

I really don't understand why people keep pushing this misconception. GNOME certainly doesn't have to be for everyone, but this canard is needlessly dismissive and based on a childishly superficial understanding of GNOME. Just because there's a full-screen application menu, and the desktop is kept free of extraneous UI clutter so you can focus on your task and use the keyboard to control windows and navigate, that makes it "iOS-like"? At that point, what does that even mean?

If anything, I'd say GNOME is far closer to a tiling window manager. Again, I'm not saying this workflow has to be for you, but it is a consistent, desktop-class workflow of its own: a keyboard- and virtual-desktop-centric workflow focused on filling the screen with applications, via full screens or splits, so you can focus on your work, and on dropping the UI elements that virtual desktops make obsolete and that only stick around because people are used to them, like taskbars, docks, and minimize buttons. The logic of this is excellent IMHO. There is no need for a taskbar to manage minimized or occluded windows if you can just banish windows to other virtual desktops to get them out of the way, yet still see them immediately in a multi-desktop expose view and a fuzzy window search, with new desktops fluidly created for you as you fill them up; no minimize button, for the same reason; no need for a dock or pinned application icons, because an extremely powerful Spotlight-style fuzzy application launcher (with built-in app menu search, calculator, web search, file search, definition lookup, etc.) and a dock-like dash, conveniently arranged alongside a very useful expose and workspace overview on a single screen, are one click or key tap away. It seems to me people just assume that since GNOME doesn't have things they're used to from UIs they associate with "work", it must be a "toy."


I don’t disagree with you, but I believe I know why there is antipathy against GNOME. GNOME 3 isn’t directly inspired by iOS, but it came about during a time when some people thought that traditional Windows- and Mac-like desktops were outmoded and that the industry should move toward more mobile-friendly user interfaces. This led to Windows 8, Ubuntu’s Unity, and the gradual inclusion of iOS elements in macOS. Generally detractors of GNOME 3 are also detractors of these desktops.

Alongside other controversial changes in Linux such as systemd and Wayland, I think some of the antipathy toward GNOME 3 is caused by resentment and the strong network effects these projects have. If GNOME had little influence over other projects, these strong feelings would’ve been much weaker. However, GNOME is one of the most influential projects in the FOSS desktop computing ecosystem, with major consequences even for people who don’t use GNOME. For example, GTK over the years went from a generic toolkit to a much more GNOME-specific one, with consequences for developers and users of GTK-based software who were not fond of GNOME’s major changes beginning with GNOME 3. GNOME also drove the adoption of systemd and Wayland, which is also a source of consternation.

Yes, GNOME developers have the freedom to do what they want with their own software. But there is resentment among some users regarding GNOME’s influence and how the goals of GNOME, Red Hat, and other major players in the Linux ecosystem don’t always align with the Unix philosophy. Change is hard for some people to accept, especially changes people feel are for the worse, whether or not they actually are.


One thing I think Unity got more right than others was that not only did it not try to sweep its menu bars into hamburger menus, it actually leaned into them with its design and the HUD, which instantly made every program more keyboard-accessible even for those who didn’t know the shortcuts. More DEs should copy that aspect.


Please excuse this incredibly long post. Your post was insightful and polite, so I wanted to respond as thoughtfully as I think it deserves! :D

---

The knee-jerk reaction to the new GNOME interface was, I agree, understandable at the time, given the Zeitgeist: if you're watching a lot of interfaces become, in your eyes, less usable in favor of chasing largely wishful thinking, you might well react strongly to the smallest sign of it elsewhere. However, I think this reaction was ultimately misguided then, and is even more so now, sustained more by resentment that has long since become self-perpetuating than by anything else, which is what I felt I was responding to originally.

The concessions to touch-oriented devices in the design of GNOME 3 are mostly superficial, skin-deep changes that are perfectly usable on the desktop, perfectly explicable by concerns other than mobile, and more than made up for by a focus on keyboard navigation (a uniquely desktop-oriented trait).

For instance, if you're scrolling through your applications looking for one, you're not going to have your attention on anything else on the screen, so why should the menu take up only a small portion of the screen, instead of filling the whole screen and showing you more results?

Or, similarly, look at title bars: all GNOME did was essentially merge the tab bar, action-button bar, and title bar into one. That move necessitated larger title bars to keep button sizes in line with those in the application window itself, which makes sense, since they now ARE application buttons; but in return it unified a number of often-stacked UI bars with overlapping purposes into a single cohesive entity, probably decreasing the overall space taken up by the header area of many applications. In light of this change, moving menus into a hamburger also makes perfect sense: if every application now has an action bar, the common menu-bar actions will be covered there, arguably more discoverably and conveniently than before. Being forced to make room for those action buttons by putting the rest of the menus behind a universally understood, common icon (adding at most one click, and likely being a wash, since hamburger menus in GNOME tend to be flatter) is a small price to pay, especially since, again, it also helps merge two bars into one, actually saving space.

The network-effects criticism seems like a much more reasonable one to me, but I still don't find it particularly convincing, and I suspect it often serves as a rationalization for self-perpetuating resentment and fear of change.

First of all, I don't see the problem with various projects taking advantage of each other's functionality where it makes sense. In fact, I think one of the major things that was holding the Linux desktop back is the heretofore all-too-common focus on making everything a completely generic and replaceable component that doesn't really integrate well with, or fully use the features of, anything else. Instead, I think it's a very good thing to have a relatively unified, well-integrated Linux platform all the way from the kernel up to the DE, as long as it allows things to be switched out within reason, has a unidirectional flow of dependencies downward (so no kernel depending on a window manager), keeps each platform component managed as a separate entity with reasonably defined interfaces, and allows alternative platforms to exist. Indeed, the fear of a unified platform stack being merely extant or popular, as if its mere existence would invert dependency chains and make our systems completely brittle and unyielding, or erase the existence of alternative Linux distributions that use different stacks (how could you put a stop to that anyway? I think the Gentoo and Void people are far too hardy a breed for that <3), is I think based on paranoia and conspiratorial thinking.

Secondly, I also think the network effect is greatly overstated. It's pretty clear that the lower-level systems in the FreeDesktop stack can work perfectly well without the higher-level ones, meaning that as you approach the parts of the system that actually affect users, modularity grows more and more; and even the higher-level ones can often work perfectly well without the lower ones. For instance, Wayland and GNOME both work perfectly well without systemd as far as I know, considering that they both run fine on Void Linux. So the FreeDesktop stack does indeed meet the criteria I mentioned in the first point.

Thirdly, I think the opposition to the specific programs that make up the FreeDesktop (let's call it) stack is mostly based on hidebound traditionalism and fear of change, an almost cargo-cult-like devotion to the "UNIX philosophy" (most strongly to be seen amongst the suckless crowd, who judge the quality of software by how few lines it contains, irrespective of its problem domain and features, and set themselves arbitrary line limits). Conversely, I tend to believe that "doing one thing well" is oft misunderstood by that crowd. IMO it often requires completely and holistically solving a problem from first principles, instead of writing a beautiful, minimal art piece in C that solves only about 80% of the problem (myopically defined), does so by relying on a zoo of tools that only half work for that solution (like relying on shell scripts for an init system), and leaves the last 20% of edge cases, nice-to-have features, and the rest of the surrounding problem unsolved, so that you have to cobble together a whole other zoo of tools to deal with it. The fear of solving problems holistically is mostly just cargo-cult devotion to one of many great software traditions. In essence, I subscribe more to the "GNU philosophy", and agree with Rob Pike when he said that the Unix philosophy is dead and Perl killed it.

Fourth, even if there were a tight interconnection between those platform components, that wouldn't really constitute a network effect, just a dependency chain. A network effect would be some means of keeping distro maintainers or users (depending on which perspective you take) on that stack as opposed to any other, but as far as I know there's not really much of that at all; it just so happens that that stack is the most polished and featureful for users, and the most powerful and easiest to maintain for distro maintainers and system administrators. It seems perfectly possible to switch to an entirely different one if you so choose, as demonstrated by distributions like Void Linux (a real best-of-breed for its type of distro, and it looks pretty damn nice if you wanna graybeard it, IMO).

[Aside: this is especially the case since someone who objects to one part of the stack is likely to object to all of them, so having to give up the others in order to give up the one they most strongly object to isn't likely to be any big misery for them. Someone who doesn't like systemd or Wayland probably wouldn't want to use GNOME anyway; at most they'd probably be using MATE.]

So really it doesn't seem like this complaint is about network effects at all, but about the very idea of dependencies between things, which just goes back to my earlier comment about this obsession with modularity to the detriment of making a featureful, reliable system that is actually capable of taking advantage of its own strong points. For instance, why shouldn't an init system take advantage of the unique features and capabilities of its kernel? The idea that everything should be written in as generic a manner as possible is so strange to me. There are complaints about that, and about the simple fact that the FreeDesktop platform dominates the Linux world, but the latter is more a function of its quality and the amount of work that goes into it, in my opinion.

--- Signed, a happy Fedora Silverblue + nushell + Emacs user.


Thank you for your response. As someone who leans more traditionalist in my computing preferences, this is one of the most thorough and thought-provoking defenses of the GNOME and modern Linux ways of designing systems I’ve read.


“iOS like” is not meant to be an insult. I’ve spent a considerable amount of time using GNOME and I find that iPadOS is its closest analogue: to me, it feels like what you’d get if someone were tasked with taking iPadOS and making it workable for desktop usage. There are a great many similarities between the two.


Fair enough! I've never used iPadOS so I wouldn't really have any means of evaluating that claim, but the way you frame it seems perfectly reasonable, since it seems like you're granting that GNOME is at least adapted for desktop use. I definitely have seen that statement leveled in a dismissive way, though, quite possibly by people who have never used iPadOS or even regular iOS, so I was assuming it was that again :P


> "iOS like"? At that point what does that even mean?

optimized for touch screens


> Honestly I’m really tempted to try to throw together a 90s style fantasy desktop environment and widget library and make some apps for it. There’s something about that era of computing that feels great.

If you're interested in that, you should definitely check out Serenity OS!


The store will be fine.


Barneys New York is now closed; it was not fine.


How about he steals your suit next? You'll likely be fine as well.


How very Christian. We're all "slaves to sin" and "idols", where "sin" and "idols" just mean "anything people like doing", because it's hard to stop doing something you like.


It means anything that destroys and leads to misery. Hopefully you see that someday.


How very droll and condescending of you. The Golden Rule is to treat others how you'd like to be treated, so I can only assume you want to be treated in a droll and condescending manner in turn. Allow me to oblige:

A: What is sin?

B: Anything that destroys and leads to misery.

A: Do X or Y actually destroy or lead to misery? Or is it moral panic, pseudoscience, misery caused more by the socially enforced guilt that comes from believing X or Y is bad, or some combination of the above?

[B and A engage in an interminable discussion, and eventually B's Heritage Foundation links are exhausted]

A: it doesn't seem clear that any of these things "destroy" anything, or inherently cause misery.

B: Well X or Y must be bad, because they're sin, so they seem gross and bad to me. There must be hidden detriments that scientists are unwilling to admit, or just haven't gotten around to proving yet. Just wait, when you grow up you'll think just like me.

A: maybe.


You sound like a sad sexless man.


I am as much of a Rust shill as you'll ever meet, but I agree that there is something beautiful and alluring and simple and engaging about C that few other languages match. It's basically an advanced macro assembler for an abstract machine, so there's all of the allure of using 6502 or 68000 assembly language but with none of the portability problems, and a vast ecosystem of libraries and amazing books to back it up.


I've enjoyed writing a few projects in x86-64 assembly as well, for what it's worth. Even though I'm sure that any C compiler would generate better assembly than my handwritten one. Flat assembler is great, by the way.


Any C compiler can generate better assembly for a function. But there are often whole-program optimizations you can make that the C compiler isn't allowed to do (because of the ABI/linker).

For example, a Forth interpreter can thread its way through "words" (its subroutines) with the top-of-stack element in a dedicated register. This simplifies many of the core words; for example, "DUP" becomes a single x86 push instruction, instead of having to load the value from the stack first. And the "NEXT" snippet which drives the threaded execution can be inlined in every core word. And so on.

You can write a Forth interpreter loop in C (I have), and it can be clever. But a C compiler can't optimize it to the hilt. Of course it may not be necessary, and the actual solution is to design your interpreted language such that it benefits from decades of C compiler optimizations, but nevertheless, there are many things that can be radically streamlined if you sympathize with the hardware platform.
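To make the top-of-stack trick concrete, here's a minimal token-threaded sketch in C (a rough illustration of mine, not anyone's production Forth): `tos` plays the role of the dedicated register, DUP is a single store, and the switch dispatch stands in for NEXT. Hand-written assembly can pin tos to a real register and inline the dispatch after every word; a C compiler constrained by the ABI won't promise either.

    /* Minimal token-threaded interpreter sketch: 21 DUP + .  =>  prints 42 */
    #include <stdio.h>

    enum { OP_LIT, OP_DUP, OP_ADD, OP_DOT, OP_BYE };

    static long stack[64];
    static long *sp = stack;  /* data stack, below the cached top-of-stack */
    static long tos;          /* cached top of stack; a pinned register in asm */

    int main(void) {
        static const long prog[] = { OP_LIT, 21, OP_DUP, OP_ADD, OP_DOT, OP_BYE };
        const long *ip = prog;            /* instruction pointer into threaded code */

        for (;;) {
            switch (*ip++) {              /* this fetch-and-dispatch is NEXT */
            case OP_LIT: *sp++ = tos; tos = *ip++;          break; /* spill tos, load inline literal */
            case OP_DUP: *sp++ = tos;                       break; /* tos unchanged; in asm: one push */
            case OP_ADD: tos += *--sp;                      break;
            case OP_DOT: printf("%ld\n", tos); tos = *--sp; break; /* print and drop */
            case OP_BYE: return 0;
            }
        }
    }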


> "Memory unsafe languages" is maybe one percent of one percent of the problem.

Multiple distinct large-scale software projects have found that 60-70% of severe CVEs are due to memory safety violations [1]. The White House has called for projects to use memory-safe languages [2]. The Android project has seen a substantial drop in security vulnerabilities concurrent with its rapid shift to memory-safe languages in new code, with the correlation so tight, and the prior number of vulnerabilities so constant, that they're forced to conclude that memory-safe languages have helped [3]. So your claim that memory-unsafe languages are maybe 1% of 1% of the problem is not only completely unsubstantiated, but almost certainly false given all of the available information.
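To be concrete about the bug class those reports are counting, here's a contrived use-after-free (my own toy example, not taken from any of the linked reports) that any C compiler accepts without complaint; a borrow checker or a garbage collector rules this pattern out entirely:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(16);
        if (!name)
            return 1;
        strcpy(name, "alice");

        free(name);               /* lifetime of the allocation ends here... */
        printf("%s\n", name);     /* ...but we still read through the dangling
                                     pointer: undefined behavior, a classic CVE shape */
        return 0;
    }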

And your jab, presumably at Rust, for being a "fad" similarly holds no water. It is the only language that has actually offered a practical means of eliminating memory safety violations at compile time, statically, without needing a runtime or garbage collector or giving up zero-cost abstractions, meaning it is the only relatively memory-safe language with a solid shot at working in the fields where C and C++ are ordinarily used. That really doesn't seem like a fad to me. Or stupid.

As I've said many a time, this sort of denial often seems like the cantankerous lashing out of someone who doesn't want to learn something new and can't be bothered to look past the occasionally superficially annoying antics of a community to see the actual technical merits of the software, and perhaps even can't stand to be confronted with the fact that their hard-won knowledge of a needlessly difficult language might eventually be less in demand than it was before, and whose fragile elitist self-mythology about being better than everyone else because they can "write C code without making mistakes" is in danger of collapsing under the weight of evidence that it is a delusion.

[1]: https://alexgaynor.net/2020/may/27/science-on-memory-unsafet... [2]: https://www.whitehouse.gov/oncd/briefing-room/2024/02/26/pre... [3]: https://security.googleblog.com/2022/12/memory-safe-language...


>60-70% of severe CVEs are due to memory safety violations...So your claim that memory unsafe languages are maybe 1% of 1% of the problem is not only completely unsubstantiated, but almost certainly false given all of the available information

Based on your tone and profile you probably aren't interested in better information here, but I'll offer anyway.

The vast majority of cybersecurity attacks, and especially the vast majority of actual incidents, don't involve CVEs. Think recent breaches at Okta, Microsoft, Uber, even SolarWinds a few years ago.

When CVEs do come into play, they're as likely to be logic flaws as anything else. Think Sandworm, Log4Shell, that Apache Struts vulnerability that Equifax didn't patch.

And when memory safety problems do bubble up, well, there are still a bunch of issues that existing memory-safe languages don't actually address. They're real, flawed tools made by real, flawed people, not magic.

So, big picture, memory-safe languages shouldn't be a top-10 priority for very many teams at this point. Maybe someday, though.


> Based on your tone and profile you probably aren't interested in better information here, but I'll offer anyway.

You might be surprised :) I come across as emotional online because I'm genuinely invested in things, not because I'm looking to flame, which means I'm actually far more likely than the average person to change my mind if given good arguments, because I'm actually putting my views on the line. I reacted as harshly as I did to the GP mostly because they didn't actually substantiate anything: they just made a broad claim and followed it up with annoying, dismissive rhetoric.

On the other hand, this is useful information, and I appreciate you providing something to make this discussion more interesting. Your input certainly does make the tradeoff picture more complex, and I'm certainly a fan of letting each team decide what moves are best for it, and agree that moving to memory-safe languages is a long-term thing (or a thing for greenfield projects), not an imminent emergency.

Nevertheless, I still think it's too important to deserve a dismissive "one percent of one percent, we should keep starting new projects in C." For one thing, CVEs might be more important than you let on: while yes, most breaches are the result of social engineering and such, not bad code per se, we as tech people can only really control what our tech does, so we should focus on the breaches that result from technical faults, and wouldn't those usually be CVEs? And if we're looking at CVEs, it seems like we're just looking at things from different perspectives: I'm looking at how to decrease the volume of critical CVEs, and seeing that a handy majority of them are caused by memory safety violations that can be fixed with essentially a tooling change, which seems like a big win to me; whereas you're looking at which CVEs end up being exploited, and saying that since it's about 50/50 logic bugs, memory safety doesn't matter. To me, which ones actually get exploited seems somewhat arbitrary, so we should focus on minimizing how many we produce at all. Moreover, even at 50/50 or so, it seems worthwhile to deal with memory safety, especially since, again, dealing with it just requires using tools that stop you from creating those bugs, whereas trying to "solve" programmers creating logic bugs is like cold fusion unless you wanna program in Coq (and even then...). As for there being a bunch of other issues memory-safe languages can't fix... sure, but why not deal with what we can deal with? It sort of seems like whataboutism.


> they had no path forward

This is, I think, the premise that you and people like me, who think the Amiga could have gone on to do great things, disagree on. Most Amiga fans would say that it totally had a path forward, or at least that there is no evidence it didn't, and that the failure to follow that path was therefore not an inherent technical problem, but a problem of politics and management. Do you have any evidence to the contrary?


As someone trying to get into Amiga retrocomputing as a hobby in today's day and age, I find keeping all the different types of RAM straight very confusing lol


> Amiga was only better 1985-1988. By 1987 PC and Mac caught up and never looked back.

Oh indubitably! I don't think even the most committed Amiga fan, even the ones that speculate about alternate histories, would deny that at all.

The thing is, though, that only happened because Commodore essentially decided that since it had so much of a head start, it could just rest on its laurels and not really innovate or improve anything substantially, instead of constantly pushing forward like all of its competitors did, and so eventually the linear or even exponential curve of other hardware manufacturers' improvements outpaced its essentially flat improvement curve. So it doesn't seem like IBM PCs and eventually even Macs outpacing the power of Amiga hardware was inevitable or inherent from the start.

If they had instead continued to push their lead (actually sticking with the advanced Amiga chipset they were working on before it was canceled and replaced with ECS, for instance), I can certainly see them keeping up with other hardware, transitioning to 3D acceleration chips instead of 2D ones when that happened in the console world, and perhaps even leading the Amiga line to be the first workstation line to have GPUs, further cementing their lead while maintaining everything that made the Amiga great.

Speculating even further: as we are currently seeing with the Apple M-series, having a computer architecture composed of a ton of custom-made, special-purpose chips is actually an extremely effective way of doing things. What if the Amiga still existed in this day and age and had a head start in that direction, with a platform that has a history of being extremely open, well documented, and extensible being the first to do this kind of architecture, instead of it being Apple?

Of course, there may have been fundamental technical flaws with the Amiga approach that made it unable to keep up with other hardware even if Commodore had had the will; I have seen some decent arguments to that effect, namely that since it was using custom vendor-specific hardware instead of the commodity hardware used by everyone else, it couldn't take advantage of cross-vendor compatibility like IBM PCs could, and also couldn't take advantage of economies of scale like Intel could. But who knows!


The thing with Commodore was that, as a company, it was just totally dysfunctional. They basically did little useful development between the C64 and the Amiga (the Amiga being mostly not their development). The Amiga didn't sell very well, especially in the US.

The company was going to shit after the Amiga launched; it took a competent manager to save the company and turn the Amiga into a moderate success.

Commodore didn't really have the money to keep up chip development. They had their own fab, and they would have needed to upgrade that as well, or drop it somehow.

Another example of that is the Acorn Archimedes. Absolutely fucking incredible hardware for the price. Like crushing everything in price vs. performance. But ... it literally launched with a de-novo operating system with 0 applications. And it was a small company in Britain.

The dream scenario is for Sun to realize that they should build a low-cost, all-custom-chip device. They had the margin on the higher-end business to support such a development for 2-3 generations and to get most software ported to it. They also had the software skill to design the hardware and software in a way that would allow future upgrades.


Imagining Sun buying Amiga and making it a lower end consumer workstation to pair with its higher end ones, with all the much-needed resources and interesting software that would have brought to the Amiga is a really cool thought experiment!


Sun did actually approach Commodore to license its technology for a low-end workstation. However, the Commodore CEO at the time declined for unknown reasons.

I don't know what Sun had planned for this tech.

An even more interesting approach for Sun would have been to cooperate with or acquire Acorn. The Acorn Archimedes was an almost perfect low-end workstation product. Its crippling weakness was its lack of an OS and its total lack of applications.

Acorn spent an absolutely absurd amount of money trying to get the OS and applications onto the platform. They spent 3 years developing a new OS, and then realized that this was going nowhere. So they rushed out another new OS. And then they realized that nobody wanted to buy a machine with a compromise OS and no applications. So they had to put in a huge effort to try to fix that. The company simply couldn't sustain that kind of effort on the software side while at the same time building new processors and new machines. It's surprising what they achieved, but it wasn't a good strategy.

Had they just adopted SunOS (BSD), it would have been infinitely better for them. And for Sun to release new high-end and low-end RISC workstations at the same time would have been an absolute bombshell in the market.

Even if you added all the bells and whistles to the system (Ethernet, SCSI, extra RAM), you could be very low priced and absolutely blow pretty much every other system out of the water.


That's really interesting information!

Re Acorn, though: as much better from a market perspective as buying Acorn and releasing RISC- and BSD-based low-end workstations might have been for Sun, I still prefer to imagine a world where the Amiga's unique hardware and software got to live on, perhaps with compatibility layers to run Sun software, but nevertheless preserving a UNIX-like but still non-UNIX OS lineage and a non-generic-PC hardware lineage.


From retrogaming talks by former Commodore engineers, the issues were more about politics and management than technology alone.


That's kind of typical, though, isn't it? When a company falls off, it's almost always not just technical.


That's definitely how it seems to me, which is why I focused on Commodore's poor management decisions first and only mentioned the possible technical issues second.


> kinda wish this was unpacked a bit more, why exactly is a service executable dynamically linking to a library without using any of its symbols or functions, because of systemd.

If I recall correctly based on what I've read about this (I believe from the original mailing list post that noticed the vulnerability), it's because under certain circumstances, in order to enable certain functionality, you might want sshd to be able to talk to systemd, so distros often patch sshd with code to do that. But obviously you need a library to actually implement speaking systemd's protocol, and as it happens, the easiest way to do that is to pull in the entirety of libsystemd, since it has functions for doing that, even though there are at least two other libraries that implement just the communication functionality and are actually designed for non-systemd programs to use. The problem with that is that libsystemd, being a whole standard library for all systemd-related functionality, probably designed mostly for use by programs in the tightly integrated systemd family and as a reference implementation, also includes a lot of other code, including compression code that depends on liblzma, even though none of that is ever used by sshd, which only needs the small subsection of the library it actually uses.


> But obviously you need a library to implement actually speaking system's protocol

That is an overstatement. The docs have a basic self-contained example of how to implement the notification without libraries; it's 50 lines, the majority of which is just error handling:

https://www.freedesktop.org/software/systemd/man/devel/sd_no...
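For reference, the protocol itself is tiny: send a datagram containing "READY=1" to the unix socket named by $NOTIFY_SOCKET. Here's a rough, self-contained sketch along the lines of that man-page example (my own paraphrase with most error handling trimmed, not the canonical code):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Tell the service manager we're ready, without linking libsystemd.
       Returns 1 if a notification was sent, 0 if not running under a
       notify-capable manager, -1 on error. */
    static int notify_ready(void) {
        const char *path = getenv("NOTIFY_SOCKET");
        if (!path || !*path)
            return 0;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        if (strlen(path) >= sizeof(addr.sun_path))
            return -1;
        strcpy(addr.sun_path, path);
        if (addr.sun_path[0] == '@')      /* abstract socket namespace */
            addr.sun_path[0] = '\0';
        socklen_t len = offsetof(struct sockaddr_un, sun_path) + strlen(path);

        int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
        if (fd < 0)
            return -1;

        const char msg[] = "READY=1";
        ssize_t n = sendto(fd, msg, sizeof(msg) - 1, 0,
                           (struct sockaddr *)&addr, len);
        close(fd);
        return n < 0 ? -1 : 1;
    }

A daemon would call something like this once it has finished initializing; sd_notify(3) exists mainly so you don't have to retype this boilerplate.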


The example was only added recently (like 1-2 days ago). Before that, the protocol was only explained in prose, with a statement that it was stable (and guaranteed as a stable API).


Oh, cool! I was just trying to give the maximum benefit of the doubt, but this is good to know. The systemd hate never seems as justified as it'd like to be...


> even though there are at least two other libraries that implement just the communication functionality and are actually designed for non-systemd programs to use.

The important question is: if one of those libraries is used, and then something else pulls in `libsystemd`, will they conflict?


Why would they? There isn't any magic in libsystemd; it's just a normal C lib.


A lot of systemd-replacement shims try to be transparent, which means exporting the same symbols as the real systemd libs and thus causing weird conflicts if you link to the real systemd libs too.

