
They somehow got him doing a cameo on this upcoming Survivor season and it's going to be terrible.

> The video games industry needs to do the same.

Video games are a subset of entertainment, whose TAM is capped by the population the game reaches, the average amount of money people are willing to spend per hour, and the average number of hours they can devote to entertainment.

In other words, every dollar you make off a game is a dollar that wasn't spent on another game, or trip to the movies, or vacation. And every hour someone plays your game is an hour they didn't spend working, studying, sleeping, eating, or doing anything else in the attention economy.

What makes this different from other markets is that there is no value creation or new market you can create from the aether to generate 10x/100x/1000x growth. And there's no rising tide to lift your boat and your competitors - if you fall behind, you sink.

The only way to grow entertainment businesses by significant multiples is by increasing discretionary income, decreasing working hours, or growing the population with discretionary time and money. But those are societal-level problems that take governments and policy to solve, certainly not venture capital.


I shudder to think about the impact of concurrent data structures fsync'ing on every write because the programmer can't reason about whether the data is in memory, where a handful of atomic fences/barriers are enough to reason about the correctness of the operations, or on disk, where those operations simply do not exist.

Also linear regions make a ton of sense for disk, and not just for performance. WAL-based systems are the cornerstone of many databases and require the ability to reserve linear regions.
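
To sketch what I mean (a toy example, all names made up): reserving a linear region in a WAL is a visibility problem you can solve with one atomic op, while committing it is a durability problem that needs an explicit sync, and the two must not be conflated:

  /* Hypothetical WAL: writers claim contiguous byte ranges. */
  #include <stdatomic.h>
  #include <unistd.h>

  struct wal {
      int fd;            /* log file descriptor */
      _Atomic long tail; /* next free byte offset in the log */
  };

  /* Reserve `len` contiguous bytes; returns the region's start.
     This is the visibility half: a single atomic fetch-add. */
  long wal_reserve(struct wal *w, long len) {
      return atomic_fetch_add(&w->tail, len);
  }

  /* Fill the reserved region, then force it to stable storage.
     This is the durability half, which has no RAM equivalent. */
  int wal_commit(struct wal *w, long off, const void *buf, size_t len) {
      if (pwrite(w->fd, buf, len, off) != (ssize_t)len)
          return -1;
      return fdatasync(w->fd);
  }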


Linear regions are mostly a figment of the imagination in real life, but they are a convenient abstraction.

Linear regions are nearly impossible to guarantee, unless the underlying hardware has specific, controller-level provisions.

  1) For RAM, the MMU will obscure the physical address of a memory page, which can come from a completely separate memory bank. It is up to the VMM implementation and its heuristics to ensure contiguous allocation, coalesce unrelated free pages into a new, large allocation, or map in a free page from a «distant» location.

  2) Disks (the spinning rust variety) are not that different. A freed block can be provided from the start of the disk. However, a sophisticated file system like XFS or ZFS (and others like them) will do its best to allocate a contiguous block.

  3) Flash storage (SSDs, NVMe) simply «lies» about the physical blocks and does it for a few reasons (garbage collection and the transparent reallocation of ailing blocks – to name a few). If I understand it correctly, the physical «block» numbers are hidden even from the flash storage controller and firmware themselves.
The only practical way I can think of to guarantee contiguous allocation of blocks unfortunately involves a conventional hard drive with a dedicated partition created just for the WAL. In fact, this is how Oracle installations used to work – they required a dedicated raw device to bypass both the VMM and the file system.

When RAM and disk(s) are logically the same concept, a WAL can be treated as an object of the «WAL» type, with certain properties specific to this object type alone to support the WAL's peculiarities.


Ultimately everything is an abstraction. The point I'm making is that linear regions are a useful abstraction for both disk and memory, but that's not enough to unify them. In particular, memory cares about the visibility of writes to other processes/threads, whereas disk cares about the durability of those writes. This is an important distinction that programmers need to keep in mind for correctness.

Perhaps a WAL was a bad example. Ultimately you need the ability to atomically reserve a region of a certain capacity and then commit it durably (or roll back). Perhaps there are other abstractions that can do this, but with linear memory and disk regions it's exceedingly easy.

Personally I think file I/O should have an atomic CAS operation on a fixed maximum number of bytes (just like shared memory between threads and processes) but afaik there is no standard way to do that.
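
To illustrate, here's roughly how you can fake it today with POSIX advisory record locks (fcntl F_SETLKW); the names are mine, and unlike a real CAS it only excludes other processes that cooperate by taking the same lock:

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  /* CAS `len` bytes at offset `off`: if the file holds `expected`,
     write `desired`. Returns 1 on swap, 0 on mismatch, -1 on error. */
  int file_cas(int fd, off_t off, const void *expected,
               const void *desired, size_t len) {
      struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                          .l_start = off, .l_len = (off_t)len };
      char cur[64];
      int ret = -1;
      if (len > sizeof cur) return -1;    /* keep the sketch simple */
      if (fcntl(fd, F_SETLKW, &lk) == -1) /* lock the byte range */
          return -1;
      if (pread(fd, cur, len, off) == (ssize_t)len) {
          if (memcmp(cur, expected, len) == 0)
              ret = (pwrite(fd, desired, len, off) == (ssize_t)len) ? 1 : -1;
          else
              ret = 0;
      }
      lk.l_type = F_UNLCK;                /* release the lock */
      fcntl(fd, F_SETLK, &lk);
      return ret;
  }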


I do not share the view that the unification of RAM and disk requires or entails linear regions of memory. In fact, the unification reduces the question of «do I have a contiguous block of size N to do X» to a mere «do I have enough memory to do X?», commits and rollbacks inclusive.

The issue of durability, however, remains a valid concern in either scenario, but the responsibility to ensure durability is delegated to the hardware.

Furthermore, commits and rollbacks are not sensitive to memory linearity anyway; they are sensitive to the durability of the operation, and they may be sensitive to latency, although that is not a frequently occurring constraint. In the absence of a physical disk, commits/rollbacks can be implemented entirely in RAM today using software transactional memory (STM) – see the relevant Haskell library and the white paper on STM.
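
A toy version of the publication step in C (this is not GHC's STM, just the commit/rollback idea; all names are mine): build the new state privately, publish it with one atomic pointer swap, and roll back by discarding the private copy:

  #include <stdatomic.h>
  #include <stdlib.h>

  struct state { long balance; };
  _Atomic(struct state *) current; /* last committed version,
                                      assumed initialized elsewhere */

  int transfer(long delta) {
      struct state *old, *tx = malloc(sizeof *tx);
      if (!tx) return -1;
      do {
          old = atomic_load(&current);
          *tx = *old;               /* private working copy */
          tx->balance += delta;     /* the "transaction" body */
          if (tx->balance < 0) {    /* rollback: just drop the copy */
              free(tx);
              return -1;
          }
          /* commit: retry if someone else committed in the meantime */
      } while (!atomic_compare_exchange_weak(&current, &old, tx));
      /* NB: `old` leaks here; real STMs need safe memory reclamation */
      return 0;
  }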

Lastly, when everything is an object in the system, the way objects communicate with each other also changes, from the traditional model of memory sharing to message passing, transactional outboxes, and the like, where objects encapsulate their internal state without allowing other objects to access it – courtesy of the object-oriented address space protection, which is where this conversation started.


otoh, WAL systems are only necessary because storage devices present an interface of linear regions. the WAL system could move into the hardware.

That is good evidence that Google is dying because it takes more than one search query to find what you want.

It's really just glibc

It's really just not. GTK is on its fourth major version. Wayland broke backwards compatibility with tons of apps.

Multiple versions of GTK or Qt can coexist on the same system. GTK2 is still packaged on most distros; I think, for example, GIMP only switched to GTK3 last year or so.

GTK's update schedule is very slow, and you can run multiple major versions of GTK on the same computer, so it's not the right argument. When people say GTK backwards compatibility is bad, they are referring in particular to its breaking changes between minor versions. It was common for themes and apps to break (or work differently) between minor versions of GTK+ 3, as deprecations were sometimes accompanied by the breaking of the deprecated code. (Anyway, before Wayland support became important, people stuck to GTK+ 2, which was simple, stable, and still supported at the time; and everyone had it installed on their computer alongside GTK+ 3.)

Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it only happened twice since 2000.


The difference is that you can statically link GTK+, and it'll work. You can't statically link glibc, if you want to be able to resolve hostnames or users, because of NSS modules.

Your statically linked GTK+ won't work with the user's UI styles, though, and probably won't support everyone's input methods or accessibility tools.

Static linking itself doesn't prevent modules. There's https://github.com/pikhq/musl-nscd for example

Not inherently, but static linking to glibc will not get you there without substantial additional effort, and static linking to a non-glibc C library will by default get you an absence of NSS.
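
For context (a hedged illustration): the NSS behavior in question is driven by /etc/nsswitch.conf, along the lines of

  passwd: files systemd
  group:  files systemd
  hosts:  files dns

and glibc satisfies each lookup by dlopen()ing the matching plugin (libnss_files.so.2, libnss_dns.so.2, etc.) at runtime, which is exactly the part a fully static binary can't do.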

Can't we just freeze glibc, at least from an API version perspective?

People will complain that glibc doesn't implement what they want.

The solution is simply to build against the oldest glibc version you want to support - we should focus on making that simpler, ideally just a compiler flag.


The problem is not the APIs, it's symbol versions. You will routinely get loader errors when running software compiled against a newer glibc than what a system provides, even if the caller does not use any "new" APIs.

glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older macOS versions from a newer toolchain.
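
Until something like that exists, the closest workaround I know of is the .symver directive, which pins a reference to an older versioned symbol at build time. A minimal sketch: the version strings are per-architecture (GLIBC_2.2.5 is the x86-64 baseline), it has to be repeated for every symbol you use, and you may need -fno-builtin so the compiler actually emits the call:

  #include <string.h>

  /* Bind our memcpy references to the old baseline version instead
     of whatever the build machine's newer glibc picks by default. */
  __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

  int main(void) {
      char dst[16];
      memcpy(dst, "hello", 6); /* resolves to memcpy@GLIBC_2.2.5 */
      return 0;
  }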


Yes, so that's why freezing the glibc symbol versions would help. If everybody uses the same version, you cannot get conflicts (at least after it has rippled through and everybody is on the same version). The downside is that we can't add anything new to glibc, but I'd say given all the trouble it produces, that's worth accepting. We can still add bugfixes and security fixes to glibc, we just don't change the APIs of the symbols.

It should not be necessary to freeze it. glibc is already extremely backwards compatible. The problem is people distributing programs that request the newest symbol versions even though they do not really require them, which then fails on systems with an older glibc. At least this is my understanding.

The actual practical problem is not glibc but the constant GUI / desktop API changes.


Making an executable “request” older symbol versions is incredibly painful in practice. Basically every substantial piece of binary software either compiles against an ancient Debian sysroot (that has to have workarounds for the ancient part) or somehow uses a separate glibc copy from the base system (Flatpak, etc.). The first greatly complicates building the software, the second is recreating Windows’ infamous DLL hell.

Both are way more annoying than anything the platforms without symbol versioning suffer from because of its lack. I’ve never encountered anyone who has packaged binaries for both Linux and Windows (or macOS, or the BSDs) that missed anything about Linux userspace ABIs when working with another platform.


Is it painful? Why? You need a build environment that has the old libraries. It does not have to be ancient, just exactly what you need.

It has to be as ancient as the oldest glibc you want to support, usually a Red Hat release with a very old version and manual security backports. These can have nearly decade-old glibc versions, especially if you care about extended support contracts.

You generally have difficulty actually running contemporary build tools on such a thing, so the workaround is to use --sysroot against what is basically a chroot of the old distro, as if cross-compiling. But there are still workarounds needed if the version is old enough. Chrome has a shorter support window than some Linux binaries, but you can see the gymnastics they do to create their sysroot in some Python scripts in the chromium repo.

On Windows, you install the latest SDK and pass a target version flag when setting up the compiler environment. That’s it. macOS is similar.


The glibc has to be as ancient as the oldest one you want to support. The rest does not.

If the problem is getting the build tools to work in your old chroot, then the problem is still "people distributing programs that request the newest version", i.e. the build tool developers / packagers. I generally do not have this problem, but I am a C programmer building computational tools.

Look, it’s not that complicated. If you just build your software with gcc or whatever in a docker container with pinned versions, put the binary on your website, and call it a day, 5 minutes later someone is going to complain it doesn’t work on their 3 year old Linux Mint install. The balkanization of Linux is undeniable at this point. If you want to fix this problem without breaking anything else, you have to jump through hoops (and glibc is far from the only culprit).

You can see what the best-in-class hoop jumping looks like in a bunch of open source projects that do binary releases — it’s nontrivial. Or you can see all the machinations that Flatpak goes through to get userspace Mesa drivers etc. working on a different glibc version than the base system. On every other major OS, including other free software ones, this isn’t a problem. Like at all. Windows’ infamous MSVC versioning is even mostly a non-issue at this point, and all you had to do before was bundle the right version in your installer. I’ll take a single compiler flag over the Linux mess every day of the week.

Do you distribute a commercial product to a large Linux userbase, without refusing to support anything that isn't Ubuntu LTS? I'm kind of doubting that, because me and everyone I know who didn't go with a pure Electron app (which mostly solves this for you with their own build process complexity) has wasted a bunch of time on this issue. Even statically linking with musl has its futziness, and that's literally impossible for many apps (e.g. anything that touches a GPU). The Linux ecosystem could make a few pretty minor attitude adjustments and improve things with almost no downside, but it won't. So the year of the Linux desktop remains elusive.


> The balkanization of Linux is undeniable at this point.

Again this same old FUD.

The situation would be no different if there was only a single distro - you would still need to build against the oldest version of glibc (and other dependencies) you want to support.


In principle you can patch your binary to accept the old local version, though I don't remember ever getting it to work right. Anyway, for the brave or foolhardy, here's the gist:

  # point the binary at the system's dynamic loader
  patchelf --set-interpreter /lib/ld-linux-x86-64.so.2 "$APP"
  # have it search /lib for its shared libraries
  patchelf --set-rpath /lib "$APP"

> [...] brave or foolhardy, [...]

Heed the above warning, as down this rpath, madness surely lies!

Exhibit A: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Exhibit B: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Exhibit C: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Oh, sure, rpath/runpath shenanigans will work in some situations but then you'll be tempted to make such shenanigans work in all situations and then the madness will get you...

To save everyone a click here are the first two bullet points from Exhibit A:

* If an executable has `RPATH` (a.k.a. `DT_RPATH`) set but a shared library that is a (direct or indirect(?)) dependency of that executable has `RUNPATH` (a.k.a. `DT_RUNPATH`) set then the executable's `RPATH` is ignored!

* This means a shared library dependency can "force" loading of an incompatible [(for the executable)] dependency version in certain situations. [...]

Further nuances regarding LD_LIBRARY_PATH can be found in Exhibit B but I can feel the madness clawing at me again so will stop here. :)


Yes, you can do this. Thanks for mentioning it; I was interested and checked how you would go about it.

1. Delete the shared symbol versioning as per https://stackoverflow.com/a/73388939 (patchelf --clear-symbol-version exp mybinary)

2. Replace libc.so with a fake library that has the right version symbol, using a version script, e.g. version.map:

  GLIBC_2.29 { global: *; };

With an empty fake_libc.c `gcc -shared -fPIC -Wl,--version-script=version.map,-soname,libc.so.6 -o libc.so.6 fake_libc.c`

3. Hope that you can still point the symbols back to the real libc (either by writing a giant pile of dlsym C code, or some other way, I'm unclear on this part)

Ideally, glibc would stop checking the version if it's not actually marked as needed by any symbol; I'm not sure why it doesn't (technically it's the same thing in the normal case, so maybe performance?).


Ah you can use https://github.com/NixOS/patchelf/pull/564

So you can do e.g. `patchelf --remove-needed-version libm.so.6 GLIBC_2.29 ./mybinary` instead of replacing glibc wholesale (steps 2 and 3), and assuming everything the executable uses from glibc is ABI compatible, this will just work (it worked for a small binary for me, YMMV).


That's exactly what apgcc from Autopackage provided (20 years ago). https://github.com/DeaDBeeF-Player/apbuild

But compiling in a container is easier and also solves other problems.


We definitely can, because almost every other POSIX libc doesn't have symbol versioning (or MSVC-style multi-version support). It's not like the behavior of "open" changes radically all the time, such that you need to know exactly which versioned symbol it linked against. It's really just an artifact of decisions from decades ago, and the cure is way worse than the disease.

Or just pre-install all the versions on each distro and pick the right one at load-time

This is called "vendoring" and any package manager that doesn't totally suck supports it, including cargo.


This isn't a problem in other languages because most other languages don't have strong, statically typed errors that need to compose across libraries. And those that do have the same problem.

The general argument against adding something to `std` is that once the API is stabilized, it's stabilized forever (or at least for an edition, but practically I don't think many APIs have been changed or broken across editions in std).

The aversion to dependencies is just something you have to get over in Rust imo. std is purposefully kept small and that's a good thing (although it's still bigger and better than C++, which is the chief language to compare against).


I don't know, I've known many people who struggle with exams even when they know the material, and even more people who excel at exams while learning nothing. Falling back on any kind of exam is just a recipe for more rote learning, and that doesn't create better people (although possibly better readers, which we need).

(Preface: I am not a teacher, and I understand this is a hot take). At the end of the day there's an unwillingness from every level of education (parents, teachers, administrators, school boards, etc) to fight against the assault on intelligence by tech.

I don't think kids should have access to the public internet until they're adults, and certainly should never have it in schools except in controlled environments. Schools could create a private network of curated sites and software. Parents don't have to give their kids unfettered access to computers. It's entirely in the realm of possibility to use computers and information networks in schools, accessed by children, designed to make it impossible to cheat while maximizing their ability to learn in a safe environment.

We don't build it because we don't want to. Parents don't care enough, teachers are overworked, administrators are inept, and big tech wants to turn kids into little consumers who lack critical thinking and are addicted to their software.


Re: test anxiety

I see this line of argument more and more over the last decade and it makes me feel heartless for my opinion.

But if you know the material and cannot apply it in an examination, then you either don't actually know the material or don't have the emotional (for lack of a better term) control to apply it in critical situations. Both are valid reasons to be marked down.


> don't have the emotional (for lack of better term) control to apply it in critical situations

No, not really, it just means you couldn't apply it in this one particular anxiety-inducing situation.

If someone finds it easier to display their knowledge in a certain way then school should strive to accommodate that as best they can (obviously there are practical limitations to this).

Mental health should be left to mental health professionals, because you won't achieve anything by punishing students for their mental health struggles; you just make them hate you, hate school, and make their anxiety even worse.


I would argue that "knowledge" is an almost meaningless concept on its own. What assessments measure is a more complex form of "competency", and the competency of being able to write an essay on a topic is different from the competency of passing an MCQ quiz about it and both are different from being able to apply it in the field.

I don't have a clear solution, other than to have the assessments depend on what we're preparing people for. As an extreme example, I don't care how good of an essay a surgeon or anesthesiologist can write if they can't apply that under pressure.


I'm kind of the opposite, and it concerns me. Not much, just a little.

I react very well in tests and work tasks if I have some level of anxiety. What I want is to do the same while feeling calm and happy.

I don't want to rely on increased cortisol levels to get excellent results.


You're replying to something I didn't say.

But on the topic of test anxiety: I think intentionally causing emotional distress to children for the purposes of making a bad evaluation of their studies is cruel. It's a kind of cycle of trauma - "I did this, so you must too." We use grades to make value judgements of the quality of our children, when what we should be measuring is the ability of our schools to educate them, not how well-educated _the kids are_. The system is backwards, basically, and the fact that it causes distress as a side effect is something that _should_ be managed - not ignored.

However, anxiety exists, and not teaching children to manage it is also bad. One of the really good things I've seen locally is that my school districts (the same ones I went through as a child) focus on emotional education at the grade school level much more than when I was a kid, and I notice that the kids have much better emotional regulation than my generation did.


This is mostly on the parents.

Children should and must be allowed to fail. In fact, failure is the default outcome most of the time.

I wish I had learned in childhood that doing my best was enough. Not being the best, just doing my best.

But no, this is a lesson I learned from sim racing, as an adult, during the COVID-19 quarantine, as there was not much else to do.

What did I learn from sim racing:

— If I make a mistake, and I keep thinking about that mistake, I will just make more mistakes. Mental recovery, and not punishing myself, is a must. I must go back to mental clarity as fast as possible, to avoid making another mistake.

— Sometimes, doing my best is not enough. It can even be worthless. Other people make mistakes, and that will ruin your race. In a long season, this can be offset by consistently good results. “It is possible to commit no mistakes and still lose. That is not a weakness; that is life.” — Jean-Luc Picard

— I should not respect a driver just because he has a famous last name. But I must respect that he did 600 laps preparing for the race, and my respect should take the form of practicing just as much. Preparation is important; we can't just go to a new track and expect to win. The winner is usually the best combination of general experience and event preparation.

— Nothing feels better than a victory that's hard-earned, against a talented group. Easy victories just feel cheap in comparison.


Apologies, I must have misinterpreted what you were getting at.

> I think intentionally causing emotional distress to children for the purposes of making a bad evaluation of their studies is cruel.

Is this ever the intended purpose?


> I've known many people that struggle with exams even if they know the material and even more people that excel with exams that learn nothing.

This point is overstated. The former did not know the material as well as they think, and frankly, unless the exam was super badly designed, the latter don't exist.

There are some people who fail in stressful situations, but not that many of them. If you have met many people like that, you were most likely in a culture where people did not learn well and then blamed their inability to test.

But even more importantly, the people who pass tests again and again without learning anything are not a thing. There are some badly designed tests here and there, occasionally. But in most cases, even if the test is not measuring the correct thing, you won't pass it without learning and knowing things.


> But even more importantly, the people who pass tests again and again without learning anything are not a thing.

I simply cannot count the number of times I have to reteach fundamentals to people that must have passed tests on those fundamentals.


Forgetting is normal. And it is not the same thing as never knowing in the first place or learning nothing for the test.

The act of making the flash cards is more important than having them when you've finished.


I disagree, assuming that your goal is being able to recall the backside of the flashcard. Making the flashcards is equivalent to 2 or 3 reviews IMO.


Absolutely not. Actually having to construct the flashcards embeds the information in your head at a deeper level than 10 reviews could.

Same with taking notes in class. You may never look at them again, but most of the benefit comes from having to organize the information in the first place.


I think it depends on the student, but I think you are probably overall correct. As someone who hated reading most of my textbooks, there is absolutely no way I am going to extract relevant flashcard material out of them more effectively than an LLM can. I'm going to get bored, and my mind will probably wander and start thinking about other things while I am "reading".


I assure you that if you have that problem, going through flashcards will be even worse. Flashcards are the most mind-numbingly boring way to learn.

The goal is not "to produce flashcards". The goal is to know the content. And learning off randomly selected factoids without an overall structure is just a dumb way to learn.


Writing stuff down by hand is well known to leave a bigger mark on memory than typing; I'm not sure what you're comparing it to.


Both can be bad. What's hard, though, is convincing the people who work on these things that they're actively harming society (in other words, most people working on ads and AI are not good people; they're the bad guys but don't realize it).

