You can argue about how likely code like that is, but both of these examples would result in a hard compiler error in Rust.
A lot of developers without much (or any) Rust experience get the impression that the Rust borrow checker is there to prevent memory leaks without requiring garbage collection, but that's only 10% of what it does. Most of the actual pain of dealing with borrow checker errors comes from its other job: preventing data races.
And it's not only Rust. The first two examples are far less likely even in modern Java or Kotlin for instance. Modern Java HTTP clients (including the standard library one) are immutable, so you cannot run into the (admittedly obvious) issue you see in the second example. And the error-prone workgroup (where a single typo can get you caught in a data race) is highly unlikely if you're using structured concurrency instead.
These languages are obviously not safe against data races like Rust is, but my main gripe about Go is that it's often touted as THE language that "Gets concurrency right", while parts of its concurrency story (essentially things related to synchronization, structured concurrency and data races) are well behind other languages. It has some amazing features (like a highly optimized preemptive scheduler), but it's not the perfect language for concurrent applications it claims to be.
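To make the "single typo can get you a data race" point concrete, here is a minimal Go sketch (made up for illustration; fetch, the urls slice, and the error handling are all hypothetical, and this is not the example referenced above): the intended per-goroutine `_, e := fetch(u)` becomes an assignment to a shared err, and every goroutine in the WaitGroup now races on it.

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    func fetch(u string) (string, error) {
        if u == "" {
            return "", errors.New("empty url")
        }
        return "body of " + u, nil
    }

    func main() {
        urls := []string{"a", "", "c"}

        var wg sync.WaitGroup
        var err error // shared across all goroutines
        for _, u := range urls {
            wg.Add(1)
            go func(u string) {
                defer wg.Done()
                // Intended per-goroutine handling like `_, e := fetch(u)`;
                // the one-character slip `err =` makes every goroutine write
                // the same variable concurrently, a data race that
                // `go run -race` will flag.
                _, err = fetch(u)
            }(u)
        }
        wg.Wait()
        fmt.Println(err) // which error (if any) you see depends on scheduling
    }

With something like golang.org/x/sync/errgroup, the group collects the errors returned from each closure, so there is no shared error variable left to mistype.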
Rust concurrency also has issues; there are many complaints about async [0], and some Rust developers point to Go as having green threads. The original author of Rust wanted green threads, as I understand it, but Rust evolved in a different direction.
As for Java, there are fibers/virtual threads now, but I know too little about them to comment on them. Go's green-thread story is presumably still good, also relative to most other programming languages. Not that concurrency in Java is bad; it has some good aspects to it.
Rust has concurrency issues for sure. Deadlocks are still a problem, as is lock poisoning, and sometimes dealing with the borrow checker in async/await contexts is very troublesome. Rust is great at many things, but safe Rust only eliminates certain classes of bugs, not all of them.
Regarding green threads: Rust originally started with them, but there were many issues. Graydon (the original author) has "grudgingly accepted" that async/await might work better for a language like Rust[1] in the end.
In any case, I think green threads and async/await are completely orthogonal to data race safety. You can have data-race safety with green threads (Rust was trying to have data-race safety even in its early green-thread era, as far as I know), and you can also fail to have data-race safety with async/await (C# might have fewer data-race safety footguns than Go, but it's still generally unsafe).
In .NET, async/await does not protect you from data races, and you are exposed to them as much as you are in Go, but there is a critical difference: data races in .NET can never (not counting unsafe) result in memory safety violations. They can and will in Go.
While I agree, in practice they can actually be parallel. Case in point: the Java Vert.x toolkit. It uses an event loop and futures, but they have also adopted virtual threads in the toolkit. So you still get your async concepts, but the VTs are your concurrency carriers.
Could you give an example to distinguish them? Async means not-synchronous, which I understand to mean that the next computation to start is not necessarily the next computation to finish. Concurrent means multiple different parts of the program may make progress before any one of them finishes. Are they not the same? (Of course, concurrency famously does not imply parallelism, one counterexample being a single-threaded async runtime.)
Async, for better or worse, in 2025 is generally used to refer to the async/await programming model in particular, or more generally to non-blocking interfaces that notify you when they're finished (often leading to the so-called "callback hell" which motivated the async/await model).
If you are waiting for a hardware interrupt to happen based on something external happening, then you might use async. The benefit is primarily to do with code structure - you write your code such that the next thing to happen only happens when the interrupt has triggered, without having to manually poll completion.
You might have a mechanism for scheduling other stuff whilst waiting for the interrupt (like Tokio's runtime), but even that might be strictly serial.
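As a toy illustration of that last point (a sketch only, using Go's scheduler as a stand-in): with GOMAXPROCS(1) two goroutines interleave on a single OS thread, so you get concurrency with strictly serial execution and no parallelism at all.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        runtime.GOMAXPROCS(1) // only one OS thread runs Go code: no parallelism

        var wg sync.WaitGroup
        for _, name := range []string{"A", "B"} {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                for i := 0; i < 3; i++ {
                    fmt.Println(name, i)
                    runtime.Gosched() // yield so the other goroutine can interleave
                }
            }(name)
        }
        wg.Wait() // both goroutines made progress "concurrently", never in parallel
    }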
And you would get garbled up bytes in application logic. But it has absolutely no way to mess up the runtime's state, so any future code can still execute correctly.
Meanwhile a memory issue in C/Rust and even Go will immediately throw every assumption out the window; the whole runtime is corrupted from that point on. If we are lucky, it soon ends in a segfault; if we are less lucky, it can silently cause much bigger problems.
So there are objective distinctions to be made here, e.g. Rust guarantees that the source of such a corruption can only be an incorrect `unsafe` block, and Java flat out has no platform-native unsafe operations, even under data races. Go can segfault with data races on fat pointers.
Of course every language capable of FFI calls can corrupt its runtime, Java is no exception.
> Meanwhile a memory issue in C/Rust and even Go will immediately throw every assumption out the window; the whole runtime is corrupted from that point on.
In C, yes. In Rust, I have no real experience. In Go, as you pointed out, it should segfault, which is not great but still better than in C: at least it fails early. So I don't understand what your next comment means. What is a "less lucky" example in Go?
> If we are lucky, it soon ends in a segfault; if we are less lucky, it can silently cause much bigger problems.
Silent corruption of unrelated data structures in memory. A segfault only happens if you are accessing memory outside the program's valid address space. But it can just as easily happen that you corrupt something in the runtime, and the GC will wreak havoc, or cause a million other kinds of very hard-to-debug errors.
> But it can just as easily happen that you corrupt something in the runtime, and the GC will wreak havoc
I would love to see an example of this, if you don't mind. My understanding is that the GC in Go actively protects against what you describe. There is no pointer arithmetic in the language. The worst that can happen is a segfault or data corruption due to faulty locking, like the Java example I gave above.
Here is a thread discussing it, but there are multiple posts/comment threads on the topic. In short, slices are fat pointers in the language, and data races over them can cause other threads to observe the slice in an invalid state, which can be used to access memory it shouldn't be able to.
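Roughly, the mechanism looks like the sketch below (made up for illustration, not code from the linked thread): a slice header is several machine words (pointer, length, capacity) written non-atomically, so a racing reader can observe a torn header, say the pointer of a 1-byte slice combined with the length of a large one, and then index past the small allocation without tripping a bounds check. Whether the tear is actually observed depends on compiler, architecture, and timing, but `go run -race` will report the race on s either way.

    package main

    import "fmt"

    func main() {
        short := make([]byte, 1)
        long := make([]byte, 1<<20)

        var s []byte // written and read concurrently with no synchronization

        go func() {
            for {
                s = short // each assignment copies a multi-word slice header
                s = long
            }
        }()

        var sum byte
        for i := 0; i < 100000000; i++ {
            r := s // may copy a torn header: short's pointer with long's length
            if len(r) > 4096 {
                // The bounds check passes against the torn length, but the
                // read may land outside short's 1-byte allocation.
                sum += r[4096]
            }
        }
        fmt.Println(sum)
    }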
Haskell, Erlang/Elixir, and Rust would save you from most of these problems.
Then, of course, there are the languages that are still so deeply single-threaded that you simply can't write concurrency bugs in them in the first place, or you have to go way out of your way to get to them, not because they're better than Go but because they don't even play the game.
However, it is true the list is short, and likely a lot of people taking the opportunity to complain about Go are working in languages where everything they are so excited to complain about is still entirely possible in their own favorite language (with varying affordances and details around the issues), or in a language that, as mentioned, simply isn't playing the game at all, which doesn't really count as being any better.
1) Null pointer derefs can sometimes lead to privilege escalation (look up "mapping the zero page", for instance). 2) As I understand it (could be off base), if you're already doing static checking for other memory bugs, eliminating null derefs comes "cheap". In other words, it follows pretty naturally from the systems that provide other memory safety guarantees (such as the famous "borrow checker" employed by Rust).
UB is in fact not worse than a memory safety issue, and the original question is a good one: NULL pointer dereferences are almost never exploitable, and preventing exploitation is the goal of "memory safety" as conceived of by this post and the articles it references.
> UB is in fact not worse than a memory safety issue
The worst case of UB is worse than the worst case of most kinds of non-UB memory safety issues.
> NULL pointer dereferences are almost never exploitable
Disagree; we've seen enough cases where they become exploitable (usually due to the impact of optimisations) that we can't say "almost never". They may not be the lowest hanging fruit, but they're still too dangerous to be acceptable.
Can I ask you to be specific here? The worst memory corruption vulnerabilities enable trivial remote code execution and full and surreptitious reliable takeovers of victim machines. What's a non-memory-corruption UB that has a worse impact? Thanks!
I know we've talked about this before! So I figure you have an answer here.
> Can I ask you to be specific here? The worst memory corruption vulnerabilities enable trivial remote code execution and full and surreptitious reliable takeovers of victim machines. What's a non-memory-corruption UB that has a worse impact?
I guess just the same kind of vulnerability, plus the fact that there are no possible countermeasures even in theory. I'm not sure I have a full picture of what kind of non-UB memory-corruption cases lead to trivial remote code execution, but I imagine them as being things like overwriting a single segment of memory. It's at least conceivable that someone could, with copious machine assistance, write a program that was safe against any single segment overwrite at any point during its execution. Even if you don't go that far, you can reason about what kinds of corruption can occur and do things to reduce their likelihood or impact. Whereas UB offers no guarantees like that, so there's no way to even begin to mitigate its impact (and this does matter in practice - we've seen people write defensive null checks that were intended to protect their programs against "impossible" conditions, but were optimised out because the check could only ever fail on a codepath that had been reached via undefined behaviour).
I'm sorry, I'm worried I've cost us some time by being unclear. It would be easy for me to cite some worst-case memory corruption vulnerabilities with real world consequences. Can you do that with your worst-case UB? I'm looking for, like, a CVE.
> It would be easy for me to cite some worst-case memory corruption vulnerabilities with real world consequences.
Could you do that for a couple of non-UB ones then? That'll make things a lot more concrete. As far as I can remember most big-name memory safety vulnerabilities (e.g. the zlib double free or, IDK, any random buffer overflow like CVE-2020-17541) have been UB.
Wasn't CVE-2020-17541 a bog-standard stack buffer overflow? Your task is to find a UB vulnerability that is not a standard memory corruption vulnerability, or one caused by (for instance) an optimizer pass that introduces one into code that wouldn't otherwise have a vulnerability.
Cases that are both memory corruption and UB tell us nothing about one being worse than the other. My initial claim in this thread was "the worst case of UB is worse than the worst case of most kinds of non-UB memory safety issues" and I stand by that; if your position is that memory corruption is worse then I'd ask you to give examples of non-UB memory corruption having worse outcomes.
I believe the point is if something is UB, like NULL pointer dereference, then the compiler can assume it can't happen and eliminate some other code paths based on that. And that, in turn, could be exploitable.
Yes, that part was clear. The certainty of a vulnerability is worse than the possibility of a vulnerability, and most UB does not in fact produce vulnerabilities.
Most UB results in miscompilation of intended code by definition. Whether or not they produce vulnerabilities is really hard to say, given the difficulty of finding them: you'd have to read the machine code carefully to spot the issue, and in C/C++ that could be basically anywhere in the codebase.
You stated explicitly it isn't, but the compiler optimizing away null pointer checks or otherwise exploiting accidental UB literally is a thing that has come up several times in known security vulnerabilities. Its probability of incidence is less than just crashing in your experience, but that doesn't necessarily mean it's not exploitable either - it could just mean it takes a more targeted attack to exploit, and thus your Bayesian prior for exploitability is incorrectly trained.
But not in reality. For example, a signed overflow is most likely (but not always) compiled in a way that wraps, which is expected. A null pointer dereference is most likely (but not always) compiled in a way that segfaults, which is expected. A slightly less usual thing is that a loop is turned into an infinite one or an overflow check is elided. An extremely unusual and unexpected thing is that signed overflow directly causes your x64 program to crash. A thing that never happens is that demons fly out of your nose.
You can say "that's not expected because by definition you can't expect anything from undefined behaviour" but then you're merely playing a semantic game. You're also wrong, because I do expect that. You're also wrong, because undefined behaviour is still defined to not shoot demons out of your nose - that is a common misconception.
Undefined behaviour means the language specification makes no promises, but there are still other layers involved, which can make relevant promises. For example, my computer manufacturer promised not to put demon-nose hardware in my computer, therefore the compiler simply can't do that. And the x64 architecture does not trap on overflow, and while a compiler could add overflow traps, compiler writers are lazy like the rest of us and usually don't. And Linux forbids mapping the zero page.
> Doesn't null-pointer-dereference always crash the application?
No. It's undefined behaviour, it may do anything or nothing.
> Is it only an undefined-behavior because program-must-crash is not the explicitly required by these languages' specs?
I don't understand the question here. It's undefined behaviour because the spec says it's undefined behaviour, which is some combination of because treating it as impossible allows many optimisation opportunities and because of historical accidents.
Compilers are allowed to assume undefined behavior doesn't happen, and dereferencing an invalid pointer is undefined behavior. You don't have to like it, but that's how it is.
No, it does not always crash. This is a common misconception caused by thinking about the problem on the MMU (hardware) level, where reading a null pointer predictably results in a page fault. If this was the only thing we had to contend with, then yes, it would immediately terminate the process, cutting down the risk of a null pointer dereference to just a crash.
The problem is instead in software: it is undefined behavior, so compilers may optimize it out and generate code that assumes it never happens, which often causes nightmarish silent corruption or control-flow issues rather than an immediate crash. These optimizations are common enough for this to be a relatively common failure mode.
There is a bit of nuance in that on non-MMU hardware such as microcontrollers and embedded devices, reading a null pointer does not actually trigger an error at the hardware level, but instead gives you access to address 0 in memory. This is usually either a feature (because it's a nice place to put global data) or a gigantic pitfall of its own (because it's the most likely place for accidental corruption to cause a serious problem, and reading it inadvertently may reveal sensitive global state).
Only if that memory page is unmapped, and only if the optimizer doesn't detect that it's a null pointer and start deleting verification code because derefing null is UB, and UB is assumed to never happen.
My favorite monospace font has been "Ubuntu Mono" for ages.
As an engineer, I like to see -- for lack of a better word -- some taste, instead of characters being too formal and too symmetric. Ubuntu and Ubuntu Mono satisfy this to a good extent without being too much, like Comic Sans.
The closest font with a similar taste, which I found recently, is Mononoki.
I've also been using Ubuntu Mono for ages.
Must be 15 years now! I have tried many other fonts, just to see if I'd enjoy a change, but the most jarring thing is how much space all other fonts seem to have. Ubuntu Mono gives me way more lines on screen without setting the font size far too small. Is that the "condensed" property being mentioned in this thread? I've asked about this before, but nobody has ever said "condensed".
Yes, indeed. "Condensed" means smaller width and less space between characters in general. It allows for slightly more code to be shown on the screen horizontally.