I saw that in Swift, a method can declare it throws an exception, but it doesn't (can't) declare the exception _type_. I'm not a regular user of Swift (I usually use Java - I'm not sure what other languages you are familiar with), but just thinking about it: isn't it strange that you don't know the exception type? Isn't this kind of like an untyped language, where you have to read the documentation on what a method can return? Isn't this a source of errors itself, in practice?
> isn't it strange that you don't know the exception type?
Java experience taught us that, when writing an interface, it is common not to know the exception type. You often can’t know, for example, whether an implementation can time out (e.g. because it will make network calls) or will access a database (and thus can throw RollbackException). Consequently, when implementing an interface, it is common in Java to wrap exceptions in an exception of the type declared in the interface (https://wiki.c2.com/?ExceptionTunneling)
Yes I know Java and the challenges with exceptions there (checked vs unchecked exceptions, errors). But at least (arguably) in Java, the method declares what class the exceptions are (for checked exceptions at least). I personally do not think wrapping exceptions in other exception types, in Java, is a major problem. In Swift, you just have "throws" without _any_ type. And so the caller has to be prepared for everything: a later version of the library might suddenly throw a new type of exception.
One could argue Rust is slightly better than Java, because in Rust there are no unchecked exceptions. However, Rust has panics, which are in a way like unchecked exceptions, and which you can also catch (with panic unwinding). But at least in Rust, regular errors (Result values) are fast.
> And so the caller has to be prepared for everything: a later version of the library might suddenly return a new type of exception.
But you get the same with checked exceptions in Java. Yes, an interface will say foo can only throw FooException, but if you want to do anything when you get a FooException, you have to look inside to figure out what exactly was wrong, and what’s inside that FooException isn’t limited.
A later version of the library may suddenly throw a FooException with a BarException inside it.
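For what it's worth, the same "tunneling" shape exists in Rust land: the declared error type is fixed, but what it carries inside is open-ended. A minimal sketch with hypothetical names (FooError, do_foo):

```rust
use std::error::Error;
use std::fmt;

// Hypothetical interface-level error: the only type the API promises to return.
#[derive(Debug)]
struct FooError {
    // The "tunneled" cause can be any error type the implementation ran into.
    source: Box<dyn Error>,
}

impl fmt::Display for FooError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "foo failed: {}", self.source)
    }
}

impl Error for FooError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&*self.source)
    }
}

fn do_foo() -> Result<(), FooError> {
    // The implementation may fail in ways the interface never mentioned
    // (here: an I/O error); it gets wrapped into the declared FooError.
    std::fs::read_to_string("config.txt")
        .map_err(|e| FooError { source: Box::new(e) })?;
    Ok(())
}

fn main() {
    if let Err(e) = do_foo() {
        // The caller only sees FooError; the concrete cause is behind source().
        println!("{e}");
    }
}
```

Whether the inner error is an I/O failure or something else entirely is invisible in the signature, just like the BarException inside a FooException.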
What I liked about Boost's error_code[1], which is part of the standard library now, is that it carries not just the error but the error category, and with it a machinery for categories to compare error_codes from other categories.
So as a user you could check for a generic file_not_found error, and if the underlying library uses http it could just pass on the 404 error_code with an http_category say, and your comparison would return true.
This allows you to handle very specific errors yet also allow users to handle errors in a more generic fashion in most cases.
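Rust's standard library has a loose analog of that idea in std::io: concrete errors carry an ErrorKind, so callers can match on the generic "not found" case without caring which layer produced it. This isn't Boost's category machinery, just a comparable sketch:

```rust
use std::io::{Error, ErrorKind};

fn load() -> Result<String, Error> {
    // Some lower layer fails; a library wrapping e.g. HTTP could map its own
    // 404 onto ErrorKind::NotFound in the same way.
    std::fs::read_to_string("missing.txt")
}

fn main() {
    match load() {
        Ok(s) => println!("{s}"),
        // Generic check that works regardless of which layer produced the error.
        Err(e) if e.kind() == ErrorKind::NotFound => println!("not found, using a default"),
        Err(e) => println!("other error: {e}"),
    }
}
```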
I say limited because the compiler doesn't (yet, as of 6.2) perform typed throw inference for closures (a closure that throws is inferred to throw `any Error`). I have personally found this sufficiently limiting that I've given up using typed throws in the few places I want to, for now.
Typed exceptions are unlike typed parameters or return values. They don’t just describe the interface of your function, but expose details about its implementation and constrain future changes.
That’s a huge limitation when writing libraries. If you have an old function that declares that it can throw a DatabaseError, you can’t e.g. add caching to it. Adding CacheError to the list of throwable types is an API breaking change, just like changing a return type.
Swift has typed errors now, but they should be used carefully, and probably shouldn't be the default to reach for.
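The same tension shows up with Rust's typed errors: adding a variant to a public error enum is a breaking change, which is why libraries often mark such enums #[non_exhaustive]. A small sketch with made-up names:

```rust
// Public error type of a hypothetical storage library. Without #[non_exhaustive],
// adding the Cache variant in a later release would break every downstream crate
// that matches exhaustively on this enum.
#[non_exhaustive]
#[derive(Debug)]
pub enum StorageError {
    Database(String),
    Cache(String), // added later; non-breaking only because of #[non_exhaustive]
}

pub fn load_record(id: u64) -> Result<String, StorageError> {
    Err(StorageError::Database(format!("record {id} not found")))
}

fn main() {
    match load_record(42) {
        Ok(v) => println!("{v}"),
        Err(StorageError::Database(msg)) => println!("database error: {msg}"),
        // Downstream crates matching a #[non_exhaustive] enum must keep a
        // catch-all arm, so future variants don't break them at compile time.
        Err(other) => println!("other error: {other:?}"),
    }
}
```

The price is that callers can never prove they handled every case, which is essentially the untyped-throws trade-off again.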
I don't think it's strange at all--my main uses of the returned errors are
1a. yes, there was some error
1b. there was an error--throw another local error and encapsulate the caught error
2. treat result of throwing call as `nil` and handle appropriately
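Case 2 maps directly onto Result::ok() in Rust; a minimal sketch (the environment variable name is made up):

```rust
fn main() {
    // Don't care why it failed: a missing variable and an unparsable value
    // both collapse to None, and we fall back to a default.
    let port: u16 = std::env::var("PORT")
        .ok()                             // Result -> Option, error discarded
        .and_then(|s| s.parse().ok())     // parse failure also becomes None
        .unwrap_or(8080);
    println!("listening on port {port}");
}
```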
I don't think typed throws add anything to the language. I think they will result in people wasting time pondering error types and building large error handling machines :)
When I used Java, I found typed exceptions difficult to reason about and handle correctly.
I find floating point NaN != NaN quite annoying. But this is not related to Rust: it affects all programming languages that support floating point. All libraries that want to support ordering for floating point need to handle this special case, that is, all sort algorithms, hash table implementations, etc. Maybe it would cause fewer issues if NaN didn't exist, or if NaN == NaN. At least, it would be much easier to understand and more consistent with other types.
You have a strongly ordered `NotNan` struct that wraps a float that's guaranteed not to be NaN, and an `OrderedFloat` that considers all NaNs equal, and greater than non-NaN values.
These are basically the special-cases you'd need to handle yourself anyway, and probably one of the approaches you'd end up taking.
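(That sounds like the NotNan/OrderedFloat types from Rust's ordered-float crate.) The standard library also has f64::total_cmp, which gives NaN a fixed place in the order; a minimal sketch:

```rust
fn main() {
    let mut xs = vec![3.0_f64, f64::NAN, 1.0, -0.0, 0.0, 2.0];

    // sort_by(|a, b| a.partial_cmp(b).unwrap()) would panic here, because NaN
    // compares as "unordered" against everything, including itself.
    // total_cmp implements IEEE 754 totalOrder: -0.0 sorts before +0.0,
    // and (positive) NaN sorts after +infinity.
    xs.sort_by(|a, b| a.total_cmp(b));

    println!("{xs:?}"); // [-0.0, 0.0, 1.0, 2.0, 3.0, NaN]
}
```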
I agree. In my opinion NaNs were a big mistake in the IEEE 754 spec. Not only do they introduce a lot of special casing, they also consume a relatively big chunk of all values in 32-bit floats (~0.4%).
I am not saying we do not need NaNs (I would even love to see them in integers, see: https://news.ycombinator.com/item?id=45174074), but I would prefer if we had less of them in floats with clear sorting rules.
I wonder if "any code that would create a NaN would error" would suffice here. I don't think it makes sense when you actually start to implement it, but I do feel like making a NaN error would be helpful. Why would you want to handle an NaN?
If you don't handle NaN values, and there are NaNs in the real observations (for example from sensors that sometimes return NaN or outliers), then the sort order is indeterminate regardless of whether NaN == NaN: if multiple records share NaN as their key value, the key alone carries no information to order them, so there is no meaningful partial or total ordering among them.
How should an algorithm specify that it should sort by insertion order instead of memory address order if the sort key is NaN for multiple records?
That's the default in SQL Relational Algebra IIRC?
Well, each programming language has a "sort" method that sorts arrays. Should this method throw an exception in case of NaN? I think the NaN rules were the wrong decision. Because of these rules, everywhere there are floating point numbers, libraries have to have special code for NaN, even if they don't care about NaN. Otherwise there might be ugly bugs, like sorting running into endless loops, data loss, etc. But well, it can't be changed now.
The best description of the decision is probably [1], where Stephen Canon (former member of the IEEE-754 committee if I understand correctly) explains the reasoning.
Were the procedures for handling Null and Null pointers well defined even for C in 1985 when IEEE-754 was standardized?
There's probably no good way to standardize how to fill when values are null or nan. How else could this be solved without adding special cases for NaN?
In a language with type annotations we indicate whether a type is Optional:
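A minimal sketch of that idea in Rust, using Option rather than NaN to mark a value that may be missing:

```rust
// The type states explicitly that a reading can be absent.
fn average(readings: &[Option<f64>]) -> Option<f64> {
    // Skip missing readings instead of letting a NaN poison the sum.
    let present: Vec<f64> = readings.iter().flatten().copied().collect();
    if present.is_empty() {
        None
    } else {
        Some(present.iter().sum::<f64>() / present.len() as f64)
    }
}

fn main() {
    let readings = [Some(1.0), None, Some(2.0)];
    println!("{:?}", average(&readings)); // Some(1.5)
}
```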
Well floating point operations never throw an exception, which I kind of like, personally. I would rather go in the opposite direction and change integer division by zero to return MAX / MIN / 0.
But NaN could be defined to be smaller or higher than any other value.
Well, there are multiple NaNs. And NaN isn't actually the only weirdness; there's also -0, and we have -0 == 0. I think equality for floating point is weird anyway, so why not just define -0 < 0?
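A few of those corner cases spelled out (Rust syntax, but the semantics are IEEE 754 and language-independent):

```rust
use std::cmp::Ordering;

fn main() {
    // IEEE 754 equality: the two zeros are equal, NaN is equal to nothing.
    assert!(-0.0_f64 == 0.0_f64);
    assert!(f64::NAN != f64::NAN);

    // There really are many NaNs: any all-ones exponent with a nonzero mantissa.
    let another_nan = f64::from_bits(0x7ff8_0000_0000_0001);
    assert!(another_nan.is_nan());

    // The totalOrder predicate does distinguish the zeros: -0.0 sorts before +0.0.
    assert_eq!((-0.0_f64).total_cmp(&0.0), Ordering::Less);

    println!("all assertions passed");
}
```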
I don't think it's an odd statement. It's not about segfaults, but about use-after-free (and similar) bugs, which don't crash in C, but do crash in Fil-C. With Fil-C, if there is such a bug, it will crash, but if the density of such bugs is low enough, that is tolerable: it will just crash the program, but will not cause an expensive and urgent CVE ticket. The bug itself may still need to be fixed.
The paragraph refers to detecting such bugs during compilation versus crashing at runtime. The "almost all programs have paths that crash" means all programs have a few bugs that can cause crashes, and that's true. Professional coders do not attempt to write 100% bug-free code, as that wouldn't be an efficient use of their time. Now the question is: should professional coders convert the (existing) C code to e.g. Rust (where the compiler likely detects the bug), or should they use Fil-C, and so save the time needed to convert the code?
Doesn't Fil-C use a garbage collector to address use-after-free? For a real use-after-free to be possible there must be some valid pointer to the freed allocation, in which case the GC just keeps it around and there's no overt crash.
Yes, Fil-C uses some kind of garbage collector. But it can still detect use-after-free: In the 'free' call, the object is marked as free. In the garbage collection (in the mark phase), if a reference is detected to an object that was freed, then the program panics. Sure, it is also possible to simply ignore the 'free' call - in which case you "just" have a memory leak. I don't think that's what Fil-C does by default however. (This would be more like the behavior of the Boehm GC library for C, if I understand correctly.)
Ok, you are right. My point is: yes, it is possible to panic on use-after-free with Fil-C. With Fil-C, a live reference to a freed object can be detected.
I'm not sure what you mean. Do you mean there is a bug _in the garbage collection algorithm_ if the object is not freed in the very next garbage collection cycle? Well, it depends: the garbage collector could defer collection of some objects until memory is low. Multi-generational garbage collection algorithms often do this.
> it will just crash the program, but will not cause an expensive and urgent CVE ticket.
Unfortunately, security hysteria also treats any crash as "an expensive and urgent CVE ticket". See, for instance, ReDoS, where auditors will force you to update a dependency even if there's no way for a user to provide the vulnerable input (for instance, it's fixed in the configuration file).
I agree security issues are often hyped nowadays. I think this is often due to two factors: (A) security researchers get more money if they can convince people a CVE is worse. So of course they make it sound extremely bad. (B) security "review" teams in software companies do the least amount of work, so it's just a binary "is a dependency with a vulnerability used, yes/no", and then they force the engineering team to update the dependency, even though it's useless. I have seen (and was involved in) a number of such cases. This wastes a lot of time. Long term, this can mean the engineering team will try to reduce the dependencies, which is not the worst of outcomes.
Yes, safety got more important, and it's great to support old C code in a safe way. The performance drop and especially the GC of Fil-C do limit its usage, however. I read there are some ideas for Fil-C without GC; I would love to hear more about that!
But all existing programming languages seem to have some disadvantage: C is fast but unsafe. Fil-C is C compatible but requires GC, more memory, and is slower. Rust is fast and uses little memory, but is verbose and hard to use (borrow checker). Python, Java, C#, etc. are easy to use and concise, but, like Fil-C, require tracing GC and so more memory, and are slower.
I think the 'perfect' language would be as concise as Python, statically typed, and not require tracing GC (like Swift, it could use reference counting), while supporting some kind of borrow checker like Rust for the most performance-critical sections. And it should leverage the C ecosystem, by transpiling to C, so it would run on almost all existing hardware and could even be used in the kernel.
These might all be slower than well written C or Rust, but they're not nearly the same magnitude of slow. Java is often within an order of magnitude of C/C++ in practice, and threading is less of a pain. Python can easily be 100x slower, and until very recently, threading wasn't even an option for using more CPU due to the GIL, so you needed extra complexity to deal with that.
There's also Golang, which is in the same ballpark as Java and C.
You are right, languages with tracing GC are fast. Often, they are faster than C or Rust, if you measure peak performance of a micro-benchmark that does a lot of memory management. But that is only true if you just measure the speed of the main thread :-) Tracing garbage collection does most of the work in separate threads, and so is often not visible in benchmarks. Memory usage is also not easily visible, but languages with tracing GC need about twice the amount of memory of e.g. C or Rust. (When using an arena allocator in C, you can get faster still, at the cost of memory usage.)
Yes, Python is especially slow, but I think that's probably more because it's dynamically typed and not compiled. I found PyPy is quite fast.
I've built high load services in Java. GC can be an issue if it gets bad enough to have to pause, but it's in no way a big performance drain regularly.
pypy is fast compared to plain python, but it's not remotely in the same ballpark as C, Java, Golang
Sure, it's not a big performance drain. For the vast majority of software, it is fine. Usually, the ability to write programs more quickly in e.g. Java (not having to care about memory management) outweighs the possible gain of Rust, which can reduce memory usage and total energy usage (because no background threads are needed for GC). I also write most software in Java. Right now, the ergonomic cost of languages that don't require tracing GC is just too high. But I don't think this is a law of nature; it's just that there are no better languages yet that don't require a tracing GC. The closest is probably Swift, from a memory / energy usage perspective, but it has other issues.
Surprisingly, Java is right behind manually memory-managed languages in terms of energy use, due to its GC being so efficient. It turns out that if your GC can "sprint very fast", you can postpone running it till the last second, and memory draws the same power no matter what kind of garbage it holds. Also, just "booking" that a region is now garbage, without doing any work, is cheaper than calling a potentially long chain of destructors or incrementing/decrementing counters.
In most cases the later entries in a language for the benchmark game are increasingly hyper-optimized and non-idiomatic for that language, which is exactly where C# will say "Here's some dangerous features, be careful" and the other languages are likely to suggest you use a bare metal language instead.
Presumably the benchmark game doesn't allow "I wrote this code in C" as a Python submission, but it would allow unsafe C# tricks ?
Note: Here are naive un-optimised single-thread programs transliterated line-by-line literal style into different programming languages from the same original.
Unsafe C# is still C# though. Also C# has a lot more control over memory than Java for example, so you don't actually need to use unsafe to be fast. Or are you trying to say that C# is only fast when using unsafe?
Likely just that the fastest implementations in the benchmarks game are using those features and so aren't really a good reflection of the language as it is normally used. This is a problem for any language on the list, really; the fastest implementations are probably not going to reflect idiomatic coding practices.
Here are naive un-optimised single-thread programs transliterated line-by-line literal style into different programming languages from the same original.
> The performance drop and specially the GC of Fil-C do limit the usage however. I read there are some ideas for Fil-C without GC; I would love to hear more about that!
I love how people assume that the GC is the reason for Fil-C being slower than C and that somehow, if it didn't have a GC, it wouldn't be slower.
Well I didn't mean GC is the reason for Fil-C being slower. I mean the performance drop of Fil-C (as described in the article) limits the usage, and the GC (independently) limits the usage.
I understand the raw speed (of the main thread) of Fil-C can be faster with tracing GC than without. But I think there's a limit on how fast and memory efficient Fil-C can get, given it necessarily has to do a lot of things at runtime versus compile time. Energy usage and memory usage of a programming language that uses a tracing GC are higher than of one without, at least if the memory management logic can be done at compile time.
For Fil-C, a lot of the memory management logic, and checks, necessarily need to happen at runtime. Unless the code is annotated somehow, but then it wouldn't be pure C any longer.
I wonder if some of the Apple provided Clang annotations for bounds checking can be combined with Fil-C?
That then may allow for some of the uses to be statically optimised away, i.e. by annotating pointers upon which arithmetic is not allowed.
The Fil-C capability mechanisms for trapping double-free and use-after-free would probably have to be retained, but maybe it could optimise some uses?
Nim fits most of those descriptors, and it’s become my favorite language to use. Like any language, it’s still a compromise, but it sits in a really nice spot in terms of compromises, at least IMO. Its biggest downsides are all related to its relative “obscurity” (compared to the other mentioned languages) and resulting small ecosystem.
The advantage of Fil-C is that it's C, not some other language. For the problem domain it's most suited to, you'd do C/C++, some other ultra-modern memory-safe C/C++ system, or Rust.
I agree. Nim is memory safe, concise, and fast. In my view, Nim lacks a very clear memory management strategy: it supports ARC, ORC, manual (unsafe) allocation, and move semantics. Maybe supporting fewer options would be better? Usually, adding things that are lacking is easier than removing features, especially if the community is small and you don't want to alienate too many people.
Yes, they might lose the meaningless benchmarks game that gets thrown around; what matters is whether they are fast enough for the problem that is being solved.
If everyone actually cared about performance above anything else, we wouldn't have an Electron crap crisis.
Seems like Windows is trying to address the Electron problem by adopting React Native for their WinAppSDK. RN is not just a cross-platform solution, but a framework that allows Windows to finally tap into the pool of devs used to that declarative UI paradigm. They appear to be standardizing on TypeScript, with C++ for the performance-critical native parts. They leverage the scene graph directly from WinAppSDK. By prioritizing C++ over C# for extensions and TS for the render code, they might actually hit the sweet spot.
That C++ support that WinUI team marketing keeps talking about relies on a framework that is no longer being developed.
> The reason the issues page only lets you create a bug report is because cppwinrt is in maintenance mode and no longer receiving new feature work. cppwinrt serves an important and specific role, but further feature development risks destabilizing the project. Additional helpers are regularly contributed to complimentary projects such as https://github.com/microsoft/wil/.
I don't know; I think what matters is that performance is close to the best you can reasonably get in any other language.
People don't like leaving performance on the table. It feels stupid and it lets competitors have an easy advantage.
The Electron situation is not because people don't care about performance; it's because they care more about some other things (e.g. not having to do 4x the work to get native apps).
Your second paragraph kind of contradicts the last one.
And yes, caring more about other things is why performance isn't the number one item, and most applications have long stopped being written in pure C or C++ since the early 2000's.
We go even further in several abstraction layers, nowadays with the ongoing uptake of LLMs and agentic workflows in iPaaS low code tools.
Personally at work I haven't written a pure 100% C or C++ application since 1999, always a mix of Tcl, Perl, Python, C# alongside C or C++, private projects is another matter.
Most applications stopped being written in C/C++ when Java first came out - the first memory safe language with mass enterprise adoption. Java was the Rust of the mid-1990s, even though it used a GC which made it a lot slower and clunkier than actual Rust.
I would say that the "first" belongs to Smalltalk, Visual Basic and Delphi.
What Java had going for it was the massive scale of Sun's marketing, and the JDK being available free as in beer; however, until Eclipse came to be, all IDEs were commercial, and everyone was coding in Emacs, vi (no vim yet), nano, and so on.
However it only became viable after Java 1.3, when Hotspot became part of Java's runtime.
I agree with the spirit of your comment though, and I also think the only reason the blow Java dealt to C and C++ wasn't bigger is that AOT tools were only available at high commercial prices.
Many folks use C and C++, not due to their systems programming features, rather they are the only AOT compiled languages that they know.
> And leverage the C ecosystem, by transpiling to C
I heavily doubt that this would work reliably on arbitrary C compilers, as the interpretation of the standard gets really wonky and certain constructs that should work might not even compile. Typically such things target GCC because it has such a large backend of supported architectures. But LLVM supports a large overlapping number too - that's why it's supported to build the Linux kernel under clang and why Rust can support so many microcontrollers. For Rust, that's why there's the rust codegen gcc effort, which uses GCC as the backend instead of LLVM to flesh out the supported architectures further. But generally transpilation is used as a stopgap for anything in this space, not an ultimate target, for lots of reasons, not least of which is that there are optimizations that aren't legal in C but are legal in another language, and transpilation would inhibit them.
> Rust is fast, uses little memory, but us verbose and hard to use (borrow checker).
It's weird to me: in my experience, picking up the borrow checker was about as hard as the first time I came upon list comprehensions. In essence it's something new I'd never seen before, but once I got it, it went into the background noise and is trivial to deal with most of the time, especially since the compiler infers most lifetimes anyway. Resistance to learning is different than being difficult to learn.
Well "transpiling to C" does include GCC and clang, right? Sure, trying to support _all_ C compilers is nearly impossible, and not what I mean. Quite many languages support transpiling to C (even Go and Lua), but in my view that alone is not sufficient for a C replacement in places like the Linux kernel: for this to work, tracing GC can not be used. And this is what prevents Fil-C and many other languages to be used in that area.
Rust borrow checker: the problem I see is not so much that it's hard to learn, but that it requires constant effort. In Rust, you are basically forced to use it, even if the code is not performance critical. Sure, Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python. The main disadvantage of Rust, in my view, is that it's verbose. (Also, there is a tendency to add too many features, similar to C++, but that's a secondary concern.)
> Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python. The main disadvantage of Rust, in my view, is that it's verbose.
I think there's space for Rust to become more ergonomic, but its goals limit just how far it can go. At the same time I think there's space to take Rust and make a Rust# that goes further toward the Swift/Scala end of the spectrum, where things like auto-cloning of references are implemented first, and that can consume Rust libraries. From the organizational point of view, you can see it as a mix between nightly and editions. From a user's point of view you can look at it as a mode to make refactoring faster, onboarding easier, and a test bed for language evolution. Not being Rust itself, it would also allow for different stability guarantees (you can have breaking changes every year), which also means you can be bolder in trying things out, knowing you're not permanently stuck with them. People who care about performance, correctness and reuse can still use Rust. People who would be well served by Swift/Scala have access to Rust's libraries and toolchain.
> (Also, there is a tendency to add too many features, similar to C++, but that's a secondary concern).
These two quoted sentiments seem contradictory: making Rust less verbose to interact with reference counted values would indeed be adding a feature.
Someone, maybe Tolnay?, recently posted a short Go snippet that segfaults because the virtual function table pointer and data pointer aren't copied atomically or mutexed. The same thing works in Swift, because neither is thread safe. Swift is also slower than Go unless you pass unchecked, making it even less safe than Go. C#/F# are safer from that particular problem and more performant than either Go or Swift, but have suffered from the same deserialization attacks that Java does. Right now if you want true memory and thread safety, you need to limit a GC language to zero concurrency, use a borrow checker (i.e. Rust), or be purely functional, which in production would mean Haskell. None of those are effortless, and which is easiest depends on you and your problem. Rust is easiest for me, but I keep thinking if I just write enough Haskell it will all click. I'm worried, if my brain starts working that way, about the impact on things other than writing Haskell.
Replying to myself because a vouch wasn't enough to bring the post back from the dead. They were partially right and educated me. The downvotes were unnecessary. MS did start advising against dangerous deserializers 8 years ago. They were only deprecated three years ago though, and only removed last year. Some of the remaining ones are only mostly safe, and then only if you follow best practice. So it isn't a problem entirely of the past, but it has gotten a lot better.
Unless you are writing formal proofs nothing is completely safe; GC languages had found a sweet spot until increased concurrency started uncovering thread safety problems. Rust seems to have found a sweet spot that is usable despite the grumbling. It could probably be made a bit easier. The compiler already knows when something needs to be Send or Sync, and it could just do that invisibly, but that would lead people to code in a way that has lots of locking, which is slow and generates deadlocks too often. This way the wordiness of shared mutable state steers you towards avoiding it except when a functional design pattern wouldn't be performant. If you have to use Mutex a lot in Rust, stop fighting the borrow checker and listen to what it is saying.
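A minimal Rust sketch of that wordiness, and of what it buys: shared mutable state has to be spelled out as Arc<Mutex<...>>, and handing a plain mutable reference to several threads would be rejected at compile time.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc = shared ownership across threads, Mutex = synchronized mutation.
    // Giving the same &mut i64 to several threads would not compile.
    let counter = Arc::new(Mutex::new(0_i64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // always 4000
}
```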
> C#/f# are safer from that particular problem and more performant than either go or swift, but have suffered from the same deserialization attacks that java does.
Yes. I do like Swift as a language. The main disadvantages of Swift, in my view, are: (A) The lack of an (optional) "ownership" model for memory management. So you _have_ to use reference counting everywhere. That limits the performance. This is measurable: I converted some micro-benchmarks to various languages, and Swift does suffer for the memory-management-intensive tasks [1]. (B) Swift is too Apple-centric currently. Sure, this might become a non-issue over time.
The borrow checker involves documenting the ownership of data throughout the program. That's what people are calling "overly verbose" and saying it "makes comprehensive large-scale refactoring impractical" as an argument against Rust. (And no it doesn't, it's just keeping you honest about what the refactor truly involves.)
The annoying experience with the borrow checker is when following the compiler errors after making a change until you hit a fundamental ownership problem a few levels away from the original change that precludes the change (like ending up with a self-referential borrow). This can bite even experienced developers, depending on how many layers of indirection there are (and sometimes the change that would be adding a single Rc or Cell in a field isn't applicable because it happens in a library you don't control). I do still prefer hitting that wall over having it compile and ending up with rare incorrect runtime behaviour (with any luck, a segfault), but it is more annoying than "it just works because the GC dealt with it for me".
There are also limits to what the borrow checker is capable of verifying. There will always be programs which are valid under the rules the borrow checker is enforcing, but the borrow checker rejects.
It's kinda annoying when you run into those. I think I've also run into a situation where the borrow checker itself wasn't the issue, but rather the way references were created in a pattern match caused the borrow checker to reject the program. That was also annoying.
Polonius hopefully arrives next year and reduces the burden here further. Partial field borrows would be huge so that something like obj.set_bar(obj.foo()) would work.
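For reference, the shape of the partial-field-borrow problem, sketched with hypothetical types: a &mut self method can't be called while one of self's fields is borrowed, even though the method only touches a different field.

```rust
struct App {
    items: Vec<String>,
    log: Vec<String>,
}

impl App {
    fn record(&mut self, msg: &str) {
        self.log.push(msg.to_string());
    }

    fn process(&mut self) {
        // Rejected today (E0502): `&self.items` holds a shared borrow of *self
        // across the loop, while `self.record(..)` wants `&mut *self`, even
        // though record() only ever touches `self.log`.
        //
        // for item in &self.items {
        //     self.record(item);
        // }

        // Workaround: split the borrow manually so the fields are disjoint.
        let items = std::mem::take(&mut self.items);
        for item in &items {
            self.record(item);
        }
        self.items = items;
    }
}

fn main() {
    let mut app = App { items: vec!["a".into(), "b".into()], log: vec![] };
    app.process();
    println!("{:?}", app.log); // ["a", "b"]
}
```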
Given the troubles with shipping Polonius, I imagine that there isn't much more room for improvements in "pure borrow checking" after Polonius, though more precise ways to borrow should improve ergonomics a lot more. You mentioned borrowing just the field; I think self-referential borrows are another.
The borrow checker is an approximation of an ideal model of managing things. In the general case, the guidelines that the borrow checker establishes are a useful way to structure code (though not necessarily the only way), but sometimes the borrow checker simply doesn't accept code that is logically sound. Rust is statically analyzed with an emphasis on safety, so that is the tradeoff made for Rust.
> Quite many languages support transpiling to C (even Go and Lua)
Source? I'm not familiar with official efforts here. I see one in the community for Lua but nothing for Go. It's rare for languages to use this as anything other than a stopgap or a neat community PoC. But my point was precisely this - if you're only targeting GCC/LLVM, you can just use their backend directly rather than transpiling to C, which only buys you some development velocity at the beginning (as in, it's easier to generate C from your frontend than the intermediate representation) at the cost of a worse binary output (since you have to encode the language semantics on top of the C abstract machine, which isn't necessarily free). Specifically this is why transpile-to-C makes no sense for Rust - it's already got all the infrastructure to call the compiler internals directly without having to go through the C frontend.
> Rust borrow checker: the problem I see is not so much that it's hard to learn, but requires constant effort. In Rust, you are basically forced to use it, even if the code is not performance critical
You're only forced to use it when you're storing references within a struct. In like 99% of all other cases the compiler will correctly infer the lifetimes for you. Not sure when the last time was you tried to write Rust code.
> Sure, Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python.
Any language targeting the performance envelope rust does needs GC to be opt in. And I’m not sure how much extra verbosity there is to wrap the type with RC/Arc unless you’re referring to the need to throw in a RefCell/Mutex to support in place mutation as well, but that goes back to there not being an alternative easy way to simultaneously have safety and no runtime overhead.
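To make the verbosity trade concrete, here is roughly what opting into shared, mutable, reference-counted data looks like (a sketch; Arc<Mutex<...>> is the thread-safe spelling):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership (Rc) and interior mutability (RefCell) both have to be
    // spelled out; in Python this would just be two names bound to one list.
    let scores: Rc<RefCell<Vec<i32>>> = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&scores);

    alias.borrow_mut().push(4);

    println!("{:?}", scores.borrow()); // [1, 2, 3, 4]
}
```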
> The main disadvantage of Rust, in my view, is that it's verbose.
Sure, but compared to what? It's actually a lot more concise than C/C++ if you consider how much boilerplate dancing there is with header files and compilation units. And if you start factoring in that few people actually seem to know what the rule of 0 is and how to write exception safe code, there's drastically less verbosity and the verbosity is impossible to use incorrectly. Compared to Python, sure, but then go use something like otterlang [1] which gives you close to Rust performance with a syntax closer to Python. But again, it's a different point on the Pareto frontier - there's no one language that could rule them all because they're orthogonal design criteria that conflict with each other. And no one has figured out how to have a cohesive GC that transparently and progressively lets you go between no GC, ref-counting GC and tracing GC, despite foundational research a few years back showing that ref-counting GC and tracing GC are part of the same spectrum and that high-performing implementations of both tend to converge on the same set of techniques.
I agree transpile to C will not result in the fastest code (and of course not the fastest toolchain), but having the ability to convert to C does help in some cases. Besides the ability to support some more obscure targets, I found it's useful for building a language, for unit tests [1]. One of the targets, in my case, is the XCC C compiler, which can run in WASM and convert to WASM, and so I built the playground for my language using that.
> transpiling to C (even Go and Lua)
Go: I'm sorry, I thought TinyGo internally converts to C, but it turns out that's not true (any more?). That leaves https://github.com/opd-ai/go2c which uses TinyGo and then converts the LLVM IR to C. So, I'm mistaken, sorry.
> Your only forced to use it when you’re storing references within a struct.
Well, that's quite often, in my view.
> Not sure when the last time was you tried to write rust code.
I'm not a regular user, that's true [2]. But I do have some knowledge in quite many languages now [3] and so I think I have a reasonable understanding of the advantages and disadvantages of Rust as well.
> Any language targeting the performance envelope rust does needs GC to be opt in.
Yes, I fully agree. I just think that Rust has the wrong default: it uses single ownership / borrowing by _default_, and RC/Arc is more like an exception. I think most programs could use RC/Arc by default, and only use ownership / borrowing where performance is critical.
> The main disadvantage of Rust, in my view, is that it's verbose.
>> Sure, but compared to what?
Compared to most languages, actually [4]. Rust is similar to Java and Zig in this regard. Sure, we can argue the use case of Rust is different than eg. Python.
> I'm not a regular user, that's true [2]. But I do have some knowledge in quite many languages now [3] and so I think I have a reasonable understanding of the advantages and disadvantages of Rust as well.
That is skewing your perception. The problem is that how you write code just changes after a while and both things happen: you know how to write things to leverage the compiler inferred lifetimes better and the lifetimes fade into the noise. It only seems really annoying, difficult and verbose at first which is what can skew your perception if you don’t actually commit to writing a lot of code and reading others’ code so that you become familiar with it better.
> Compared to most languages, actually [4]. Rust is similar to Java and Zig in this regard. Sure, we can argue the use case of Rust is different than eg. Python.
That these are the languages you're comparing it to is a point in Rust's favor - it's targeting a significantly lower level and higher performance of language. So Java is not comparable at all. Zig, however nice, is fundamentally not a safe language (more like C with fewer razor blades) and is inappropriate from that perspective. Like I said - it fits a completely different Pareto frontier - it's strictly better than C/C++ on every front (even with the borrow checker it's faster and less painful development) and people are considering it in the same breath as Go (also unsafe and not as fast), Java (safe but not as fast) and Python (very concise but super slow and code is often low quality historically).
There are surprisingly many languages that support transpiling to C: Python (via Cython), Go (via TinyGo), Lua (via eLua), Nim, Zig, Vlang. The main advantage (in my view) is to support embedded systems, which might not match your use case.
Pypy is great for performance. I'm writing my own programming language (that transpiles to C) and for this purpose converted a few benchmarks to some popular languages (C, Java, Rust, Swift, Python, Go, Nim, Zig, V). Most languages have similar performance, except for Python, which is about 50 times slower [1]. But with PyPy, performance is much better. I don't know the limitations of PyPy because these algorithms are very simple.
But even though Python is very slow, it is still very popular. So the language itself must be very good in my view, otherwise fewer people would use it.
Yes, there are quite many real-world cases of this architecture. But wouldn't it be better if the same language (more or less) could be used for both? I don't think such a language exists currently, but I think it would be a nice goal.
What would be the point of the second language if it was basically the same language as the first?
In general you want a solid engineering language with a focus on correctness, and a second "scripting" language focusing on quick development.
Second, if the languages are very similar but not the same, it seems like you would see "confusion" errors where people accidentally use lang 1 in lang 2's context or vice versa.
Interesting, I was not aware of OxCaml. (If this is what you mean.) It does seem to tick a few boxes actually. For my taste, the syntax is not as concise / clean as Python, and for me, it is "too functional". It shares with Python (and many high-level languages) the tracing garbage collection, but maybe that is the price to pay for an easy-to-use memory safe language.
In my view, a "better" language would be a simple language as concise as Python, but fully typed (via type inference); memory safe, but without the need of tracing GC. I think memory management should be a mix of Swift and Rust (that is, a mix of reference counting and single ownership with borrowing where speed is needed).
I fully agree. The challenge is, some will want to use the latest languages and technologies because they want to learn them (personal development, meaning: the next job). Sometimes the "new thing" can be limited to (non-critical) testing and utilities. But having many languages and technologies just increases the friction, complicates things, and prevents refactoring. Even mixing just scripts with regular languages is a problem; calling one language from another is similar. The same with unnecessary remote APIs. Fewer technologies is often better, even if the technologies are not the best (eg. using PostgreSQL for features like fulltext search, event processing, etc.)
This is a bit related to external dependencies vs build yourself (AKA reinvent the wheel). Quite often the external library, long term, causes more issues than building it yourself (assuming you _can_ build a competent implementation).
> This is a bit related to external dependencies vs build yourself (AKA reinvent the wheel). Quite often the external library, long term, causes more issues than building it yourself (assuming you _can_ build a competent implementation).
I feel like this happens mostly because simpler is better, and most of these dependencies don’t follow a good “UNIX” philosophy of modularity, being generic etc. something that you’d notice the standard libraries try to achieve.
Most of these third party dependencies are just a very specific feature that starts to add more use cases until it becomes bloated to support multiple users with slightly different needs.
Yep, true in my experience as well. And in the age of LLMs it is not so difficult to ask it to extract just this or that piece of functionality into another package but with a different API. So these days it's even easier to roll your own stuff. It's not such a huge time sink as it sometimes was before.
What one considers a "good bread" or "good bakery" depends on the person. I'm from Switzerland. When I was in the United States (Bay Area, San Francisco), in 2000-2003, I did _not_ find what I consider a "good bread". I did find bakeries.
I mean, in San Francisco, you’ll find plenty of good bread and pastries, it’s the only mid size city in the US that has enough French people to have two competing French language schools for kiddos.
The author of Fil-C does have some ideas to avoid a garbage collector [1], in summary: Use-after-free at worst means you might see an object of the same size, but you can not corrupt data structures (no pointer / integer confusion). This would be more secure than standard C, but less secure than Fil-C with GC.
I agree. The main advantage of Fil-C is compatibility with C, in a secure way. The disadvantages are speed, and garbage collection. (Even though, I read that garbage collection might not be needed in some cases; I would be very interested in knowing more details.)
For new code, I would not use Fil-C. For kernel and low-level tools, other languages seem better. Right now, Rust is the only popular language in this space that doesn't have these disadvantages. But in my view, Rust also has issues, especially the borrow checker and code verbosity. Maybe in the future there will be a language that resolves these issues as well (as a hobby, I'm trying to build such a language). But right now, Rust seems to be the best choice for the kernel (for code that needs to be fast and secure).
How does that compare with Rust? You don't happen to have an example of a binary underway moving to Rust in Ubuntu-land as well? Curious to see, as I honestly don't know whether Rust is nimble like C or not.
My impression is - Rust fares a bit better on RAM footprint, and about as badly on disk binary size. It's darn hard to compare apples-to-apples, though - given it's a different language, so everything is a rewrite. One example:
Ubuntu 25.10's rust "coreutils" multicall binary: 10828088 bytes on disk, 7396 KB in RAM while doing "sleep".
Alpine 3.22's GNU "coreutils" multicall binary: 1057280 bytes on disk, 2320 KB in RAM while doing "sleep".
I don't have numbers, but Rust is also terrible for binary size. Large Rust binaries can be improved with various efforts, but it's not friendly by default. Rust focuses on runtime performance, high-level programming, and compile-time guarantees, but compile times and binary sizes are the drawback. Notably, Rust prefers static linking.
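For what it's worth, the usual knobs for shrinking Rust release binaries are all opt-in profile settings; a commonly used Cargo.toml sketch (the defaults favour speed and debuggability instead):

```toml
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # whole-program link-time optimization
codegen-units = 1   # better optimization at the cost of compile time
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the final binary
```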