Exactly. And I’m one of those that uses Firefox sync, and prefers all the things Firefox comes with, including the developer tools. The only thing it lacks is the integrated Google Lighthouse reporting.
Of course, if your program compiles, that doesn't mean the logic is correct. However, if your program compiles _and_ the logic is correct, there's a high likelihood that your program won't crash (provided you handle errors properly: you cannot trust data coming from outside, you cannot assume allocations always succeed, etc.). In Rust's case, this means the compiler is much more restrictive, exhaustive and pedantic than compilers for languages like C and C++.
In those languages, correct logic and getting the program to compile doesn't guarantee you are free from data races or segmentation faults.
Also, Rust's type system being so strong, it allows you to encode so many invariants that it makes implementing the correct logic easier (although not simpler).
> However, if your program compiles _and_ the logic is correct, there's a high likelihood that your program won't crash (provided you handle errors and such, you cannot trust data coming from outside, allocations to always work, etc).
That is one hell of a copium disclaimer. "If you hold it right..."
Rust certainly doesn't make it impossible to write bad code. What it does do is nudge you towards writing good code to a noticeably appreciable degree, which is laudable compared to the state of the industry at large.
I feel like you're attacking a strawman here. Of course you can write unreliable software in Rust. I'm not aware of anyone who says you can't. The point is not that it's a magic talisman that makes your software good, the point is that it helps you to make your software good in ways other languages (in particular C/C++ which are the primary point of comparison for Rust) do not. That's all.
> The point is not that it's a magic talisman that makes your software good, the point is that it helps you to make your software good in ways other languages (in particular C/C++ which are the primary point of comparison for Rust) do not.
>In those languages, correct logic and getting the program to compile doesn't guarantee you are free from data races or segmentation faults.
I don't believe that it's guaranteed in Rust either, despite much marketing to the contrary. It just doesn't sound appealing to say "somewhat reduces many common problems" lol
>Also, Rust's type system being so strong, it allows you to encode so many invariants that it makes implementing the correct logic easier (although not simpler).
C++ has a strong type system too, probably fancier than Rust's or at least similar. Most people do not want to write complex type system constraints. I'm guessing that at most 25% of C++ codebases use complex templates with recursive templates, traits, concepts, `requires`, etc.
Comparing type systems is difficult, but the general experience is that it is significantly easier to encode logic invariants in Rust than in C++.
Some of those things you can do in C++, often with a wild amount of boilerplate (tagged unions, niches, etc.), and some are fundamentally impossible (movable non-null owning references).
C++ templates are more powerful than Rust generics, but the available tools in Rust are more sophisticated.
Note that while C++ templates are more powerful than Rust generics at being able to express different patterns of code, Rust generics are better at producing useful error messages. To me, personally, good error messages are the most fundamental part of a compiler frontend.
True but you lose out on much of the functionality of templates, right? Also you only get errors when instantiating concretely, rather than getting errors within the template definition.
No, concepts interoperate with templates. I guess if you consider duck typing to be a feature, then using concepts can put constraints on that, but that is literally the purpose of them and nobody makes you use them.
If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later? This behavior is in fact used to decide between alternative template specializations for the same template. Concepts do it better in some ways.
> If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later?
Just because you aren't instantiating a template a particular way doesn't mean no one else is.
A big concern here would be accidentally depending on something that isn't declared in the concept, which can result in a downstream consumer who otherwise satisfies the concept being unable to use the template. You also don't get nicer error messages in these cases since as far as concepts are concerned nothing is wrong.
It's a tradeoff, as usual. You get more flexibility but get fewer guarantees in return.
Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
>Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.
What I meant is, if the thing is not instantiated then it is not used. Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that. Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to. But that's not a problem with the language.
> Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled. IIRC Swift takes advantage of this (polymorphic generics by default with optional monomorphization) and the Rust devs are also looking into it (albeit the other way around).
> Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that.
I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
> Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to.
Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.
>I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
The actual effects depend on a lot of things. I'm just saying, it seems contrived to me, and the most likely outcome of this type of broken template is failed compilation.
>As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled.
This is incompatible with how C++ templates work. There are methods to separately compile much of a template. If concepts could be made into concrete classes and used without direct inheritance, it might work. But this would require runtime concepts checking I think. I've never tried to dynamic_cast to a concepts type, but that would essentially be required to do it well. In practice, you can still do this without concepts by making mixins and concrete classes. It kinda sucks to have to use more inheritance sometimes, but I think one can easily design a program to avoid these problems.
>I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
This sounds wrong to me. Template parameters plus template code actually turns into real code. Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable". No language I can dream of that has generics could do any different.
>Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.
I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.
Sure. Contrivance is in the eye of the beholder for this kind of thing, I think.
> and the most likely outcome of this type of broken template is failed compilation.
I don't think that was ever in question? It's "just" a matter of when/where said failure occurs.
> This is incompatible with how C++ templates work.
Right, hence "tangentially related". I didn't mean to imply that the aside is applicable to C++ templates, even if it could hypothetically be. Just thought it was a neat capability.
> This sounds wrong to me.
Wrong how? Definition checking was undeniably part of the original C++0x concepts proposal [0]. As for some reasons for its later removal, from Stroustrup [1]:
> [W]e very deliberately decided not to include [template definition checking using concepts] in the initial concept design:
> [Snip of other points weighing against adding definition checking]
> By checking definitions, we would complicate transition from older, unconstrained code to concept-based templates.
> [Snip of one more point]
> The last two points are crucial:
> A typical template calls other templates in its implementation. Unless a template using concepts can call a template from a library that does not, a library with the concepts cannot use an older library before that library has been modernized. That’s a serious problem, especially when the two libraries are developed, maintained, and used by more than one organization. Gradual adoption of concepts is essential in many code bases.
And Andrew Sutton [2]:
> The design for C++20 is the full design. Part of that design was to ensure that definition checking could be added later, which we did. There was never a guarantee that definition checking would be added later.
> To do that, you would need to bring a paper to EWG and convince that group that it's the right thing to do, despite all the ways it's going to break existing code, hurt migration to constrained templates, and make generic programming even more difficult.
I probably could have used a more precise term than "backwards compatibility", to be fair.
> Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable".
I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.
> No language I can dream of that has generics could do any different.
I've mentioned Swift and Rust already as languages with generics and definition-site checking. C# is another example, I believe. Do those not count?
> I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.
My apologies for the misdirected focus.
In any case, that type of error might be "new" in the context of the conversation so far, but it's not "new" in the PL sense since that's basically Rice's theorem in a nutshell. No real way around it beyond lifting semantics into syntax, which of course comes with its own tradeoffs.
That is all very good information. I don't often get into the standards and discussions about the stuff. Maybe ChatGPT or something can help me find interesting topics like this one but it hasn't come up so much for me yet.
>I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.
I see now. I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime. For example, if the template parameter type is a concept, your type could be automatically hooked up as if it was just a normal class and you inherited from it. If we had reflection, I think this could also be worked out at compile time somehow. But I'm not very up to speed with what has been tried in this space. I'm guessing that concept definitions can be very extensive and also depend on complex expressions. That sounds hairy compared to what could be done without concepts, for example with an abstract class.
> I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime.
The bit of my comment you quoted was just talking about definition checking. Separate compilation of templates is a distinct concern and would be an entirely new can of worms. I'm not sure if separate compilation of templates as they currently are is possible at all; at least off the top of my head there would need to be some kind of tradeoff/restriction added (opting into runtime polymorphism, restricting the types that can be used for instantiation, etc.).
I think both definition checking and separate compilation would be interesting to explore, but I suspect backwards compat and/or migration difficulties would make it hard, if not impossible, to add either feature to standard C++.
> For example, if the template parameter type is a concept, your type could be automatically hooked up as if it was just a normal class and you inherited from it.
Sounds a bit like `dyn Trait` from Rust or one of the myriad type erasure polymorphism libraries in C++ (Folly.Poly [0], Proxy [1], etc.). Not saying those are precisely on point, though; just thought some of the ideas were similar.
> but you lose out on much of the functionality of templates, right?
I don't think so? From my understanding what you can do with concepts isn't much different from what you can do with SFINAE. It (primarily?) just allows for friendlier diagnostics further up in the call chain.
You're right but concepts do more than SFINAE, and with much less code. Concept matching is also interesting. There is a notion of the most specific concept that matches a given instantiation. The most specific concept wins, of course.
I don't agree that Rust tools are more sophisticated and they definitely are not more abundant. You just have a language that is more anal up front. C++ has many different compilers, analyzers, debuggers, linting tools, leak detectors, profilers, etc. It turns out that 40 years of use leads to significant development that is hard to rebuild from scratch.
I seem to have struck a nerve with my post, which got 4 downvotes so far. Just for saying Rust is not actually better than C++ in this one regard lol.
They have to adhere to their marketing words and numbers, like an "efficiency increase of 99999% in performance per dollar per token per watt per U-235 atom used".
So, people start pretty early with rafts. A raft isn't a boat it's just a collection of stuff which floats ie is buoyant - so, with care, you can board the raft and cross a stretch of water without swimming, which is convenient. Boats incrementally improve on this by having a distinct "inside" of the boat which needn't be buoyant, separated from the outside by waterproofing. A canoe or a coracle would be examples of boats you can easily invent once you've seen rafts.
Most easy-to-invent types of boat are great if there are no waves. On a river there are basically never waves (yes, rapids exist; no, they're not common).
However at sea waves are commonplace. Situations where waves are minimal are extremely rare, usually occurring seasonally, when tides are smaller than usual and weather is calm. Sea Lion (the never attempted German invasion of mainland Britain) was predicated on absolutely calm sea because it would have used towed river barges to land troops. If there's a moderate sea but you green light the operation anyway, all your infantry drown and you've just lost the war immediately.
To be successful at sea you want even more buoyancy, to put the top of the waterproof outer parts of the boat above the waves, and you probably also want a keel, rather than having the vessel's bottom flat and sort of resting on the water which won't work well with waves. None of this is impossible, or even especially difficult with quite ancient technology, but it's not trivial, you definitely won't go from rafts to ocean-going freight transport in one attempt.
Yes lol. I should have a thousand points by now probably, but every time I get on a streak of telling people uncomfortable truths they knock me down like 50 points.
That’s the philosophy. Use the less constrained (but still somewhat constrained and borrow checked) unsafe to wrap/build the low level stuff, and expose a safe public API. That way you limit the exposure of human errors in unsafe code to a few key parts that can be well understood and tested.