LOL, nope. Those annotations must be part of the type system (e.g. `&mut T` in Rust) and must be checked by the compiler (the borrow checker). The language can provide escape hatches like `unsafe`, but they should be rarely used. Without it you get a fragile footgunny mess.
Just look at the utter failure of `restrict`. It was so rarely used in C that it took several years of constant nagging from Rust developers to iron out various bugs in compilers caused by it.
Does make me wonder what restrict-related bugs will be (have been?) uncovered in GCC, if any. Or whether the GCC devs saw what LLVM went through and decided to try to address any issues preemptively.
Possibly? LLVM had been around for a while as well but Rust still ended up running into aliasing-related optimizer bugs.
Now that I think about it some more, perhaps gfortran might be a differentiating factor? Not familiar enough with Fortran to guess as to how much it would exercise aliasing-related optimizations, though.
This data is publicly available to anyone in Sweden:
Your salary (well, last year's taxable income), debts/credit rating, criminal history, address, phone number, which vehicles and properties you own, and which company boards you're on.
One of organized crime's biggest income streams these days is scamming rich old folks, because it's so trivial to get all the details needed (and who to target) to be a pretty convincing banker, IRS-type agent, etc.
Some of it you have to kind of manually request at various places, but it's all available.
So data breaches aren't really that big of a deal when everything is already public.
One great way you can make things more secure is by reducing attack surface. sudo is huge and old, and has tons of functionality that almost no one uses (like --chroot). A from-scratch rewrite with a focus on the 5% of features that 99% of users use means less code to test and audit. Also a newer codebase that hasn't grown and mutated over the course of 35 years is going to be a lot more focused and easier to reason about.
I agree, but the current specification is complex too: two identical "tagged" structs are compatible, two identical "untagged" structs are not. And before C23 it was even worse, depending on whether the two structs were defined in the same file or not.
We're applying a patch over a patch over a patch... no surprise the end result looks like a patchwork!
Sure, but _Record would add even more complexity. The tag rules I changed in C23 were a step towards removing complexity, i.e. towards cleaning it up. I wasn't able to fix the untagged case because WG14 had concerns, but I think those can be addressed, making another step. It is always much harder to undo complexity than to add it.
Pick an appropriate base type (uintN_t) for a bitset, make an array of those ((K + N - 1) / N elements for K bits) and write a couple inline functions or macros to set and clear those bits.
simd doesn't make much sense as a standard feature/library for a general purpose language.
If you're doing simd it's because you're doing something particular for a particular machine and you want to leverage platform-specific instructions, so that's why intrinsics (or hell, even externally linked blobs written in asm) are the way to go, and C supports that just fine.
But sure, if all you're doing is dot products I guess you can write a standard function that will work on most simd platforms, but who cares, use a linalg library instead.
Machine code is generally much safer than C - e.g. it usually lacks undefined behaviour. If you're unsure about how a given piece of machine code behaves, it's usually sufficient to test it empirically.
Not true on RISC-V. That's full of undefined behaviour.
But anyway this is kind of off-topic. I think OutOfHere was imagining that this somehow skips the type checking and borrow checking steps which of course it doesn't.
What's all that undefined behavior? Closest I can think of is executing unsupported instructions, but you have to mess up pretty hard for that to happen, and you're not gonna get predictable behavior here anyway (and sane hardware will trap of course; and executing random memory as instructions is effectively UB on any architecture).
(there's a good bit of unpredictable behavior (RVV tail-agnostic elements, specific vsetvl result), but unpredictable behavior includes any multithreading in any architecture and even Rust (among other languages))
Fair point on CSRs, though I'd count that as a subset of unsupported/not-yet-specified instructions; pretty sure all of the "reserved"s in the spec are effectively not-yet-defined instructions too, which'll have equivalents in any architecture with encoding space left for future extensions, not at all unique to RISC-V.
But yeah, no try-running-potentially-unsupported-things-to-discover-what-is-supported; essentially a necessary property for an open ISA as there's nothing preventing a vendor from adding random custom garbage in encoding space they don't use.
Yeah I guess the difference is once an instruction/CSR has been defined in x86 or ARM the only two options are a) it doesn't exist, and b) it's that instruction.
In RISC-V it can be anything even after it has been defined.
Actually... I say that, but they do actually reserve spaces in the CSR and opcode maps specifically for custom extensions so in theory they could say it's only undefined behaviour in those spaces and then you would be able to probe in the standard spaces. Maybe.
I think they just don't want people probing though, even though IMO it's the most expedient solution most of the time. Otherwise you have to go via an OS syscall, through the firmware and ACPI tables, device tree or mconfigptr (when they eventually define that).
On getting supported extension status - there's a C API spec that could potentially become an option for an OS-agnostic way: https://github.com/riscv-non-isa/riscv-c-api-doc/blob/main/s.... libc already will want to call whatever OS thing to determine what extensions it can use for memcpy etc, so taking the results from libc is "free".
Not any different from C - a given C compiler + platform will behave completely deterministically and you can test the output and see what it does, regardless of UB or not.
> a given C compiler + platform will behave completely deterministically and you can test the output and see what it does, regardless of UB or not.
Sure[1], but that doesn't mean it's safe to publish that C code - the next version of that same compiler on that same platform might do something very different. With machine code (especially x86, with its very friendly memory model) that's unlikely.
(There are cases like unused instructions becoming used in newer revisions of a processor - but you wouldn't be using those unused instructions in the first place. Whereas it's extremely common to have C code that looks like it's doing something useful, and is doing that useful thing when compiled with a particular compiler, but is nevertheless undefined behaviour that will do something different in a future version)
[1] Build nondeterminism does exist, but it's not my main concern
CPUs get microcode updates all the time, too. Nothing is safe from bitrot unless you’re dedicated to 100% reproducible builds and build on the exact same box you’re running on. (…I’m not, for the record - but the more, the merrier.)
To fix bugs, sure. They don't generally get updates that contain new optimizations that radically break existing machine code, justifying this by saying that the existing code violated some spec.
Extremely unlikely. CPU bugs generally halt the CPU or fail to write the result or something like that. The Pentium FDIV bug where it would give a plausible but wrong result was a once in a lifetime thing.
Maybe. But when carefully investigated, the overwhelming majority of C code does in fact use undefined behaviour, and there is no practical way to verify that any given code doesn't.
Or it could be made faster because certain manual optimizations become possible.
An example would be a table of interned strings that you want to match against (say you're writing a parser).
Since standard C says thou shalt not compare pointers with < or > unless they both point into the same 'object', you are forbidden from writing the speed-of-light code:
What you describe there is UB. If you define this in the standard, you are defining a kind of runtime behavior that can never happen in a well-formed program, and the compiler does not have to make a program that encounters this behavior do anything in particular.