TinkersW's comments | Hacker News

I am surprised Intel's server chips can only do 2 AVX512 ops per cycle; that is rather sad given how long they have supported it in server chips, and I hope it isn't a sign of things to come with Nova Lake.

Nova Lake looks potentially pretty good: AVX512/APX and a very, very high core count, so maybe we will see AMD get some competition next year.

The article is simply wrong: dithering is still widely used, and no, we do not have enough color depth to avoid it. Go render a blue sky gradient without dithering and you will see obvious bands.

Yep, even high-quality 24-bit uncompressed imagery often benefits from dithering, especially if it's synthetically generated; and even natural imagery, if it's processed or manipulated - even mildly - will probably benefit from dithering. If it's a digital photograph, it was probably already dithered during the de-Bayering process.
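
Roughly what that looks like in code (a minimal sketch of the idea, not anything from the article): quantize a shallow ramp to integer codes with and without about half an LSB of noise. The plain column collapses into a few flat bands; the dithered column trades those bands for fine noise.

    // Sketch: quantizing a shallow gradient with and without dither.
    // The ramp spans only ~4 output codes, so hard rounding produces
    // visible bands; adding +/-0.5 LSB of noise before rounding breaks
    // them up into noise instead.
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const int width = 32;
        for (int x = 0; x < width; ++x) {
            double v = 100.0 + 4.0 * x / (width - 1);         // smooth ramp
            int plain = (int)std::lround(v);                   // banded result
            double noise = (double)std::rand() / RAND_MAX - 0.5;
            int dithered = (int)std::lround(v + noise);        // bands traded for noise
            std::printf("x=%2d  v=%7.3f  plain=%3d  dithered=%3d\n", x, v, plain, dithered);
        }
        return 0;
    }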

The Python example looks fixable with a reentrant mutex; no idea whether that translates to the Rust issue.
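
For what it's worth, the reentrant-mutex idea in C++ terms (just an illustration of the concept, not the article's Python code) is that the same thread can re-acquire a lock it already holds instead of deadlocking:

    // Sketch: std::recursive_mutex lets the same thread lock again,
    // which is what "reentrant" means here; with a plain std::mutex the
    // nested call to inner() below would deadlock (formally, it's UB).
    #include <iostream>
    #include <mutex>

    std::recursive_mutex m;

    void inner() {
        std::lock_guard<std::recursive_mutex> lock(m);  // second acquisition, same thread: fine
        std::cout << "inner\n";
    }

    void outer() {
        std::lock_guard<std::recursive_mutex> lock(m);  // first acquisition
        inner();
    }

    int main() { outer(); }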

Ya ever heard of this thing called a debugger? They have this amazing ability to show you what the problem is right when it happens!

Crashes can be difficult to repro, especially if they occur in rare error paths and your software distribution mechanism doesn't give you in-field telemetry. Of course even Rust isn't going to catch all issues at compile time (a lot of checks happen at runtime and result in panics), but it does seem to catch many if not most, which is very helpful. This is much like the argument for static typing.

I've used many, debugging both dumps and live processes. And I'll take a compiler that highlights the problem at build time any day.

I think memory safety is fine, but I plan to do it in C++, not Rust. Nothing in this article is remotely new either; it just repeats the same tired stuff.

It seems pretty clear that statistical hardware-level memory safety is coming (Apple has it, and Intel/AMD have a document saying they plan to add it). The safety zealots can turn that on, or you can use FilC if you need absolute memory safety, but really, C++ with various hardening features turned on is already fairly solid.

Also, I think memory safety is generally less important than thread safety (because memory safety violations are rather easy to avoid and detect), and almost all the languages recommended by this article blow chunks at thread safety. Rust could actually make a solid argument here, instead of wasting time yammering about memory safety.


Rust’s thread safety story is a subset of broader memory safety: it just guarantees that concurrent programs are still memory safe. This also happens to cover the most frequent sources of bugs in concurrent programs, but it’s not all there is to thread safety.

It’s talked about all the time; whenever people talk about memory safety, thread safety is implied. Statistical hardware memory safety is more of a security feature. But knowing that your code is correct before you even try it is a huge productivity boost, and valuable in and of itself for all sorts of reasons.

The pushback from C++ people is weird, considering that Rust makes high-performing code easier to achieve because of its rules around aliasing and its auto-optimized struct layouts, among other things. This is stuff C++ will never get.


A few days ago I took another look at Rust.

I don't like the syntax at all, and coming from a Scala background, that should say a lot. I will never like it unless they simplify it.

However, they implemented at compile time all the rules about memory that someone wrote in the C++ guidelines, or in dozens of books about how to write safe code.

Now you can say: so you see, if you are good at C++, you can achieve the same as Rust.

True. But I don't buy it. I don't believe that even hardcore C++ developers never make a memory-related mistake. I have been in the industry for a while and shit does happen, even to the best of us, unless you have "guardrails" around, and even then, so many things can go wrong...

So a few days ago I asked on HN why not use hardened allocators, which would basically eliminate the possibility of shipping such mistakes, a bit like the OpenBSD allocator.

It seems that 1) people who develop low-level stuff don't even need the heap, or anyway use a limited subset of things, and 2) such allocators slow down the runtime significantly.

So what I hear is:

1) I know what I am doing, I don't need a compiler to tell me that.

2) I only need a subset of that anyway, so... who cares.

3) runtime tooling helps me.

Except for 2 (like writing something for Arduino, or using some very specific subsets of C/C++), everything else sounds pretty hypocritical to me, and the proof is that even the very best engineers have to fix double frees and things like that.

If C++ broke the ABI and decided tomorrow to adopt memory safety at compile time, you would see that most people would use it. However, this holy war against Rust "because" is really antithetical to our industry, which should be driven by educated people who use the right tool for the job. As I said, fine for the very low-level cases/subsets, but why would you start a CLI tool today in C/C++ (if not for some very specific reason)?

For me this has more to do with fear of learning/doing something new. It makes no sense otherwise to reject a language that guarantees a random developer hasn't missed the lifetime of some object, etc. A language that is only improving, not dying anytime soon, with one of the fastest-growing ecosystems, not "one person's weekend project" or anything like that.

As I mentioned multiple times, I particularly dislike Rust's syntax. I love C syntax the most, although C++ has deviated from it significantly with templates etc. But if something else is better suited, sorry, we need to use it. If the C/C++ committees are the slowest things on the planet, why should my company/software pay for that? Everyone said Python would die after the backward-incompatible 2->3 upgrade. Sure.


I have found Rust syntax a breath of fresh air compared with C/C++, and I did C/C++ for over 15 years before I started with Rust. No arbitrary semicolons; everything is an expression, leading to more concise code; no need to worry about the interior layout and excess padding of a struct; function declarations that are trivial to read; inference all over the place; variable shadowing that's actually safe; traits that let you duck-type behavior elegantly with zero overhead; a proper macro system. No weird language rules like having to know the informal rule of 0/5, and no code that fails to manage resources in an exception-safe way.

It’s alien and unfamiliar, but I actually don’t have problems with the syntax itself. Swift has the “everything is an expression” property. Python’s list comprehensions felt weird and confusing the first time I encountered them. TypeScript (and Go) have traits after a fashion. In fact, I find Go syntax particularly ugly even though it’s “simpler”.


Efficiently implementing a doubly linked list in C or C++ is easy. In Rust, less so.[0]

And the prevalence and difficulty of unsafe mean both that Rust is not memory safe [1], and that Rust is sometimes less memory safe than C or C++.

[0]: https://rust-unofficial.github.io/too-many-lists/

[1]: For an example of memory unsafety in Rust: https://materialize.com/blog/rust-concurrency-bug-unbounded-...


You can write a linked list in Rust the same way as in C or C++ with unsafe, so it is at least as easy.

While I think it’s a troll account, it is technically true that the rules around unsafe Rust are a little harder to get exactly right to avoid UB, because you still have to uphold the much larger surface area of rules in safe Rust without any of the compiler's help. C++, by contrast, has fewer such rules and they’re easier to reason about, but of course there’s no compiler warning when you violate them.

On the other hand, that line of argument is kind of weak sauce, because the vast majority of bugs aren’t in complicated recursive data structures. And you can always implement a doubly linked list in pure safe Rust, just with slightly more overhead, by using Rc/Arc if you want the guarantees (and you can also verify the unsafe implementation using Miri, which is a significantly stronger runtime checker than anything C++ has, where you only get ASAN/UBSAN).


Did you actually create this account just to hate on Rust?

The author refers to casting ints to floats but seems to actually be talking about converting. Casting is when you change the type but don't change the data.

I don't really think much of Zig myself for other reasons, but comptime seems like a good design.


> Casting is when you change the type, but don't change the data..

Is that the case? That's not what I think of when I think of C-style casts.

    float val = 12.4;
    int val_i = (int) val;
The representation in memory of `val` should not match that of `val_i`, right? The value is encoded differently and the quantity is not preserved through this transformation. I don't think that means that the data weren't changed.

Maybe you're thinking of aliasing/type-punning? Casts in C do perform conversions as they do in C++.
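
Maybe a useful way to split the difference (a small sketch, assuming C++20 for std::bit_cast, not code from the article): a C-style cast from float to int is a value conversion and produces a new bit pattern, whereas a bit_cast/memcpy-style pun keeps the same bytes and only changes the type.

    // Sketch: value conversion vs. reinterpreting the same bytes.
    #include <bit>
    #include <cstdint>
    #include <cstdio>

    int main() {
        float val = 12.4f;
        int converted = (int)val;                  // value conversion: 12, different bits
        std::uint32_t same_bits =
            std::bit_cast<std::uint32_t>(val);     // same bytes as val, different type
        // On typical IEEE-754 platforms this prints:
        // converted=12  bits=0x41466666
        std::printf("converted=%d  bits=0x%08X\n", converted, (unsigned)same_bits);
        return 0;
    }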


Casting and type conversion are synonyms: https://en.wikipedia.org/wiki/Type_conversion

The post didn't say "type conversion", just conversion, like the int with value 3 landing in memory after calling atoi("3").

The post gave the example of casting a float to an int. That is a (type) conversion.

The point is that certain casts/type conversions can change the underlying data.

> like the int with value 3 landing in memory after calling atoi("3").

That's something else entirely. People may colloquially call this "converting a string to an integer", but what we're really doing here is parsing a string as an integer.


If you had read even the basic part of that article, you would know that && is not a pointer to a pointer.

Anyway, C++ isn't as complicated as people say; most of the so-called complexity exists for a reason, so if you understand the reasoning it tends to make logical sense.

You can also mostly just stick to the core subset of the language, and only use the more obscure stuff when it is actually needed (which isn't that often, but I'm glad it exists when I need it). And move semantics is not hard to understand, IMO.
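
For instance, the core of move semantics fits in a few lines (a minimal sketch, not from the article): std::move lets the vector's buffer be transferred instead of copied.

    // Sketch: moving a vector steals its buffer instead of copying a
    // million elements; the moved-from vector is left in a valid but
    // unspecified state (in practice, empty).
    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<int> a(1000000, 42);
        std::vector<int> b = std::move(a);   // transfer ownership of the buffer
        std::cout << "b.size()=" << b.size()
                  << "  a.size()=" << a.size() << '\n';
        return 0;
    }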


> Anyway, C++ isn't as complicated as people say; most of the so-called complexity exists for a reason, so if you understand the reasoning it tends to make logical sense.

I think there was a comment on HN by Walter Bright, saying that at some point, C++ became too complex to be fully understood by a single person.

> You can also mostly just stick to the core subset of the language

This works well for tightly controlled codebases (e.g. Quake by Carmack), but I'm not sure how this works in general, especially when project owners change over time.


> If you had read even the basic part of that article, you would know that && is not a pointer to a pointer.

OK, let me ask this: what is "&&"? Is it a boolean AND? Where in that article is it explained what "&&" is, other than handwaving and saying "it's an rvalue"?

For someone who's used to seeing "&" as an "address of" operator (or, a pointer), why wouldn't "&&" mean "address of pointer"?


Your comments are very confusing.

> For someone who's used to seeing "&" as an "address of" operator (or, a pointer)

You must be talking about "&something", which takes the "address of something", but the OP does not talk about this at all. You know this because you wrote in your other comment ...

> And if this indeed the case, why is int&& x compatible with int& y ?

So you clearly understand the OP is discussing "int&&" and "int&". Those are totally different from "&something". Even a cursory reading of the OP should tell you these are references, not the "address of something" that you're probably more familiar with.

One is an rvalue reference and the other is an lvalue reference, and I agree that the article could have explained better what they mean. But the OP doesn't seem to be an introductory piece; it's clearly aimed at intermediate to advanced C++ developers. What I find confusing is that you're mixing up something specific like "int&&" with "&something", which are entirely different concepts.

I mean, when have you ever seen "int&" mean "address of" or "pointer"? You have only seen "&something", "int*", and "int**" mean "address of" or "pointer", haven't you?
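
A tiny example of the three things being conflated (just an illustrative sketch, not from the article): the unary & operator takes an address, the & declarator makes an lvalue reference, and the && declarator makes an rvalue reference.

    // Sketch: address-of vs. lvalue reference vs. rvalue reference.
    #include <iostream>

    int main() {
        int x = 1;

        int* p   = &x;       // unary &: address-of, yields a pointer
        int& lr  = x;        // declarator &: lvalue reference, binds to a named object
        int&& rr = x + 1;    // declarator &&: rvalue reference, binds to a temporary
        // int&& bad = x;    // error: an rvalue reference won't bind to an lvalue

        lr += 10;            // writes through the reference into x
        std::cout << *p << ' ' << lr << ' ' << rr << '\n';   // prints: 11 11 2
        return 0;
    }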


Unless you work with a large team of astronauts who ignore the coding guidelines that say to stick to a core subset, leadership doesn't rein them in, and eventually you end up with a grotesque tower of Babel with untold horrors that even experienced coders will be sickened by.

So you have to write fugly code just to get something that should be a compiler switch?

My experience was that every game ran... but also that nearly every game had issues: controls behaving differently than on Windows (mouse sensitivity was way off), the control pad not working, screen or font scaling problems, and full-screen wonkiness.

Somehow changing the font scaling in Linux caused the game to be scaled by a similar amount... so with 2x font scaling, the full-screen image is 2x bigger than the actual monitor, and I can only see a quarter of the screen.

