
I think any reasonable C++ programmer should expect that adding two 16-bit uints with value 50000 will exceed what a 16-bit uint can hold and will overflow.


Actually, what will happen according to the C++ standard depends on the size of int.

C++ performs implicit integer promotion on integer operands with types smaller than int: the operands of the + are converted to int before the addition (gross generalization, yeah yeah).

So the result on g++ amd64 will be 100000 (as you would expect) if int is more than 16 bits (nowadays it is), even WITH `(uint16_t(50000) + uint16_t(50000))`.

I've also tried it in MSVC 2010 and it says the result of `std::cout << (uint16_t(50000) + uint16_t(50000)) << std::endl` is 100000 (both on win32 and on x64).

Try it on an Arduino and you will get 34464 (with g++ targeting 8-bit Atmel AVR, where int is only 16 bits).

Think you want implicit integer promotion in a systems language? You really don't. It is an unnecessary language feature.

Also, the article is only tangentially about that--that's just an intro. The actual body makes very good points, and I think it's more than a little tongue-in-cheek :)


Apparently, as the GP points out, the article's author meant to write an asterisk where he wrote a plus.

uint16(50_000) * uint16(50_000) is uint32(2_500_000_000), which turns out to be int32(-1_794_967_296), the garbage result the author cites.


To paraphrase Dijkstra, "the use of C++ cripples the mind; its teaching should, therefore, be regarded as a criminal offence". Maybe a reasonable C++ programmer would expect that, sure, but: should a reasonable programmer expect that? Automatic promotion to reasonably sized integers (and in the limit, integers of unlimited precision) is ancient tech.


> just promote my arithmetic to bigints randomly, #YOLO


> Just throw away the most-significant digits randomly, #YOLO


Please do not put your own words in Dijkstra's mouth: he said exactly what needed to be said.


That’s not the issue, though, because the operands are promoted to int, which in that example happens to be 32-bit. The actual issue is that the product 50000 * 50000 = 2500000000 exceeds even the 32-bit INT_MAX = 2^31 - 1 = 2147483647, and due to modulo arithmetic results in 2500000000 - 2^32 = -1794967296.

If int were 16-bit (which it is allowed to be), no widening promotion would take place, and the result would be 50000^2 mod 2^16 = 63744.


> any reasonable programmer

FTFY


No, it's exactly an unreasonable programmer who'd expect that, being conditioned to such ridiculous extravagances by an inadequate language, which was the point of another comment of mine in this thread.

Multiplication of intX_t by intX_t should produce int2X_t, which, by the way, is what actually happens in hardware! And detecting overflow of addition/multiplication in a sane language should not require ridiculous algebraic acrobatics that involve additional additions/multiplications (or a division, yeah, I've seen that too).


But in a (typed) language, types are constraints that must be respected (regardless of what hardware does), and integer precision (as specified in code) is one of them.


Sorry, what does "respect" mean? Ruby and Pascal have a division operator that takes two integers and returns a real — is that disrespectful? Is it disrespectful for multiplication to have the signature

   template<size_t N> auto operator*(int<N>, int<N>) -> int<2*N>
instead of

   template<size_t N> auto operator*(int<N>, int<N>) -> int<min(numeric_limits<int>::width, N)>
? Why or why not?


Types exist for a reason, and one is the desire for consistency (or uniformity) of representation: say, I want to be able to store the result of an operation in the storage element which looks the same as those where the operands came from (think of an array, for example).


int-n * int-n -> int-n (n > 1) is fundamentally the wrong type logically for a multiplication, though, so something has to give. Sure, some languages have chosen for the arithmetic to be broken in order to maintain type, but that's certainly not clearly the right choice when others have chosen auto-promotion and most hardware has chosen a (limited selection of) int-n * int-n -> int-2n multiplication primitives to work with.


The rule int<N> * int<N> —> int<2*N> breaks one of the important expectations of a type, which is the property of being “closed” under certain operations. Even if an arithmetical operation increases the length of the result, it has to stop somewhere, so there’s little justification for such an increase in the first place, if one is to be logical about this.


BigInts are closed under addition/subtraction/multiplication. Limited-size integers are not. That's "a truth that may hurt" and all programming languages have to deal with it. Most of them decided that using modular arithmetic and silently throwing away the most-significant digits is fine.

But again, all this arguing that "numbers should be closed under arithmetic operations just as they are in C/C++" is kinda pointless, because in C/C++ they are not! When you add/multiply two signed chars, or two shorts, you get an int, not a signed char/short, so only ints and longs are actually closed under arithmetic operations. Whoops!


Nobody expects subtraction to be closed on the naturals, nor division on the integers. Why would we expect multiplication defined to be closed over fixed-width integers to coincide with the general arithmetic operation?


Well, by your definition even division of reals is not closed - which means that your definition is not a correct one. A closed operation is allowed to have an "undefined" result. (And, by the way, division of integers is perfectly closed, if you ignore the remainder.)


Division on the reals is indeed not normally considered closed. The implied closure properties you want are not customary.



