
But then all strings, as well as field order, would be right-to-left!

It could be argued that little endian is the more natural way to write numbers anyway, for both humans and computers. The positional numbering system came to the West via Arabic, after all, and in right-to-left Arabic script the least significant digit is the first one you reach.

Most of the confusion when reading hex dumps seems to arise from the fact that the two nibbles of each byte are printed in the familiar left-to-right order, which clashes with the little-endian order of the bytes in a larger number. Swap the nibbles, and you get "43 21", which would be almost as easy to read as "12 34".
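
A quick C sketch of that idea (the dump format here is just for illustration): store 0x1234 in little-endian byte order, then print each byte as-is versus nibble-swapped.

    #include <stdio.h>
    #include <stdint.h>

    /* Swap the two nibbles of a byte: 0x34 -> 0x43. */
    static uint8_t swap_nibbles(uint8_t b) {
        return (uint8_t)((b << 4) | (b >> 4));
    }

    int main(void) {
        /* The 16-bit value 0x1234 laid out in little-endian byte order. */
        uint8_t bytes[2] = { 0x34, 0x12 };

        for (int i = 0; i < 2; i++) printf("%02x ", bytes[i]);
        printf("\n");                                   /* 34 12 */
        for (int i = 0; i < 2; i++) printf("%02x ", swap_nibbles(bytes[i]));
        printf("\n");                                   /* 43 21 */
        return 0;
    }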





Yep. We even have a free bit when writing hex numbers like 0x1234. Just flip that 0x to a 1x to indicate you are writing in little-endian and you get nice numbers like 1x4321 that are totally unambiguous little-endian hex representations.

You can apply that same formatting to little-endian bit representations by using 1b instead of 0b and you could even do decimal representations by prefixing with 1d.
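
A rough sketch of how such a prefix might be parsed, assuming "1x" means the hex digits are written least-significant-first (the prefix and the function name are hypothetical, not an existing convention):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Parse "0x..." as ordinary hex, or the hypothetical "1x..." with the
       digits written least-significant-first, so "1x4321" == 0x1234. */
    unsigned long parse_hex(const char *s) {
        if (strncmp(s, "0x", 2) == 0)
            return strtoul(s + 2, NULL, 16);
        if (strncmp(s, "1x", 2) == 0) {
            char buf[32] = {0};
            size_t len = strlen(s + 2);
            if (len >= sizeof buf) return 0;        /* too long for this sketch */
            for (size_t i = 0; i < len; i++)
                buf[i] = s[2 + len - 1 - i];        /* reverse the digit order */
            return strtoul(buf, NULL, 16);
        }
        return 0;                                   /* unrecognized prefix */
    }

    int main(void) {
        printf("%lx %lx\n", parse_hex("0x1234"), parse_hex("1x4321"));  /* 1234 1234 */
        return 0;
    }

Printing in that notation would just be the same digit reversal run in the other direction.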


For me, I think the issue is how you think of memory.

You can think of memory as a store of register-sized values. Big endian sort of makes sense when you think of it that way.

Or you can think of it as arbitrarily sized data. If it's arbitrary data, then big endian is just a pain in the ass. And code written to handle both big and little endian is obnoxious.
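
The usual way around that obnoxiousness is to read multi-byte fields with shifts, so the host's byte order never enters into it; a minimal sketch:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    /* Read a 32-bit little-endian value from a byte buffer.  Built from
       shifts, so it gives the same result on big- and little-endian hosts;
       no byte-swapping #ifdefs needed. */
    static uint32_t read_u32le(const uint8_t *p) {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }

    int main(void) {
        const uint8_t buf[4] = { 0x78, 0x56, 0x34, 0x12 };  /* 0x12345678 stored little-endian */
        printf("0x%08" PRIx32 "\n", read_u32le(buf));        /* prints 0x12345678 */
        return 0;
    }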



