What I really want is memory order emulation. x86 has strong memory-order guarantees, ARM has much weaker guarantees. Which means the multi-threaded queue I'm working on works all the time on my x86 development machine even if I forget to put in the correct memory-order semantics, but it might or might not work on ARM (which is what many of my users have). (I am in the habit of running all my stress tests 1000 times before I'm willing to send them out, but that doesn't mean the code is correct; it means it works on x86 and passed my review, which might miss something.)
I wrote a similar post [1] some 16 years ago. My solution back then was to install Debian for PowerPC on QEMU using qemu-system-ppc.
But Hans's post uses user-mode emulation with qemu-mips, which avoids having to set up a whole big-endian system in QEMU. It is a very interesting approach I was unaware of. I'm pretty sure qemu-mips was available back in 2010, but I'm not sure if the gcc-mips-linux-gnu cross-compiler was readily available back then. I suspect my PPC-based solution might have been the only convenient way to solve this problem at the time.
Thanks for sharing it here. It was nice to go down memory lane and also learn a new way to solve the same problem.
> When programming, it is still important to write code that runs correctly on systems with either byte order
What you should do instead is write all your code so it is little-endian only, as the only relevant big-endian architecture is s390x, and if someone wants to run your code on s390x, they can afford a support contract.
As with many comments here: use a build-time assertion that the system is little-endian, and otherwise ignore the issue. Untested code is broken code.
I was at IBM when we gave up on big endian for Power. Too much new code assumed LE, and we switched, despite the insane engineering effort (though TBH, that effort had the side effect of retaining some absolutely first-class engineers a few more years).
There is one reason not mentioned in the article why it is worth testing code on big-endian systems – some bugs are more visible there than on little-endian systems. For example, accessing an integer variable through a pointer of the wrong (smaller) type often passes silently on little-endian (the higher bytes are just ignored), while it reads/writes bad values on big-endian.
If you're worrying about the endianness of your processor, your code is somehow accessing memory from 'outside' as anything other than a char*, which is already thin ice as far as C and C++ are concerned. You should have a parse_le and/or parse_be function to convert from that byte stream into your native types, that only cares about the _endianness of the data_ (and they can be implemented without caring about your processor endianness as well). Then you don't need to worry about the processor you're running on at all. There's more significant and subtle processor quirks than endianness to worry about if you're trying to write portable code (namely, memory model and alignment requirements).
For most code it doesn't matter. It matters when you are writing files to be read by something else, or when sending data over a network. So make sure the places where those happen are thin shims that are easy to fix if something doesn't work. (That is, don't write data from everywhere; put a single layer in place for this.)
> When programming, it is still important to write code that runs correctly on systems with either byte order
I contend it's almost never important and almost nobody writing user software should bother with this. Certainly, people who didn't already know they needed big-endian should not start caring now because they read an article online. There are countless rare machines that your code doesn't run on--what's so special about big endian? The world is little endian now. Big endian chips aren't coming back. You are spending your own time on an effort that will never pay off. If big endian is really needed, IBM will pay you to write the s390x port and they will provide the machine.
This whole endianness issue can be traced to western civilization adopting Arabic numbers. Western languages are written left to right, but Arabic is right to left. Thus, Arabic numbers appear as big-endian when viewed in western languages. Consequently, big-endian appears to be "normal" for us in the modern age. But in Arabic, numbers appear little-endian because everything is right to left. Roman numbers are big-endian, though. Maybe that's why we kept the Arabic ordering even when adopting the system? We could have flipped Arabic numbers around and written them as little-endian, but we didn't.
[1] https://susam.net/big-endian-on-little-endian.html
On Linux it's really as simple as installing QEMU binfmt and doing:
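Presumably something like the following (package names are Debian/Ubuntu assumptions, matching the gcc-mips-linux-gnu toolchain mentioned elsewhere in the thread):

```shell
# Install user-mode QEMU with binfmt_misc registration plus a
# big-endian MIPS cross-compiler.
sudo apt-get install qemu-user qemu-user-binfmt gcc-mips-linux-gnu

# Cross-compile statically so no MIPS shared libraries are needed.
mips-linux-gnu-gcc -static -o test-be test.c

# binfmt_misc transparently routes the MIPS binary through qemu-mips.
./test-be
```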
presented at Embedded Linux Conf
Of course the endianness only matters to C programmers who take endless pleasure in casting raw data from external sources into structs.