For anyone turned off by this document and its proofs, I recommend Numerical Methods for Scientists and Engineers (Hamming). Still a math text, but more approachable.
The five key ideas from that book, enumerated by the author:
(1) the purpose of computing is insight, not numbers
(2) study families and relationships of methods, not individual algorithms
(3) roundoff error
(4) truncation error
(5) instability
> This motto is often thought to mean that the numbers from a computing machine should be read and used, but there is much more to the motto. The choice of the particular formula, or algorithm, influences not only the computing but also how we are to understand the results when they are obtained. The way the computing progresses, the number of iterations it requires, or the spacing used by a formula, often sheds light on the problem...
> Thus computing is, or at least should be, intimately bound up with both the source of the problem and the use that is going to be made of the answers -- it is not a step to be taken in isolation from reality.
(From "An Essay on Numerical Methods", p. 3 of the mentioned text; emphasis the author's)
Not the OP, but I suspect it means focus on what questions are being asked first, and even then, look for opportunities to simplify wherever you find them.
So many of us spend so much time getting enamoured with technical solutions to problems that no one cares about.
Shared this because I was having fun thinking through floating point numbers the other day.
I worked through what fp6 (e3m2) would look like, doing manual additions and multiplications, showing cases where the operations are non-associative, etc. and then I wanted something more rigorous to read.
For anyone interested in floating point numbers, I highly recommend working through fp6 as an activity! Felt like I truly came away with a much deeper understanding of floats. Anything less than fp6 felt too simple/constrained, and anything more than fp6 felt like too much to write out by hand. For fp6 you can enumerate all 64 possible values on a small sheet of paper.
For anyone not (yet) interested in floating point numbers, I’d still recommend giving it a shot.
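Sketching this in Python makes the exercise concrete. The layout below is one common convention and my assumption, not something from the comment: 1 sign bit, 3 exponent bits with bias 3, 2 mantissa bits, IEEE-style subnormals, and the top exponent reserved for inf/NaN (many real e3m2 proposals skip inf/NaN entirely):

```python
# Decode all 64 bit patterns of a toy fp6 "e3m2" format.
# Assumed layout: 1 sign bit | 3 exponent bits (bias 3) | 2 mantissa bits.
def decode_fp6(bits):
    sign = -1.0 if (bits >> 5) & 1 else 1.0
    exp = (bits >> 2) & 0b111
    mant = bits & 0b11
    if exp == 0:                       # subnormal: 0.mm * 2^(1 - bias)
        return sign * (mant / 4.0) * 2.0 ** -2
    if exp == 0b111:                   # reserved here for inf/NaN
        return sign * float("inf") if mant == 0 else float("nan")
    return sign * (1.0 + mant / 4.0) * 2.0 ** (exp - 3)  # normal: 1.mm * 2^(e - bias)

# The whole number line of this format fits on an index card:
for b in range(64):
    print(f"{b:06b} -> {decode_fp6(b)}")
```

With the full table in hand, the non-associativity cases fall out quickly: adding a tiny value to a large one rounds away entirely, so the grouping of a three-term sum can change the answer.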
One thing that really did it for me was programming something where you would normally use floats (audio/DSP) on a platform where floats were abysmally slow. This forced me to explore Fixed-Point options which in turn forced me to explore what the differences to floats are.
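For anyone curious what that fixed-point detour looks like, here is a minimal sketch (the Q16.16 split is my assumption, not from the comment): store each value as an integer count of 1/65536ths, so addition is plain integer addition and only multiplication needs a rescale.

```python
# Q16.16 fixed point: 16 integer bits, 16 fractional bits.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS                  # 1.0 in fixed-point form

def to_fix(x):
    return int(round(x * ONE))        # real -> fixed

def to_float(q):
    return q / ONE                    # fixed -> real (for display only)

def fix_mul(a, b):
    # The raw product carries 32 fractional bits; shift back down to 16.
    return (a * b) >> FRAC_BITS

half, quarter = to_fix(0.5), to_fix(0.25)
print(to_float(half + quarter))          # 0.75 -- addition needs no rescale
print(to_float(fix_mul(half, quarter)))  # 0.125
```

The rescaling step in `fix_mul` is exactly the per-step scaling chore the floating-point hardware does for you.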
Fixed point gave rise to the old programmers' meme 'if you need floating point, you don't understand your problem'. It's of course partly in jest, but there is a grain of truth in it as well.
It is quite old, attributable to von Neumann and Goldstine in 1947. Later Goldstine joked that if rescaling for every step was easy enough for Johnny it ought to be easy for everyone else.
The gag here being that perhaps that isn’t the best dividing line for programming talent.
It gets WORSE. Here's a quote from “The Birth of a Computer” in BYTE Magazine, February 1985, an interview with J.H. Wilkinson, noted numerical analyst, on the Manchester machines ca. 1949 (p. 178):
>They were fixed point, but one of the earliest things that I did (at Turing’s request) was to program a set of subroutines for doing floating-point arithmetic.
So we ought to scale to better ourselves with self-study, meanwhile one of the first errands TURING sent WILKINSON on was to rid themselves of this duty. ;)
It's interesting how many of these things we take for granted.
I'm working (and have been for a while) on something that requires both ridiculous precision and speed on a relatively puny power budget and it's been a really nice trip down memory lane regarding optimization. I discovered fixed point pretty early in my programming career when doing 3D graphics on the 6502. I never imagined that that knowledge would come in handy almost five decades later, but here we are.
Absolutely nobody will think this is 'clearer', this is a leaky abstraction and personally I think that the OP is right and == in combination with floating point constants should be limited to '0' and that's it.
We all know that 1/3 + 1/3 + 1/3 = 1, but 0.33 + 0.33 + 0.33 = 0.99. We're sufficiently used to decimal to know that 1/3 doesn't have a finite decimal representation. Decimal 1/10 doesn't have a finite binary representation for the exact same reason that 1/3 doesn't have one in decimal: 3 is coprime with 10, and 5 (a factor of 10) is coprime with 2.
The only leaky abstraction here is our bias towards decimal. (Fun fact: "base 10" is meaningless, because every base calls itself base 10.)
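The binary side of this is easy to poke at in Python: applying `Decimal` to a float exposes the exact value that actually got stored.

```python
from decimal import Decimal

# 0.1 is rounded to the nearest double at parse time; Decimal reveals it.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827...
# Both operands below were already rounded, so the sum misses 0.3.
print(0.1 + 0.2 == 0.3)    # False
```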
> they (might) have a float, and are using the == operator, they're doing something wrong.

Storage, retrieval, transmission, and serialization/deserialization systems should be able to transmit and round-trip floats without losing any bits at all.
Well, there are many legitimate cases for using the equality operator. Insisting that anyone who uses it is doing something wrong is itself wrong, and if you insist on that you shouldn't be teaching floating-point numbers.
A few use cases: floating-point values differing from default or initial values and carrying meaning, e.g. 0 or 1 translating to omitting entire operations. There is also the case of measuring the tiniest possible variation, when relative tolerances are not what you want. Not exhaustive.
If you use == with fp, it only means you should've thought about it thoroughly.
There’s plenty of cases where ‘==‘ is correct. If you understand how floating point numbers work at the same depth you understand integers, then you may know the result of each side and know there’s zero error.
Anything done “approximately close” is much slower and prone to even more subtle bugs (often trading immediate bugs for much harder to find and fix ones).
For example, I routinely make unit tests with inputs designed so answers are perfectly representable, so tests do bit exact compares, to ensure algorithms work as designed.
I’d rather teach students there’s subtlety here with some tradeoffs.
If they were taught what was representable and why they’d learn it quickly. And those that forget details later know to chase it down again if they need it. Making it voodoo hides that it’s learnable, deterministic, and useful to understand.
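As a concrete instance of the testing pattern described above (the `mean` function is just an illustrative stand-in, not anything from the comment): pick inputs whose intermediate and final values are all exactly representable, and then `==` is the right assertion.

```python
def mean(xs):
    return sum(xs) / len(xs)

# 0.25 + 0.5 + 0.75 accumulates exactly to 1.5, and 1.5 / 3 is exactly 0.5,
# so a bit-exact compare verifies the algorithm with no tolerance fudge.
assert mean([0.25, 0.5, 0.75]) == 0.5
print("bit-exact test passed")
```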
Tell them that they can only store sums of integer powers of 2 exactly. 2^0 == 1. 2^-2 == 0.25. Then say it's the same with base 10: 10^-1 == 0.1, but 1/9 isn't a finite sum of powers of 10, so you can't have an exact representation.
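The rule generalizes cleanly: a decimal literal is exactly representable in binary iff its reduced denominator is a power of two (ignoring mantissa-width and exponent-range limits). This is easy to check with `Fraction`; the helper below is my illustration, not from the comment:

```python
from fractions import Fraction

def exactly_representable(literal):
    d = Fraction(literal).denominator   # exact rational denominator
    return d & (d - 1) == 0             # power-of-two test

for lit in ["0.25", "0.125", "0.1", "0.2"]:
    print(lit, exactly_representable(lit))   # True True False False
```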
I have a linter in my code that shouts at me if I use exact equality for floats.
But I regret not making an exception for the constant zero, because it's one of the cases where you probably should accept it. I.e. if (f != 0.0) {...}
Zero shouldn't be an exception there. If f had been set from something like f = a - b, then you're in the same situation where f might be almost but not exactly zero.
The linter wouldn't know where f came from, so it should flag all floating point equality cases, and have some way that you can annotate it for "yeah this one is okay."
if (f == 0.0) means "is f exactly zero so it's not initialized" 99 times for every one time it means "is f zero-ish because of a cancellation/degeneracy/whatever"
I just found that I have now annotated it for "yeah this one is ok" about 100 times, and caught zero cases where I meant to do a comparison to zero-or-very-nearly-so but accidentally wrote == 0.0.
So my conclusion is: I would have had less noise in my code with that exception in the linter, and the linter would have been equally useful.
The idea is not to do it with values derived from arithmetic, but e.g. from measurements where a real zero is very unlikely and indicates something different.
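A small sketch of that distinction (the names are mine, not from the thread): `== 0.0` as a sentinel for "never set, skip the work" is fine, while a value produced by subtraction needs a tolerance.

```python
import math

def apply_gain(samples, gain=0.0):
    if gain == 0.0:            # sentinel: gain was never set, omit the pass
        return samples
    return [s * gain for s in samples]

print(apply_gain([1.0, 2.0]))                     # unchanged: [1.0, 2.0]
residue = (0.1 + 0.2) - 0.3
print(residue == 0.0)                             # False: arithmetic residue
print(math.isclose(residue, 0.0, abs_tol=1e-12))  # True: tolerance fits here
```

Note the `abs_tol` argument: `math.isclose` with only a relative tolerance can never match against zero, which is exactly the degenerate case under discussion.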
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=3808168 - April 2012 (3 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1982332 - Dec 2010 (14 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1746797 - Oct 2010 (2 comments)
Weekend project: What Every Programmer Should Know About FP Arithmetic - https://news.ycombinator.com/item?id=1257610 - April 2010 (9 comments)
What every computer scientist should know about floating-point arithmetic - https://news.ycombinator.com/item?id=687604 - July 2009 (2 comments)
It's like slicing off the top 0.0001% of Mt. Everest and saying that you have evenly split the world.
> Fun fact: "base 10" is meaningless, because every base calls itself base 10
Maybe we should name the bases by the largest digit they have, so that we are using base 9 most of the time.
You should be using == for floats when they're actually equal. 0.1 just isn't an actual float.
> Are you saying that my students should memorize which numbers are actual floats and which are not?
Yes.
1.25 = 2^0 + 2^-2, so is representable.
0.125 = 2^-3, so is representable.
1.25 / 10.0 = 0.125, so is representable. 10.0 = 2^3 + 2^1.
1.25 * 0.1 picks up error because 0.1 is not representable; the stored 0.1's low-order bits show up in the multiplication. (In double precision this particular product happens to round back to exactly 0.125, but 3 * 0.1 != 0.3 shows the error surviving.)
Of course, it might not be something you want to overload beginner programmers with.
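These claims can be checked directly with ordinary doubles. One wrinkle, found by working the bits: in double precision `1.25 * 0.1` happens to round back to exactly `0.125` (the error is a quarter ulp, below the rounding threshold), so `3 * 0.1` serves as the case where 0.1's rounding error survives:

```python
assert 1.25 == 2**0 + 2**-2     # exact sum of powers of two
assert 0.125 == 2**-3           # exact power of two
assert 1.25 / 10.0 == 0.125     # exact: the true quotient is representable
assert 1.25 * 0.1 == 0.125      # the product's error rounds away here...
assert 3 * 0.1 != 0.3           # ...but 0.1's error survives this one
print("representability checks pass")
```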