This is not a very good article, and I would not recommend it beyond simple use cases. The problem is that there is no single right way to compare floats; it depends on the use case and the magnitude of the numbers you're comparing. See e.g. https://bitbashing.io/comparing-floats.html for a better reference.
The fundamental difficulty of comparing floats is that the format guarantees a near-constant number of digits of precision regardless of scale. This is very useful for most calculations because it means you can compute without worrying too much about the magnitude of your numbers. But it also means that the smallest representable difference between two numbers grows as the numbers get bigger: that's why they're called floating point.
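A quick way to see this in Python (using `math.ulp`, which gives the gap between a float and the next representable one):

```python
import math

# The gap between adjacent doubles (one "unit in the last place") grows
# in proportion to the magnitude of the number:
for x in (1.0, 1e8, 1e16):
    print(x, math.ulp(x))

# Around 1e16 the gap exceeds 1.0, so adding 1 is literally a no-op:
print(1e16 + 1 == 1e16)  # True
```

Same bit pattern layout, same relative precision, but the absolute spacing at 1e16 is about 2.0 versus ~2.2e-16 at 1.0.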
That's why a fixed tolerance is not so reliable: the appropriate tolerance depends on the magnitude of the numbers, even with those simple tricks. In particular, it is important to understand that epsilon is only "correct" around 1, i.e. a + eps != a only holds if a is close to 1. More precisely, epsilon is the smallest number such that 1 + eps != 1.
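To illustrate how machine epsilon stops being a sensible tolerance away from magnitude 1 (sketch in Python, where `sys.float_info.epsilon` is the epsilon being discussed):

```python
import sys

eps = sys.float_info.epsilon  # smallest x such that 1.0 + x != 1.0

print(1.0 + eps != 1.0)      # True: eps is meaningful at magnitude 1
print(100.0 + eps != 100.0)  # False: eps is smaller than the gap between doubles near 100
# At tiny magnitudes the opposite problem: eps is a huge tolerance.
print(abs(1e-20 - 2e-20) < eps)  # True, even though one value is double the other
```

So an absolute comparison against eps declares 1e-20 and 2e-20 "equal" while failing to detect any rounding error at all near 100.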
I mean, he's using a relative tolerance, not an absolute one. Most common arithmetic operations are well behaved w.r.t. relative error, unless you're dealing with denormalised numbers.
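For reference, Python's standard library offers exactly this: `math.isclose` uses a relative tolerance by default, which scales with the operands, plus an optional absolute floor for the near-zero case where relative tolerance breaks down:

```python
import math

# Relative tolerance (default rel_tol=1e-09) scales with magnitude:
print(math.isclose(1e16, 1e16 + 10.0))  # True: 10 is tiny relative to 1e16
print(math.isclose(1.0, 11.0))          # False: 10 is huge relative to 1

# Near zero, a purely relative test rejects everything...
print(math.isclose(0.0, 1e-12))                # False
# ...so an absolute floor is needed there:
print(math.isclose(0.0, 1e-12, abs_tol=1e-9))  # True
```

That `abs_tol` escape hatch is the standard fix for the denormalised/near-zero regime where relative comparison stops making sense.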