The same trick (approximating with a first-order Taylor series) is also the basis of Quake's famous fast inverse square root [1].
I love these methods because they always involve a "magic constant".
In a similar vein, I can't go without linking to the Fast Inverse Square Root function. It's a similar kind of thing: a "machine-friendly" way to approximate a different mathematical function.
Fast inverse square root[0] is something that I encountered in the mid-2000s in the Q3A source code. It took me a really long time to understand it, and I eventually had to show it to some professors before I really understood what was going on and why this worked.
That's really an example of how arbitrary human thought processes are. When you relax the constraint that your code has to have some human-comprehensible analog, you can arrive at interesting results.
I love the story of the fast inverse square root. A bizarre piece of code from Quake 3 shows up on Usenet with a magic constant that calculates the inverse square root faster than table lookups and approximately four times faster than regular floating-point division. Inverse square roots are used to compute angles of incidence and reflection for lighting and shading in computer graphics. The author is unknown, but the code was once thought to be Carmack's.
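For reference, here is a sketch of the idea in portable modern C. The constant 0x5f3759df is the published magic number from the Q3A source; the function name is mine, and memcpy stands in for the original pointer-cast type punning, which is undefined behavior in modern C.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Fast inverse square root, Q3A-style, restated portably. */
static float fast_rsqrt(float x)
{
    float half = 0.5f * x;
    uint32_t i;

    memcpy(&i, &x, sizeof i);        /* reinterpret the float's bits as an integer */
    i = 0x5f3759df - (i >> 1);       /* magic constant: cheap first guess at 1/sqrt(x) */
    memcpy(&x, &i, sizeof x);
    x = x * (1.5f - half * x * x);   /* one Newton iteration to sharpen the guess */
    return x;
}

int main(void)
{
    for (float v = 1.0f; v <= 4.0f; v += 1.0f)
        printf("fast_rsqrt(%g) = %f\n", v, fast_rsqrt(v));
    return 0;
}
```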
The Newton iteration is nicer for the inverse square root than for the square root: the direct iteration for `sqrt(x)` needs a division at every step, while the one for `1 / sqrt(x)` does not. You can refine an initial approximation for `1 / sqrt(x)`, then multiply the result by `x` to get an approximation of `sqrt(x)`. This less direct approach only needs FP multiplications and additions (plus the initial approximation).
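A minimal sketch of that approach, assuming a crude hand-picked starting guess (everything here is illustrative, not from any particular codebase): each Newton step `y' = y * (1.5 - 0.5 * x * y * y)` uses only multiplies and adds, and multiplying the refined `y` by `x` recovers `sqrt(x)`.

```c
#include <stdio.h>

/* One Newton step for f(y) = 1/y^2 - x, whose root is y = 1/sqrt(x).
   Only FP multiplies and adds/subtracts -- no division needed. */
static float rsqrt_newton_step(float x, float y)
{
    return y * (1.5f - 0.5f * x * y * y);
}

int main(void)
{
    float x = 2.0f;
    float y = 0.7f;              /* crude initial guess for 1/sqrt(2) ~ 0.7071 */

    for (int i = 0; i < 3; i++)  /* a few refinements; each roughly doubles the correct digits */
        y = rsqrt_newton_step(x, y);

    printf("1/sqrt(%g) ~ %f\n", x, y);
    printf("sqrt(%g)   ~ %f\n", x, x * y);  /* multiply by x to recover sqrt(x) */
    return 0;
}
```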
[1] https://en.m.wikipedia.org/wiki/Fast_inverse_square_root