I hope you noticed that IEEE 754 merged in DFP (decimal floating-point) support in 2008, which means it's actually quite good at base 10 if you have a machine/compiler that supports it.
The alternative is a revision of IEEE 754 to include decimal floating-point arithmetic, which has apparently been "in progress" for a while because the interested parties can't reach agreement. I'd really like to read an insider's account. I know that IBM did most of the work in that direction. Decimal floating point is the thing to have, but who knows when it will show up in processors or come into wider use. I know from Brendan Eich that there were attempts to add DFP to JavaScript, but even there no agreement was reached.
Edit: or it was standardized but still not gaining acceptance:
I've gone back and forth over the years. Having implemented some of the lowest-level floating point routines, taking care to handle all the corner cases, it definitely is a chore. But is there an alternative?
After all of this, I still think that IEEE 754 is pretty darn good. The fact that it is available and fast in pretty much all hardware is a major selling point.
That said, I absolutely hate -0 (minus zero). It's a stupid design wart that I've never seen any good use case for. But it is observable and therefore cannot be ignored, leading to even more icky special cases.
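The observability is easy to demonstrate in Python (which exposes the platform's IEEE 754 doubles): -0.0 compares equal to +0.0, but the sign bit leaks out through `copysign`, printing, and branch cuts like `atan2`:

```python
import math

# Equality cannot distinguish the two zeros...
assert -0.0 == 0.0

# ...but the sign bit is still observable:
assert math.copysign(1.0, -0.0) == -1.0   # the sign survives
assert str(-0.0) == "-0.0"                # printing reveals it
assert math.atan2(0.0, -1.0) > 0          # +pi
assert math.atan2(-0.0, -1.0) < 0         # -pi: the result depends on the zero's sign
```

So any code that hashes, serializes, or branches on floats has to decide what to do with -0.0, which is exactly the kind of special case the comment above complains about.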
IEEE 754 is ubiquitous because it was the first non-platform-specific solution to a fundamentally hard engineering problem (how to approximate the entire real number line with a finite number of bits), and because it is supported in hardware (on the CPU itself), so that a number fits in the same registers as pointers, integers, and everything else, which makes it fast.
If you want base-10 encoding of non-integer values, and base-10 arithmetic on them, you're probably going to need a software library like [1], which implements the decimal math that is also in IEEE 754, just not part of common hardware. It may also be that you actually want infinite-precision arithmetic, which definitely requires a library, because the representation of a number becomes its own dynamic data structure (as with a string).
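For instance, Python's standard-library `decimal` module does exactly this in software (it implements the General Decimal Arithmetic spec, which, to my understanding, IEEE 754-2008's decimal arithmetic aligns with), and `fractions.Fraction` covers the infinite-precision case:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# Binary floats cannot represent 0.1 exactly...
assert 0.1 + 0.2 != 0.3

# ...while decimal arithmetic keeps base-10 values exact:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")

# Precision is configurable, at software speed:
getcontext().prec = 50
print(Decimal(1) / Decimal(7))   # 1/7 to 50 significant digits

# True infinite precision: a rational's representation grows as needed,
# i.e. the number becomes its own dynamic data structure.
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```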
Can you say more about what exactly is the "brokenness" that you're running up against?
>what are some examples of modern architectures that don’t use IEEE 754 floats
Mainstream? I can't think of one. Googling the question yields a Stack Overflow question from 2010 that lists the 5 or 6 major architectures (x86, PowerPC, SPARC, ARM, ...) and about a dozen more as supporting IEEE 754. So it's a fairly safe assumption to make.
One area where vanilla IEEE 754 isn't used, which immediately came to mind, is high-performance ML at companies like Facebook and Google: the generic range/precision tradeoff that FP offers is inefficient there, so they ended up designing several new formats with smaller sizes or larger ranges. Googling "alternatives to FP in ML" or "custom FP format <company-name>" yields a lot.
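The best-known of these is Google's bfloat16, which keeps float32's 8-bit exponent (same range) but only 7 mantissa bits (much less precision). A rough sketch of the conversion, using simple truncation rather than the round-to-nearest-even that real hardware performs:

```python
import struct

def to_bfloat16_trunc(x: float) -> float:
    """Emulate bfloat16 by keeping only the top 16 bits of a float32
    (1 sign + 8 exponent + 7 mantissa bits).  Truncation is used here
    for simplicity; hardware typically rounds to nearest even."""
    bits, = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# Only ~2-3 decimal digits survive, but the exponent range is untouched:
assert to_bfloat16_trunc(3.14159) == 3.140625
assert to_bfloat16_trunc(1.0) == 1.0
```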
Another "niche" area is finance, insurance, taxes, and anything nearby; fixed point in those areas is, I hear, a must, because all the usual invariants of high-school algebra that FP breaks actually have significance there. (Imagine the well-known FP flaw "x + epsilon == x, with epsilon non-zero" playing out in these contexts.)
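That flaw is trivial to trigger with 64-bit doubles: once a total grows past 2^53, single units are silently absorbed, while integer (fixed-point) arithmetic keeps every unit. A sketch with hypothetical figures:

```python
# A balance kept in cents as a double (illustrative numbers only):
balance = 1e17
assert balance + 1.0 == balance     # the cent vanishes: x + epsilon == x

# The same balance kept as integer cents (fixed point):
balance_cents = 10**17
assert balance_cents + 1 != balance_cents   # every cent is preserved
```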
In any case, number portability in a language like C++ is a breeze. You design a myNumber class and overload all the relevant operators, and that's it. The compiler will make sure to inline the calls if they're simple one-line floating-point operations, but otherwise you can freely change the implementation of the class and the calculations never look any different.
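The same pattern sketched in Python (a hypothetical `MyNumber` wrapper; Python won't inline anything the way a C++ compiler does, but the portability point is the same):

```python
class MyNumber:
    """Hypothetical wrapper class: call sites use +, *, ==, while the
    internal representation (a plain float here) could be swapped for
    Decimal, Fraction, or fixed point without touching calling code."""
    def __init__(self, value):
        self._v = float(value)

    def __add__(self, other):
        return MyNumber(self._v + other._v)

    def __mul__(self, other):
        return MyNumber(self._v * other._v)

    def __eq__(self, other):
        return self._v == other._v

    def __repr__(self):
        return f"MyNumber({self._v})"

# Calculations look the same no matter what backs the class:
a, b = MyNumber(2), MyNumber(3)
assert a + b == MyNumber(5)
assert a * b == MyNumber(6)
```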
C/C++ is not required to use IEEE 754 as its float format, so your code would always run under that cloud.
Fortran, D, Factor, and SBCL likely have better support.
But anything where a compiler can reorder things is suspect, if you need exact behavior, since even simple register spilling causes trouble (on Intel's x87, spilling an 80-bit register to a 64-bit memory slot changes the value), among other things compilers may do.
It's not just that the representation is much better than IEEE 754's, which is awful (e.g. having negative zero, and wasting lots of bit combinations on encoding NaN even though a single one would do).
And not just that: it seems they also standardized the arithmetic?
Which is a big deal because IEEE 754 is unusable in heterogeneous distributed systems as every hardware implementation does something different.
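The NaN waste mentioned above is easy to see: any 64-bit pattern with all exponent bits set and a nonzero mantissa decodes to NaN, which is 2 × (2^52 − 1) distinct encodings for what could be a single value. A quick demonstration:

```python
import struct

def bits_to_double(b: int) -> float:
    # Reinterpret a raw 64-bit pattern as an IEEE 754 double.
    return struct.unpack("<d", struct.pack("<Q", b))[0]

# Quiet-NaN base pattern (exponent all ones, top mantissa bit set),
# OR'ed with arbitrary payloads: every one of these is NaN.
for payload in (0, 1, 42, 2**51 - 1):
    x = bits_to_double(0x7FF8000000000000 | payload)
    assert x != x   # NaN is the only value not equal to itself
```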
If they shouldn't be IEEE 754 then what would you use instead? IEEE 754 exists because a lot of smart people put a lot of thought into what the best floating point representation should be. Then the hardware folks put a lot of work into making it fast and accurate.
Yeah and then we have the problem that most floating point variables shouldn't be IEEE 754. And we're mostly fucked because most languages don't support anything else.
IEEE 754 also provides for decimal radix floating point, just that there is no real hardware support for it beyond some IBM and Fujitsu chips, and what you can put together on FPGAs.
To be fair, if you're using floating point at all you can get arbitrarily wrong answers. The nice thing about IEEE 754 conformance is that you can, with a lot of expertise, somewhat reason about the kinds of error you're getting. But for code that wasn't written by someone skilled in numerical techniques (and that's the vast majority of FP code), is fast-math actually worse than the status quo?
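A concrete instance of what conformance lets you reason about: IEEE 754 addition is exactly rounded and deterministic, but not associative, and fast-math licenses the compiler to reassociate. Sketched in Python, where evaluation order is fixed:

```python
# Each individual addition is correctly rounded, but the grouping matters:
a, b, c = 1e16, -1e16, 1.0

assert (a + b) + c == 1.0   # left-to-right: the cancellation happens first
assert a + (b + c) == 0.0   # reassociated: 1.0 is absorbed into -1e16 and lost
```

Under strict IEEE 754 semantics the first result is guaranteed; under fast-math a compiler is free to produce either, which is exactly the reasoning that becomes impossible.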