> Why have we not improved real arithmetic since 1985?
Because most people actually want either fixed-point or floating-point arithmetic, especially if you only consider the population willing to spend money on better hardware to support their use cases.
>I know I'm not going to be dealing with lengths longer than 100km, then fixed point would be ideal for that.
Huge fallacy. No, you cannot use fixed-point numbers like that; it cannot work. The actual maximum/minimum scale you are dealing with is irrelevant. What matters is the largest/smallest intermediate value you need, so you have to consider every step in all of your algorithms.
Imagine calculating the distance between two objects that are 50 km apart in x and 50 km apart in y. Even though the input and output values fit within your range, the result is nonsensical if you use naive fixed-point arithmetic. Floating point lets you write down the mathematical formula directly; with fixed-point arithmetic you cannot.
Looking only at the maximum and minimum resolution you need is a huge fallacy when working with fixed-point arithmetic, and one big reason why everyone avoids it. You need to carefully analyze your entire algorithm to use it.
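A minimal sketch of that failure mode, assuming a hypothetical 32-bit signed fixed-point format with 1 mm resolution (the exact format is made up; the mechanism is not):

```python
import math

def wrap_i32(x):
    """Simulate two's-complement wraparound of a 32-bit signed integer."""
    return (x + 2**31) % 2**32 - 2**31

# 50 km = 50_000_000 units of 1 mm: inputs and the ~70.7 km result
# both fit comfortably in int32, but the intermediate squares do not.
dx = 50_000_000
dy = 50_000_000

sq_sum = wrap_i32(wrap_i32(dx * dx) + wrap_i32(dy * dy))
naive = math.isqrt(sq_sum) if sq_sum >= 0 else None  # garbage result

exact = math.isqrt(dx * dx + dy * dy)  # Python's big ints: the true answer
print(naive, exact)  # naive is a few tens of metres; exact is 70_710_678 mm
```

The inputs and the output are in range; only the intermediate `dx * dx` overflows, which is exactly the step a "min/max of my data" analysis misses.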
>The impression I got from the commenter I was originally replying to was "fixed point is absolutely terrible and there is never a reason to prefer it over floats".
My position is that sometimes there might be a situation where fixed-point arithmetic could be useful. If you are willing to put a significant amount of time and effort into analyzing your system and dataflow, it can be implemented successfully.
It is a far more complex and error-prone system, and if you aren't careful it will bite you. In all cases floating point should be the default option, deviated from only when there are very good reasons.
> 2. Personally, though, I find the topic of numerical stability to be a little bit depressing, since it focuses on all the ways computers don't work!
Maybe a way to more positively reformulate this would be: There is no a priori reason to assume that floating point numbers are well behaved. The fact that we were able to come up with a structure so that it approximates real numbers adequately, that arithmetic operations on it are fast (which they aren't for infinite precision) and that, if we design the algorithms correctly, errors are well-behaved, is an astonishing feat of engineering.
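As one concrete instance of "design the algorithms correctly": compensated (Kahan) summation keeps the rounding error of a long sum near one ulp, where naive left-to-right summation lets it grow. A minimal sketch:

```python
def kahan_sum(xs):
    """Compensated summation: carry the rounding error forward explicitly."""
    total = 0.0
    carry = 0.0                    # running compensation for lost low bits
    for x in xs:
        y = x - carry
        t = total + y
        carry = (t - total) - y    # recovers what the addition rounded off
        total = t
    return total

xs = [0.1] * 1_000_000
expected = 100000.0
naive = sum(xs)
print(abs(naive - expected), abs(kahan_sum(xs) - expected))
```

The naive sum drifts by a visible amount over a million additions; the compensated sum stays within a rounding error of the true value.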
> Fixed point numbers cause an enormous programmer overhead and do not fix the problems.
How so? Is that because of inherent problems with the outcomes of fixed-point arithmetic, or are they just clumsy to use and no-one's written a decent library that makes dealing with fixed-point numbers straightforward?
> you need to be extremely careful about overflows/underflows. [...] you need to ensure that every intermediate result of your operation gives an in range result.
How is that any different from ordinary integer operations/arithmetic with ints/longs/etc...?
> Also, fixed point arithmetic does not fix floats.
> I'd wager that for the majority of programs written, the fast math is as good as the accurate math.
I'd take that wager. I spent 13 years working on video games and video game technology, a domain where floating point performance is critical, and by and large we never used fast-math because of the problems it created for us.
> Arbitrary precision rationals increase in storage and processing time each time you multiply them
Only if you store the arbitrary-precision number. Last I checked, you can't actually have a tenth of a cent, so it has to be rounded at some point, like when you store it. I was advocating that calculation be done with arbitrary-precision numbers, which do not have this problem.
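A sketch of that workflow with Python's `fractions` module (the price and tax rate are made up for illustration): intermediates stay exact, and rounding happens exactly once, at the storage boundary.

```python
import math
from fractions import Fraction

price = Fraction("19.99")    # exact decimal literal
rate = Fraction("0.0725")    # 7.25% tax, exact

total = price * (1 + rate)   # exact intermediate: no accumulated drift
print(total)                 # Fraction(857571, 40000), i.e. 21.4392750

# Round half-up to whole cents only when storing.
cents = math.floor(total * 100 + Fraction(1, 2))
print(cents)                 # 2144 -> store $21.44
```

Because the rational intermediates are discarded after rounding, their size never grows across transactions; only the stored cent values persist.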
> Fixed-point arithmetic
This is a decent solution as long as you have enough precision and a strategy for detecting lost cents.
> Floating point arithmetic
Which is non-deterministic depending on optimization level and hardware. Wait for the bank to switch to a new cluster and watch all of their numbers change, for the lols. No wonder banks don't know how much money they have.
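The underlying effect is easy to demonstrate: float addition is not associative, which is exactly the latitude that reassociating optimizers (e.g. under `-ffast-math`) or different SIMD widths on new hardware exploit.

```python
# Reassociating a sum changes the result in floating point.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one evaluation order
right = a + (b + c)   # the "mathematically identical" other order

print(left == right)  # False
print(left, right)
```

Neither result is "wrong"; each is correctly rounded for its evaluation order. The problem is that a compiler or cluster change can silently pick a different order.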
> I want to add something here: great mathematicians compute too.
I totally agree.
To me, there are 2 kinds of algorithms that one should get used to: 1. computing fast enough (that doesn't mean super fast, but sometimes it means hours instead of months), and 2. spotting errors, both evident and subtle.
This said, I'm certainly not in the set of great mathematicians.
Where is fixed point arithmetic resurging in HPC? Serious question; I haven't heard about this, but if it's happening in places I haven't heard about, I would be interested in looking at them.
> This is rather ugly for it is not closed by inversion.
Sure, which is why Scheme is better than Raku here.
> this view is rather subjective, and only valid if you find rational arithmetic more natural than floating point
No, it is an objective fact that arbitrary decimal literals can be represented precisely, and precise operations performed on those representations, in a rational representation but not in binary floating point. Binary floating point trades that precise representation of expressed values, and the capability of precise operations on them, for space and performance optimizations.
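The concrete claim, stated with Python's `Fraction` versus binary floats:

```python
from fractions import Fraction

# The decimal literal "0.1" is exactly 1/10 as a rational; the binary
# float 0.1 is only the nearest representable binary value.
print(Fraction("0.1") == Fraction(1, 10))   # True
print(Fraction(0.1) == Fraction(1, 10))     # False: converts the float bit-exactly

# Precise operations on the precise representations:
print(Fraction("0.1") + Fraction("0.2") == Fraction("0.3"))  # True
print(0.1 + 0.2 == 0.3)                                      # False
```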
> Because the calculation requires iteration (i.e., the answer cannot be found simply by plugging in the known quantities), I had thought this was impractical.
Since, you know, computers are famously bad at iterating. (Hint: they are not.)
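For a sense of scale, a hypothetical Newton iteration (solving x² = 2 here, not the commenter's actual equation) converges to full double precision in a handful of loop trips:

```python
def newton_sqrt(a, tol=1e-12):
    """Solve x*x = a by Newton's method; returns (root, iteration count)."""
    x = float(a)                 # crude initial guess
    steps = 0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)    # Newton update for f(x) = x^2 - a
        steps += 1
    return x, steps

root, steps = newton_sqrt(2.0)
print(root, steps)               # ~1.41421356..., in only a few iterations
```

Quadratic convergence roughly doubles the number of correct digits per step, which is why "requires iteration" rarely translates into meaningful runtime.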
>You have already got floating point inaccuracy on the input,
Not if they come in as integers that have to be coerced at some point.
>What is the library supposed to do?
Compare ad to bc instead of ad - bc to zero, for one.
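A sketch of that suggestion for a 2×2 near-singularity test (the epsilon values are made up): comparing `a*d` to `b*c` relative to their magnitudes avoids the scale-blindness of testing `a*d - b*c` against an absolute zero.

```python
import math

# Matrix [[a, b], [c, d]]: is it (nearly) singular?
def looks_singular_absolute(a, b, c, d, eps=1e-12):
    return abs(a * d - b * c) < eps          # scale-blind absolute test

def looks_singular_relative(a, b, c, d, rel=1e-9):
    return math.isclose(a * d, b * c, rel_tol=rel)

# A perfectly invertible matrix, just at a small scale:
a, b, c, d = 1e-8, 0.0, 0.0, 1e-8            # det = 1e-16, nowhere near singular
print(looks_singular_absolute(a, b, c, d))   # True  (wrong)
print(looks_singular_relative(a, b, c, d))   # False (right)
```

The absolute test fails in the other direction too, at large scales, where the rounding error of the products alone can exceed any fixed epsilon.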
>This is a great example of the fallacy of believing that a library automatically solves arbitrary issues.
I didn’t say that and don’t believe it. My point only depends on the library having avoided more domain-specific rookie mistakes than I would have caught in five minutes, when it comes to computational linear algebra. Like the sibling commenter, I don’t think libraries are a panacea, and the answer depends on the specifics of the case.
> What are the applications where someone would require floating point math AND both extreme speed and extreme accuracy?
Scientific computation.
I'm not even sure it is a trade-off between (computational) speed and accuracy.
In my experience, error-analysis was a mandatory course for physics, but not in robotics.
The error analysis was manual and time-consuming.
In robotics, no one cared why it occasionally produced NaNs, etc.; people just added a small number here and there.
That's faster to program.
To put it more positively, I think the view here is more about having an application that is robust in the face of errors / edge cases. What counts is the output (action).
While in science the correct model is part of the output. Edge cases are an important part of it.
>But numerical analysts really require being able to specify the semantics of calculations precisely in order to write libraries that mere mortals can use to get the approximate right answer.
>A quick perusal of any numerical analysis textbook should convince you that order of evaluation, order of truncation, overflow modes, and the like are very relevant for being able to write a library that has the right order of error.
I see your point. I've done more than peruse numerical analysis textbooks, and as an old-school CPU logic designer, I've forgotten more about implementing floating point than most programmers know to start with.
But your point is not in conflict with mine. Numerical analysis requires precise operation ordering. Most people not only don't care, they don't have sufficient understanding of the issues that you could even convince them to care.
> I did acknowledge that floating point numbers are faster
Yes you did. And then you said it’s their only advantage. It’s not the only one.
For example, in some environments, dynamic memory is undesirable. Especially since you mentioned physical sensors: many real-life sensors are not directly connected to PCs but instead are handled by MCU chips.
> I'm not talking about symbolic computation.
The numbers you’re talking about are pretty close to symbolics in many practical respects. From the Wikipedia article “Computable number”:
equivalent definitions can be given using µ-recursive functions, Turing machines, or λ-calculus
>a bad choice for most scientific computation
This is weird. Scientific computation is at most a niche.
Financial and real-world decimal calculations are the norm.
That is why a lot of people could be served better by decimals or fixed point.