0.1 and 0.2 Returns 0.30000000000000004 (2018) (qntm.org)
58 points by tingabing | 2021-04-11 19:12:20+00:00 | 161 comments




Floating point considered harmful

Edit: this is not a blanket statement. It was meant in context.


Misunderstanding floating point is more harmful than floating point itself.

Hence why it's harmful.

Floating point arithmetic powers everything from computer graphics to weather simulations. Seems rather silly to dismiss it as an anti-pattern.

Floating point is fine. Non-integers are inherently tricky to represent, especially when you have to pack it into 32 bits. You could maybe quibble with some of the decisions around NaN and denormals and things like that, but mostly IEEE-754 got it right. There's a reason it's been the standard for three and a half decades now, and it's served the computer industry very well.

Incidentally: the fact that 0.1+0.2 does not equal 0.3 is not something you could quibble with, that is absolutely reasonable for a floating point standard. It would be insanity to base it off of base 10 instead of base 2.


Of course it shouldn't be base 10. It should be base 60. That has more prime factors and wastes fewer bits than BCD does.

Or base 256 so that it fits a byte efficiently. Oh... wait...

I think one of the problems today is that floating point remains the default even for scripting languages like python. I'd wager that the huge majority of users of floating point arithmetic in their programs actually want correct math rather than efficient operations. This feels a bit like Random vs SecureRandom. So many people use the default and then accidentally shoot themselves in the foot. IMO, for scripting languages we should have infinite precision math by default and then be able to use floating point if you really care about speed.
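
For what it's worth, Python already ships the exact alternatives in its standard library; a minimal sketch of what "exact by default, float as an opt-in" could look like:

    from fractions import Fraction
    from decimal import Decimal

    print(0.1 + 0.2)                          # 0.30000000000000004 -- binary float, today's default
    print(Fraction(1, 10) + Fraction(2, 10))  # 3/10 -- exact rational, currently opt-in
    print(Decimal('0.1') + Decimal('0.2'))    # 0.3  -- decimal arithmetic, also opt-in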

What precise number representation do you want that's free of such "surprises"?

There exists no representation without "surprises" and it is easy to show why that must be.

The closest you can get is some systems like Matlab/Octave that specialize in working with numbers.


This is not a trivial decision because rational numbers can have an unbounded denominator and it can cause a serious performance problem. Python explicitly ruled rational numbers out due to this issue [1]. There are multiple possible answers:

- Keep rational numbers; you also have an unbounded integer so that's to be expected. (Well, unbounded integers are visible, while unbounded denominators are mostly invisible. For example it is very rare to explicitly test denominators.)

- Use decimal types with precision controlled at run time, like Python's `decimal` module. (That's still inexact, and we also need some heavy language and/or runtime support for varying precision.)

- Use interval arithmetic with ordinary floating point numbers. (This would be okay only if every user knows the pitfalls of IA: for example, comparison no longer returns just true or false but also "unsure". There may be some clever language constructs that can make this doable though.)

- You don't have real numbers, just integers. (I think this is actually okay for surprisingly large use cases, but not all.)

[1] https://python-history.blogspot.com/2009/02/early-language-d... (search for "anecdote")


Sure, every solution has problems. If there was an obvious alternative with little downside I think we'd have adopted it decades ago.

I just suspect that alternatives bite people less frequently than floating point does.


"People repeating stuff without understanding it considered harmful."

Floating point is extremely useful. Too bad so many people have no idea how and when to use it. Including some people that design programming languages.

Please, tell me, mister, how would you perform complex numerical calculations efficiently?

I guess we should just forget about drones and a bunch of other stuff because 90% of developers have no clue how to use FP?


> Please, tell me, mister, how would you perform complex numerical calculations efficiently?

If your calculation turned out to be incorrect it doesn't matter if it's efficient. Correct FP calculation requires error analysis, which is a concrete definition of "how to use it". If you mostly use packaged routines like LAPACK, then you don't exactly need FP; you need routines that internally use FP.


> Correct FP calculation requires error analysis

No, it does not. Please don't make it seem harder than it needs to be.

In 99% of applications, if you don't do anything stupid you are completely fine.

If you care about precision so much that the last digit makes a difference to you, you are probably one of very few cases. I remember somebody giving an example with the circumference of the solar system, showing that the uncertainty in the value of Pi available as FP causes a couple of centimeters of error at the orbit of Pluto, or something like that.

(Edit: found it: https://kottke.org/16/03/how-many-digits-of-pi-does-nasa-use)

Most of the time floating point input already comes with its own error; you are just adding calculation error to the input uncertainty. But the calculation error is so much smaller that in most cases it is safe to ignore it.

For example, if you program a drone, you have readings from the IMU which have nowhere near the precision of the double or even the float you will be working with.

There are also various techniques of ordering the operations to minimize the resulting error. If you are aware of which kinds of operations in which situations can cause a huge resulting error, it is usually very easy to avoid.

The only really special case is if you try to subtract two separately calculated values that match almost exactly. This should be avoided.


> In 99% of applications, if you don't do anything stupid you are completely fine.

This is a common misconception. Or "anything stupid" is broader than you think.

The main issue with FP arithmetic is that effectively every calculation incurs an error. You seem to be aware of catastrophic cancellation, but it is problematic not because of the cancellation itself but because the error is proportional to the input, which can vary wildly after the cancellation. Non-catastrophic cancellation can still cause big errors in the long run over a large range of magnitudes. A long-running process thus requires some guard against FP errors, even if it's just a reality check (like, every object in the game should be in a skybox).
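
The textbook illustration of that kind of cancellation, as a small Python sketch:

    import math

    x = 1e-8
    print(1 - math.cos(x))           # 0.0 -- cos(x) rounds to 1.0 and the subtraction wipes everything out
    print(2 * math.sin(x / 2) ** 2)  # ~5e-17 -- the mathematically identical but stable formulation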

Alternatively, you do not run a long-running process. I don't think you fly a drone for days? Probably you won't fly a drone that long even without FP, but using FP does prohibit many things and thus affects your decisions. For example, game simulations tend to avoid FP because it can accumulate errors and it is also hard to do FP calculations in a repeatable way (in typical environments). If I chose to use FP for simulations, I would effectively give up running the simulation on both servers and clients---that kind of thing.


I work on financial risk calculations during day and embedded control devices after my official work hours.

My controller, which runs a moving horizon estimator to simulate the thermal system of an espresso machine and Kalman filters to correct model parameters against measurements, runs 50 times a second and works fine for days, thank you, using floats on an STM32.

I have written a huge amount of embedded software over the past 20 years and I have yet to do any error analysis focused on actual floating point inaccuracy. I never saw reduced performance in any of my devices due to FP inaccuracy. That's because I know what kinds of things can cause those artifacts and I just don't do that stupid shit.

What you describe are just no-nos you should not be doing in software.

There exists a huge amount of software that works for years at a time, heavy in FP math, and it works completely fine.

If drones were not limited in flight time they would also work fine, because what you describe are just stupid errors and not something inevitable when using FP math.

Game simulations don't use FP mostly because it is expensive. Fixed point is still faster, and you can approximate the results of most calculations or use lookup tables to get a result that is just fine.

Some games (notably Quake 1, whose source code I worked on after it became available) do require that the simulation results between client and server stay in lockstep, but this is in no way a necessity. Requiring lockstep between client and server causes lag.

What newer games do is allow the client and server to run simulations independently even when they don't agree exactly, and then reconcile the results from time to time.

If you ever saw your game partner "stutter" in flight, that was exactly what happened -- the client and the server coming to an agreement on what is actual reality and moving stuff into its right place.

In that light, there is absolutely no problem if, after a couple of frames, the object differs from its official position by a tenth of a pixel; it is going to get corrected soon.

Go do your "error analysis" if you feel so compelled, but don't be telling people this is necessary or the software isn't going to work. Because that's just stupid. Maybe NASA does this. You don't need to.


Maybe I should have directly mentioned that "some guard against FP errors" includes a generic guard against errors. If your system processes messy data then you probably want some guards anyway and FP errors are just one (relatively small) class of errors. In fact I would be more worried if embedded devices don't have such guards than if they do faulty FP calculations.

There are also many classes of software where FP inaccuracy is simply not a problem---such as 3D rendering, where the worst FP error is probably z-fighting. Not every application of FP requires "correct" calculation.

> What newer games do is allow the client and server to run simulations independently even when they don't agree exactly, and then reconcile the results from time to time.

That's not what "newer" games do, it's what games that can withstand the trade-off do. In particular if the outcome of a particular action is crucial to players you can't reconcile that (projectiles are canonical examples); once the player observed the outcome it's done. Reconcilation thus typically happens in the animation level, so that it can hide the delay until the outcome is observed. This does not actually require the simulation---an simple interpolation is frequent AFAIK---so your point is irrelevant.

There are several genres of games where lockstep simulation remains pretty much the only way, and thus FP is discouraged. Not because FP is expensive, which has been false for quite a long time anyway (no game developer uses the "fast" inverse square root algorithm any more).


Floating point works just fine for lockstep simulation!

What you need to avoid using is special functions that can vary based on your CPU model. Once you do that, integer lockstep and floating point lockstep are of similar difficulty.


> if you don't do anything stupid you are completely fine

In this case, "doing anything stupid" includes equality checks. This is the sort of footgun that a huge number of people are going to run into.


What makes you think that everyone around is concerned with efficiency? With FP as the only syntactically sound number system in a language, you basically lose the == operator, as well as the ability to check if your job is finished. Or you use ints and carry fixed points around, which is viable only in a few types of languages with operator overloading, like C++. Edit: even when using FP you have to carry initial precisions around, like in:

  var n = get_n() // valid to .5g
  n = transform(n)
  ...
  <input value={n.toFixed(5)}> // 5 carried over here
You can’t even infer the precision from an FP number alone, especially if it is close to log10(2^53) ≈ 16 digits. /edit

In a proper-numbers lang, if someone needed FP numbers, they could just write 0.1f. Otherwise 0.1 would mean just that, and counting by 0.1+rand(100) from 1000000 to 0 would not make you scratch your head at the end of the loop and worry whether the remainder is just an FP error or an algorithmic error which must be fixed.

90% of developers who know how to use FP still hate it in non-FP tasks, because there is no 0.1nobs literal, how about that.


Fixed point doesn't do any better than floating point at holding infinite fractions. It won't solve this problem at all.

(While decimal fixed point does solve this particular problem, the part that solves it is the decimal, not the fixed point. Decimal floating point is equally capable of adding 0.1 to 0.2)
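
A quick sketch with Python's decimal module (a decimal floating-point type) to make the distinction concrete:

    from decimal import Decimal

    print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True: 0.1 is exact in decimal
    print(Decimal(1) / Decimal(3))  # 0.333... up to the context precision: 1/3 still isn't exact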


Not so much floating point as “using float point type for exact decimal literals”.

Fair...

Not at all. For all its faults, floating point is incredibly fast. It's not some convenience hack that we lazy programmers have come up with, it's an incredibly quick way to do numerical computation. It will always have its place (sometimes even in finance to represent money, to many people's shock).

To add to that, in modern processors FP calculation is faster than integer calculation, both in terms of latency and throughput (as long as you don't hit subnormal numbers). This is very unintuitive and mostly due to disproportionate demand.


God I love that the Internet does things like this. Thanks, this put a smile on my face today.

Their summary of MySQL 5.6 (https://0.30000000000000004.com/#mysql) isn't telling the whole story.

"SELECT .1 + .2;" does return 0.3

However,

  CREATE TABLE t1 (f FLOAT);
  INSERT INTO t1 VALUES(0.1),(0.2);
  SELECT SUM(f) FROM t1;
  // returns 0.30000000447034836
Which feels odd to me.

http://sqlfiddle.com/#!9/2e75e/3


You're at 32-bit precision there.
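
A quick way to see where that number comes from, emulating the 32-bit rounding in Python with struct:

    import struct

    def to_float32(x):
        # round a double to the nearest 32-bit float and widen it back
        return struct.unpack('f', struct.pack('f', x))[0]

    a, b = to_float32(0.1), to_float32(0.2)
    print(a + b)  # 0.30000000447034836 -- the single-precision column values, summed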

Not in Raku it doesn't!

    > 1.1 + 2.2
    3.3
    > 1.1 + 2.2 == 3.3
    True
EDIT: to be clear: this is not because Raku is magic, it's because Raku defaults to a rational number type for decimal literals, which is arguably a much better choice for a language like Raku.

Ahem, this is about 0.1 + 0.2. I think Raku also uses IEEE 754 double precision floating point numbers for the Num type, no?

Edit: it seems that Raku uses rationals as a default [1], so it doesn't suffer from the same problem by default.

[1]: https://0.30000000000000004.com/#raku


Oh, missed that, it's usually 1.1 + 2.2 in these kinds of discussions.

Yeah, exactly, Raku defaults to a rational number type for these kinds of numbers. I honestly think that is a perfectly fine way to do it, you're not using Raku for high performance stuff anyway. It's not so different from how Python will start to use arbitrarily sized integers if it feels it needs to.

Raku by default will convert it to a float if the denominator gets larger than a 64-bit int, but there's actually a current pull request active that lets you customize that behavior to always keep it as a Rat.

Really interesting language, Raku!


The same goes in Common Lisp, but for very different reasons:

  * (= (+ 0.1 0.2) 0.3)
  T
In Common Lisp there is no epsilon involved: `=` compares numbers exactly. The reason this returns T is that 0.1 and 0.2 are read as single-floats by default, and in single precision the sum happens to round to exactly the same value as 0.3. (single-float-epsilon exists as a constant, but `=` does not use it.)

Meanwhile, in Rakudo, 0.1 is a Rat: a rational number whose numerator and denominator are stored exactly.

You can actually get the same underlying behavior in Common Lisp:

  (= (+ 1/10 2/10) 3/10)
Sadly, not many recent languages have defaults as nice as those. Another example is Julia:

  julia> 1//10 + 2//10 == 3//10
  true
IMO, numerical computations should be correct by default, and fast as an opt-in.

Because that syntax in Raku uses a rational type, which fails for many other uses, and because it uses the syntax most languages use for a floating-point type, it makes these issues harder to spot, just like here. For example,

    0.1e0 + 0.2e0 
yields 0.30000000000000004. Your example also fails

    1.1e0 + 2.2e0 == 3.3e0
returns false.

I mean, yeah, if you force the numbers to be floats, then of course it's going to fail. I personally think Raku's way of defaulting to rationals is the better way to go for a scripting language like this, and I disagree that "it fails for many other uses". It works just fine (like, it doesn't break if you pass it to sqrt() or whatever), it's just less performant. It's the exact same kind of tradeoff that Python's implicit promotion to big integers makes.

>I disagree that "it fails for many other uses"

Having developed a significant number of perfect-precision math libs over the years, I can say rationals do explode for lots of common computations. That is the main reason they are not standard in all computing. They also cannot represent a lot of desired results.

The problem is rational number performance slows exponentially (or uses large amounts of RAM) for many common uses, which will kill scripts, unless they suddenly fix precision (i.e., no longer exact) or change to float (also a surprise for people).

Setting them as floats has the odd numerical issue, which is not that bad, but doesn't require a host of other mitigations to prevent other bad surprises.

For example, summing 1/n^2 for n=1 to 100000 as floats runs quickly and is very close to the exact answer. As rationals the numerator and denominator require working with 86000 digit numbers.
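
A rough sketch of that blowup with Python's Fraction (smaller n so it runs quickly; the digit count keeps growing from there):

    from fractions import Fraction

    exact = sum(Fraction(1, n * n) for n in range(1, 1001))
    approx = sum(1.0 / (n * n) for n in range(1, 1001))
    print(len(str(exact.denominator)))  # the reduced denominator already has hundreds of digits
    print(abs(float(exact) - approx))   # while the plain float sum is off by well under 1e-12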

Also, does raku still do this?

say (1/6+1/6).raku #<1/3>

say (1/10+1/10).raku #0.2

That seems surprising, does it not?


say (1/10+1/10).WHAT; say (1/6+1/6).WHAT;

both are Rats, they just get stringified differently


They stringify differently because 2/10 can be expressed exactly in NNN.MMM format, whereas 1/3 can not.

Yes, that’s precisely what I am trying to say. With FatRat (aka BigInt) by default, the casual programmer has to know to step explicitly into Num (e.g. by 1e-1 + 2e-1, which needs a bit of a scientist mindset) or will never benefit from all those lovely FPU transistors.

[2018]

I've seen a lot of stuff on getting the shortest representation that is equal to the floating point value back but what about finding the minimum/maximum representation that is equal to a given value?

That's a rather easier problem in comparison. Just use the nextafter function in the standard library to figure out the next representable number. Then try not to exceed half of the difference using string processing.
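
In Python 3.9+ that's math.nextafter; a quick look at the neighbours of 0.1 + 0.2:

    import math

    x = 0.1 + 0.2
    print(x)                             # 0.30000000000000004
    print(math.nextafter(x, -math.inf))  # 0.3 -- the adjacent representable double below
    print(math.nextafter(x, math.inf))   # 0.3000000000000001 -- the adjacent double above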

Ah "nextafter" is indeed what I was looking for it just isn't in the JS standard library or Python version I use. Google has plenty examples of the function once you know what it's called though.

Complexity-wise that actually seems to give an equally simple "shortest answer" method - nextafter up and down, use text processing to find the first digit that changes, see if it can be zero, if not choose the lowest value it can be and increment by one, remove the rest of the string accordingly, and right-trim any 0s from the result.


This is pretty much the same problem as asking what 1/3 + 1/3 equals in decimal. You are specifying fractions that don’t have an exact finite representation in that base (base 10 with 1/3, base 2 with 0.1, i.e. 1/10).

With proper rounding and I/O these are not generally an issue.


This just has to do with printing.

  This is the TXR Lisp interactive listener of TXR 256.
  Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
  TXR works even if the application surface is not free of dirt and grease.
  1> (+ 0.1 0.2)
  0.3
OK, so then:

  2> (set *print-flo-precision* 17)
  17
  3> (+ 0.1 0.2)
  0.30000000000000004
But:

  4> 0.1
  0.10000000000000001
  5> 0.2
  0.20000000000000001
  6> 0.3
  0.29999999999999999

I.e. 0.1 isn't exactly 0.1 and 0.2 isn't exactly 0.2 in the first place! The misleading action is to compare the input notation of 0.1 and 0.2 to the printed output of the sum, rather than consistently compare nothing but values printed using the same precision.

The IEEE double format can store 15 decimal digits of precision such that all those decimal digits are recoverable. If we print values to no more than 15 digits, then things look "artificially clean" for situations like (+ 0.1 0.2).

I made *print-flo-precision* have an initial value of 15 for this reason.

The 64 bit double gives us 0.1, 0.2 and 0.3 to 15 digits of precision. If we round at that many digits, we don't see the trailing junk of representational error.

Unfortunately, to 15 digits of precision, the data type gives us two different 0.3's: the 0.299999... one and the 0.3.....04 one. Thus:

  7> (= (+ 0.1 0.2) 0.3)
  nil
That's the real kicker; not so much the printing. This representational issue bites you regardless of what precision you print with and is the reason why there are situations in which you cannot compare floating-point values exactly.

> The misleading action is to compare the input notation of 0.1 and 0.2 to the printed output of the sum, rather than consistently compare nothing but values printed using the same precision.

I think the problem is the act of caring for the least significant bits.

If you care about the least significant bits of a floating point number, it means you are doing something wrong. FP numbers should be treated as approximations.

More specifically, the problem above is assuming that floating point addition is associative to the point of giving you results that you can compare. In floating point order of operations matters for the least significant bits.

FP operations should be treated as incurring inherent error on each operation.

IEEE standard is there to make it easier to do repeatable calculations (for example be able to find regression in your code, compare against another implementation) and for you to be able to reason about the magnitude of the error.


Problem is, most people think that 0.3 being an approximation refers to the fact that it's a one-significant-figure measurement of some sort, like 0.3V on a multimeter, not that it's inherently an approximation.

Pencil-and-paper floating-point numbers like 1.23 x 10^5 are approximations of measurements (if we are doing science or engineering), but are inherently exact. Calculators bear that out, because calculators use base 10 floating-point, like pencil-and-paper calculations.

0.3 being inexact is only an artifact of the floating-point system being in a different base. No matter how many digits we throw at it, we cannot represent 0.3 in binary floating point. Not 64 bits, not 1024 bits, not 65535 bits.

If we use binary notation for floating-point numbers, they likewise become exact, in terms of representation. The inexactness we deal with then is the familiar type that we know from pencil-and-paper calculations: truncation to a certain number of digits after performing an operation like addition or multiplication.

But that truncation will not happen in a calculation in which both input operands are exactly represented, and the result is also exactly representable!!!

If base ten were used, 0.1 + 0.2 would be 0.3, exactly.

If we use power-of-two values, and combinations thereof, we don't have the problem:

  1> (= 0.625 (+ 0.125 0.5))
  t
No problem.

  2> (set *print-flo-precision* 17)
  17
  3> 0.625
  0.625
  4> 0.5
  0.5
  5> 0.125
  0.125
  6> (+ 0.125 0.5)
  0.625
No junk digits.

  7> (= 0.25 (sqrt 0.0625))
  t
Wee ...

Indeed, and therefore:

  0.1 + 0.2 != 0.3
You can check it in the JavaScript console.

This actually makes me wonder if anyone's ever attempted a floating-point representation that builds in an error range, and correctly propagates/amplifies the error over operations.

E.g. a simple operation like "1 / 10" (to generate 0.1) would be stored not as a single floating-point value, but really as the range between the closest representation greater than and less than it. The same with "2 / 10", and then when asking if 0.1 + 0.2 == 0.3, it would find an overlap in ranges between the left-hand and right-hand sides and return true. Every floating-point operation would then take and return these ranges.

Then floating point arithmetic could be used to actually reliably test equality without ever generating false negatives. And if you examined the result of calculation of 10,000 operations, you'd also be able to get a sense of how off it might maximally be.

I've searched online and can't find anything like it, though maybe I'm missing an important keyword.
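
For a flavour of the idea, a very minimal sketch (Python 3.9+ for math.nextafter; real libraries use directed rounding rather than this crude one-ulp widening):

    import math
    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            # widen each endpoint outward by one ulp so the true sum stays enclosed
            return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                            math.nextafter(self.hi + other.hi, math.inf))

        def overlaps(self, other):
            return self.lo <= other.hi and other.lo <= self.hi

    def interval(x):
        # enclose a decimal literal's true value between the adjacent doubles
        return Interval(math.nextafter(x, -math.inf), math.nextafter(x, math.inf))

    lhs = interval(0.1) + interval(0.2)
    print(lhs.overlaps(interval(0.3)))  # True: the enclosures overlap even though the bare floats differ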


But what you would actually get is something like this:

   x---0.1 + 0.2 ---x
      x---0.3---x
That is, the range of 0.1 + 0.2 would be wider than the range of 0.3. And now what do you do? There is overlap, so are they equal? But there are parts that don't overlap, so are they different?

Make equality checks illegal, and instead define specific operations for contains and overlaps.

Well right now you basically can't ever check for equality with floating-point arithmetic and trust that two numbers that should intuitively be equal are reported as equal.

For me, floating-point equality would be if there are any parts that overlap. Basically "=" would mean "to the extent of the floating-point accuracy of this system, these values could be equal".

If you're doing a reasonably limited number of operations with values reasonably larger than the error range, then it would meet a lot of purposes -- you can add 0.5 somewhere in your code, subtract 0.5 elsewhere, and still rely on the value being equal to the original.


That sounds interesting, but I would imagine it would become very complicated once you start applying nontrivial functions (discontinuous functions, for example). In that case the range of possible values could actually become discontinuous. I would imagine accounting for that is actually more computationally expensive than just using arbitrary precision decimals.

Yeah, you call tan() on that number, and suddenly your interval is like most of the number line. Actually, you don't even have to be that fancy: if the number is close to epsilon, the error bars on 1/x would be huge.

But isn't that a feature, rather than a bug? It prevents you from getting "false accuracy".

Yes, that's a feature. If you're using interval arithmetic and your result is an unreasonably large interval, then there's a good chance the algorithm in use is numerically unstable.

Sure, but what's the use case for mathematics where you don't know what side of an asymptote you're on?

>> if the number is close to epsilon, the error bars on 1/x would be huge.

> Sure, but what's the use case for mathematics where you don't know what side of an asymptote you're on?

Knowing which side of the asymptote you're on does not solve this problem or even ameliorate it.


That's not what I'm saying - it seems to me to be programmer error that such a question is being asked (an exception should be thrown regardless, since the result is not of a usable quality).


Interval arithmetic is what you're looking for, and there's an IEEE standard and many implementations.

Thank you!!

Yes, that turns out to be exactly it [1]. Looks like there's even at least one JavaScript library for it [2].

It seems like such a useful and intuitive idea I have to wonder why it isn't a primitive in any of the common programming languages.

[1] https://en.wikipedia.org/wiki/Interval_arithmetic

[2] https://github.com/mauriciopoppe/interval-arithmetic


Interval arithmetic certainly has its place. However, you don't find it used more often because a naive implementation results in intervals that are often uselessly huge.

Consider x in [-1, 1], and y in [-1, 1]. x*y is also in [-1,1], and x-y in [-2, 2]. But now consider actually that y=x. That's consistent, but our intervals could be smaller than what we've computed.


Sure, but wouldn't realistic intervals be more like x in [0.29999999999999996, 0.30000000000000004]?

I mean, intervals as large as whole numbers might make sense if your calculations are dealing with values in the trillions and beyond... but isn't the point of interval arithmetic to deal with the usually tiny errors that occur in FP representation?


> It seems like such a useful and intuitive idea I have to wonder why it isn't a primitive in any of the common programming languages.

It is basically useless for numerical computation when you perform iterations. Even good convergent algorithms can diverge with interval arithmetic. As you accumulate operations on the same numbers, their intervals become larger and larger, growing eventually to infinity. It has some applications, but they are quite niche.


It is true that error intervals eventually go to infinity, but the very point is to keep them small enough to be useful throughout the calculation. IA is pretty bad for that but other approaches like affine arithmetic can be much better. (It still doesn't make an approachable interface for general programming languages though.)

But if the intervals are growing to infinity, then should you be trusting your result at all?

Are there really cases where current FP arithmetic gives an accurate result, but where the error bounds of interval arithmetic would grow astronomically?

It seems like you'd have to trust FP rounding to always cancel itself out in the long run instead of potentially accumulating more and more bias with each iteration. Is that the case?

Wouldn't the "niche" case be the opposite -- that interval arithmetic is the general-purpose safe choice, while FP algorithms without it should be reserved for those which have been mathematically proven not to accumulate FP error? (And would ideally output their own bespoke, proven, interval?)


> But if the intervals are growing to infinity, then should you be trusting your result at all?

Most often, yes; the probability distribution of your number inside that interval is not uniform, it is most likely very concentrated around a specific number inside the interval, not necessarily its center. After a few million iterations, the probability of the correct number being close to the boundary of the interval is smaller than the probability of all your atoms suddenly rearranging themselves into an exact copy of Julius Caesar. According to the laws of physics, this probability is strictly larger than zero. Would you think it "unsafe" to ignore the likelihood of this event? I'm sure you wouldn't, yet it is certainly possible. Just like the correct number being near the boundaries of interval arithmetic.

Meanwhile, the computation using classical floating point, typically produces a value that is effectively very close to the exact solution.

> It seems like you'd have to trust FP rounding to always cancel itself out in the long run instead of potentially accumulating more and more bias with each iteration. Is that the case?

The whole subject of numerical analysis deals with this very problem. It is extremely well known which kinds of algorithms you can trust and which are dangerous (the so-called ill-conditioned ones).


I suggest adding an example to demonstrate this effect because I don't think it's necessarily obvious for someone who hasn't already seen it.

With large integers - no, ie:

`Math.pow(2, 55)+1 === Math.pow(2, 55)` returning true


There was a time when I thought (like you) that everybody should be using interval arithmetic, but then I came across a counterexample that convinced me I was wrong. I don't remember the precise example, but maybe the following will do the same for you.

Say x = 4.0 ± 1.0. What is x / x?

It should be x / x = 1.0 ± 0.0, but interval arithmetic will give you [3/5, 5/3].

Notice the interval is objectively wrong, as the result cannot be anything other than 1.0. Now imagine what happens if you do this a few more iterations. Your interval will diverge to (0, +∞), becoming useless.

The moral of the story (which may be more obvious in hindsight): interval arithmetic is a local operation; error analysis is a global operation. Naturally the former cannot substitute for the latter.


Your example only proves your point if every instance of x is the same x, with the same objective value. i.e., what if x/x is actually (x=5.0)/(x=3.0) ?

Then you're computing x/y and not x/x...

x can't be two values at the same time. It's one instance, not a class.

If you have "scope1.x/scope2.x" then they don't cancel out, but that's not "x/x"

And if you save a value for later, and change x, then that value isn't x anymore.

(I assume you're using = to state the value, not as an assignment operator.)


> Say x = 4.0 ± 1.0. What is x / x?

> It should be x / x = 1.0 ± 0.0, but interval arithmetic will give you [3/5, 5/3].

This is the “dependency problem”, which is eliminated in many cases, and mitigated in others, by rewriting so that identical values (vs. values that are merely equal in components) appear only once; when you might need a numerical answer at one point (if possible) but want to use the value in further computations, this can be done by storing and manipulating values symbolically and extracting numerical results as needed.
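
For instance, a symbolic layer (here SymPy, assuming it is available) collapses the x / x case before any numbers or intervals are involved:

    from sympy import symbols, simplify

    x = symbols('x')
    print(x / x)                              # 1 -- the dependency cancels symbolically
    print(simplify((x + 1)**2 - x**2 - 2*x))  # 1 -- simplify first, evaluate numerically afterwards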


Well, if it's built into the language and the compiler or interpreter knows both x's are the same variable it could optimize it away to 1.0 ± 0.0, so I don't see the problem. Or if the library works on objects passed by reference and can confirm them as the same object, it could do that too.

But if, in your code, you've copied x to y (not by reference), then it seems that x / y would correctly be [3/5, 5/3]. This is a feature, not a bug.

In any case, since intervals are more likely to be more like ± 0.000000000000001 when dealing with basic common non-iterative calculations, it doesn't seem like a problem in practice even if the compiler/interpreter doesn't optimize it away?


> and the compiler or interpreter knows both x's are the same variable it could optimize it away to 1.0 ± 0.0, so I don't see the problem

You're moving the goalposts here. Those are HUGE if's. You went from something you could trivially implement in any language to something that requires a LOT of infrastructure and will severely limit your options.

How are you going to handle (x - z) / (x + z) when z = 0? How are you going to handle f(x) / g(x) when they turn out to compute the same value in different ways?

You go from "I need to change float to interval<float>, give me 15 minutes" to "I need a computer algebra system, let me figure out if I can embed Mathematica/SymPy/Maple/etc. into program so I can do math." And even when you do that you STILL won't be able to handle cases where the symbolic engine can't simplify it for you. Which in general it won't be able to do.

> But if, in your code, you've copied x to y (not by reference), then it seems that x / y would correctly be [3/5, 5/3]. This is a feature, not a bug.

No, that is most definitely a bug. The correct result is 1.0, but you're producing [3/5, 5/3]. If that's not a bug to you then you might as well just output (-∞, +∞) everywhere and call it a day. You can insist on calling it a "feature" if it makes you feel better, but that won't change the fact that it's still just as useless (if not actively harmful) for your intended calculation as it was before.

Contrast this with just leaving it as a float instead of an interval, where you would've gotten the correct answer.

> In any case, since intervals are more likely to be more like ± 0.000000000000001, it doesn't seem like a problem in practice even if the compiler/interpreter doesn't optimize it away?

That's only after 1 iteration. Notice that in my example the error was multiplicative, not additive. Run more iterations and your error will magnify.


I wasn't moving the goalposts, I was simply responding to your specific example.

But there's nothing "wrong" about it, it's correct -- that's how it's designed to work. If you're starting with non-extreme values and just miniscule floating-point errors and iterate 1,000 times with basic arithmetic I still don't see how it's going to cause a problem. E.g.:

  Math.pow((1 + Number.EPSILON), 1000) => 1.000000000000222
The result is the same even if you multiply in a loop 1,000 times rather than call Math.pow().

If you're multiplying a million or billion times then that's where I can now understand you should be an expert in numerical analysis in the first place to have any confidence in your result, and you know whether or not your float result can be trusted at all.

But the good thing is that if you don't know what you're doing, started with interval arithmetic, and it ballooned to (-∞, +∞), then that's a strong signal you shouldn't necessarily be trusting the algorithm's results at all, and to go talk to someone with a background in numerical analysis, right?

Whereas if you iterate a million times and the interval is still miniscule compared to your values, you have absolute confidence you're fine. Seems useful to me -- not useless or actively harmful at all.


No, but I'm out of ideas for how to explain it any more clearly. If you really think this is all correct behavior and IA is going to make your life better, start using it in production and you'll learn it the hard way.

I guess I just don't see why it's worse than FP. I mean, you said:

> Contrast this with just leaving it as a float instead of an interval, where you would've gotten the correct answer.

But if I type into my console:

  var a = 0.1; var b = a + 0.2; b -= 0.2; b == a;
I get false. That's not correct. Whereas if "==" checked for overlap between intervals, it would be true. Which would be correct.

Heck, to use your own division example:

  (0.1 + 0.2 - 0.2) / 0.1 ==> 1.0000000000000002
That's not 1.0. So the FP you claim is so correct... just isn't at all. Again, while interval arithmetic would correctly detect overlap of intervals between that and 1.0.

Anyways, thanks for trying to explain.


I think you run into the opposite issue, where equality is too likely and nonequal values return as equal. There might be a sweet spot with something like

(0.1 + 0.2 - 0.2) / 0.1 ==[+/- .001] 1.0 => True


Turns out Wikipedia has a decent article on this; you might want to check it out: https://en.wikipedia.org/wiki/Interval_arithmetic#Dependency...

Ah, thanks for that, I finally understand what you were getting at. I see now what you mean about intervals growing larger than they actually need to in formulas that use the same variable more than once, that is an unfortunate drawback.

Now you've got me learning about how affine arithmetic can help with the dependency problem but has its own drawbacks. Thanks for pointing me in that direction!


> If you really think this is all correct behavior and IA is going to make your life better, start using it in production and you'll learn it the hard way.

Even if this may come out as a bit snarky, this is a very good, honest, and helpful answer. At least, it would be helpful to extremely skeptical people like me. I'm a bit like crazygringo, and I cannot be convinced of anything until I have tried it myself and dirtied my hands with it.

To crazygringo: for a concrete, real-world example, try to implement a Kalman filter (a very simple numerical algorithm) using interval arithmetic. It simply doesn't work. You'll see that you cannot extract any useful result from the output of the computation. The implementation of an "interval Kalman filtering" is a (niche) subject of current research, where various very complicated algorithms are being invented to try to reproduce the nice properties of Kalman filtering--even when using something obviously inappropriate like interval arithmetic. For an intuitive understanding, assume that the input data are quantized to integer values, so that the starting intervals are of length 1.


This feels like the central limit theorem. Accumulating a bunch of random variables is going to produce a normal distribution, and a normal distribution has an infinite interval.

Yeah, but the standard deviation goes by sqrt(n) for many operations, and there is significant autocorrelation. Interval arithmetic will give you worst-case bounds, which will quickly get fantastically pessimistic.

Numerical analysis was a field invented to give realistic error bounds.


Currently, either a programmer understands that floating point numbers are complicated beasts, or they don't, and get surprised.

With interval arithmetic, either a programmer would understand that floating point numbers are not actually numbers but intervals... or they wouldn't, and get surprised.

So I don't really see much upside. If you know that you need interval arithmetic, chances are that you're already using it.


Another way to get a feeling for the error (simpler than fully fledged interval arithmetic) is to toggle the rounding rules from the usual (towards even, or so) to up and down, and observe the change in result.

https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules


That is overthinking it horribly.

The problem is that the user wants to write 1/10 and 2/10 and 3/10 but those numbers don't really exist in the binary system.

The user gets some numbers (let's call them A, B and C) that aren't quite those values, but they fool people at first because not only does 0.1 deserialize to A, A also serializes back to 0.1. Trouble is that A + B != C; it's some other number.

Excel tries to hide it but the real answer is to keep the exponent in base 10 if you plan to read and write numbers like 137.036 or 9.1E-31. How the mantissa is stored doesn't matter, it could be base 7 for all I care -- it is just an integer.

Interval math is for much tougher problems. For example, the iteration

  k * x * (1-x)
is easily proven to have periodic orbits of arbitrarily long period, but if you are using 32-bit floats you can't have a period longer than 4 billion. That kind of qualitative difference means that there's no scientific value in iterating that function with floats, although you can do accurate grid samples with interval arithmetic.
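
A small sketch of how precision-limited that iteration is (it shows single- vs double-precision trajectories drifting apart, a related effect rather than the period-length argument above):

    import struct

    def f32(x):
        # round to the nearest 32-bit float, emulating single-precision storage
        return struct.unpack('f', struct.pack('f', x))[0]

    k = 3.9                  # a solidly chaotic parameter for the logistic map
    single = double = 0.5
    for _ in range(60):
        single = f32(k * single * (1.0 - single))
        double = k * double * (1.0 - double)
    print(single, double)    # after 60 steps the two trajectories are typically unrelated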

You cannot compare floating point numbers like that.

The equality test for floating point numbers is done by comparing against an epsilon.

    Math.abs(0.3 - (0.1 + 0.2)) < Number.EPSILON
Which is the same in other languages.

Using the epsilon for comparison is not mentioned in the article. Floating point absorption is also not mentioned in the article.

This entire discussion and the fact this is on the front page of HN is pretty disappointing and sad.

Is this really a surprise for you? if it is... have you ever implemented any logic involving currency? You may want to take another look at it.


> have you ever implemented any logic involving currency? You may want to take another look at it.

Floating point arithmetic is good enough for science, should be good enough for commerce too, no? Why is commerce special?


rounding error after multiple operations can be more than just epsilon

That's why there are algorithms to do those multiple operations.

Well, that was trivially easy to disprove with a little jiggling of numbers in the console:

  Math.abs(1.8 - (0.1 + 0.2 + 0.9 + 0.6)) < Number.EPSILON
returns false.

Also, you generally really shouldn't be implementing any currency logic using floating point numbers, yikes. Stick to integers that represent the value in cents, or tenths of cents, or similar. Or, even better, a DECIMAL data type if your platform supports it.

I genuinely hope you've never written financial software that judges if the results of two calculations are equal via the method you've described.


I never said you should use floating-point numbers for currency. I said that if you are not aware of the shortcomings of floating-point numbers (the only numeric type in JavaScript, not counting workarounds like typed arrays) you should not be working with values representing currencies until you do.

Fixed-point numbers (aka "decimal" type in Java, C# and others) are the preferred way to deal with this problem as I mentioned in this same discussion hours ago.

Now, in the example you presented, you have a round-off error amplification problem where the round-off error grows larger than the epsilon. You can avoid that using the Kahan summation algorithm.

https://en.wikipedia.org/wiki/Kahan_summation_algorithm

    const arr = [0.1, 0.2, 0.9, 0.6];
    let sum = 0.0;
    let c = 0.0;  // running compensation for the low-order bits lost so far
    for (let i = 0, l = arr.length; i < l; i++) {
        const y = arr[i] - c;  // apply the compensation to the next term
        const t = sum + y;     // big + small: low-order bits of y can be lost here
        c = (t - sum) - y;     // recover what was just lost...
        sum = t;               // ...and carry it into the next iteration
    }
Now you evaluate sum, and it's 1.8 as expected.

^ And of course a comment mentioning the correct way of doing things got downvoted.

> Then floating point arithmetic could be used to actually reliably test equality without ever generating false negatives.

The flip side is that you generate plenty of false positives once your error ranges get large enough. This happens pretty readily if you e.g. perform iterations that are supposed to keep the numbers at roughly the same scale.


Does this mean I could write a calculator in JavaScript which is more accurate than the language but not as fast?

For example: just treat numbers as strings and write code that adds the digits one by one and does the right carries

Now that I think about it, is this the whole point of the Java BigDecimal class?


Yes, and it would be inexcusable malpractice to implement a calculator using the native floating-point type.

lol, malpractice. What if you want a calculator that specifically uses native floating point math, like to aid in programming, or just playing with the datatype?

It's been done, there are libraries out there.

You can likely do better than this with rational numbers, by working with an integer numerator and denominator; you'll still have to make compromises for irrational numbers.

In python, 0.3 prints as 0.3, but it's a double, so it should be 0.299999999999999988897769753748434595763683319091796875 (according to the article, and the 0.1+0.2 != 0.3 trick also works)

What controls this rounding?

e.g., in an interactive python prompt i get:

    >>> b = 0.299999999999999988897769753748434595763683319091796875 
    >>> b
    0.3

It depends on how many decimal places you are printing

    >>> f'{b:.54f}'
    '0.299999999999999988897769753748434595763683319091796875'
    >>> f'{b:.16g}'
    '0.3'
    >>> f'{b:.17g}'
    '0.29999999999999999'

It is the shortest decimal number that converts back to that exact FP number. There are tons of complex algorithms for that [1].

[1] See my past comment for the overview: https://news.ycombinator.com/item?id=26054079
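
A quick way to see the round-trip property in Python:

    b = 0.3
    print(repr(b))              # 0.3 -- the shortest decimal that maps back to this exact double
    print(format(b, '.17g'))    # 0.29999999999999999 -- more digits, same double underneath
    print(float(repr(b)) == b)  # True: the short form round-trips exactly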


Isn't that essentially lying to the user?

0.299999999999999988897769753748434595763683319091796875 suggests 54 fractional digits of precision, which is misleading.

0.29999999999999998 or 0.29999999999999999 are less misleading but wasteful. Remember they are not only visible to users but can be serialized in decimal representations (thanks to, e.g. JSON).

In fact, most uses of FP are essentially lies, providing an illusion of real numbers with a limited storage. It is just a matter of choosing which lie to keep.


Golly. Surprised by floating point arithmetic?

1.99999999.... == 2.0

There are limits to computer representation of floating point numbers. Computers are finite state, floating point numbers are not.

sigh


You mean "real numbers".

Floating point numbers are one way of approximating real numbers on computers.


Yes.

> Computers are finite state, floating point numbers are not.

No, floating point numbers are finite state. That’s the whole point behind this discussion. There are only so many possible floating point numbers representable in so many bits.

I never understand this confusion - you have finite memory - with this you can only represent a finite set of real numbers. So of course all the real numbers can’t be mapped directly.


I understand the confusion. It occurs when people haven't fully grokked that floating point numbers generally use binary representation, and that the set of numbers that can be represented with a finite number of decimal digits is distinct from the set of numbers that can be represented with a finite number of binary digits. People generally know that they can't write down the decimal value of 1÷3 exactly - they just haven't considered that for the same reason you can't write down the binary value of 1÷10 exactly either.

This confusion is also helped along by the fact that the input and output of such numbers is generally still done in decimal, often rounded, that both decimal and binary can exactly represent the integers with a finite number of digits, and that the set of numbers exactly representable with in a finite decimal expansion is a superset of those exactly representable in a finite binary expansion (since 2 is a factor of 10).
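
You can see the exact stored value directly, e.g. in Python:

    from decimal import Decimal
    from fractions import Fraction

    print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
    print(Fraction(0.1))  # 3602879701896397/36028797018963968 -- the denominator is 2**55, not 10
    print(Fraction(0.1) == Fraction(1, 10))  # False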


Sigh

The representation means that 2.9999999999 = 3.00000000

I am not confused. You cannot use equality in floating point. Why is anybody surprised by that? That is my point

(I did say floating point before when I meant real. A bit confused!)


> You cannot use equality in floating point.

Sure you can. 1.0 + 2.0 == 3.0. There's a lot of gotchas, but it is possible to make useful comparisons.


No it does not; no, it is not possible to use equality comparisons usefully in FP arithmetic.

If that were true, they wouldn't have defined the equality operators for floating point types. Some numbers are exactly representable and some numbers are not [1]. It's difficult, but you can in fact reason about the exact results of floating point calculations. https://news.ycombinator.com/item?id=1847462

Don't sigh melodramatically at me.

I wasn't saying you were confused, I was saying I believe I understand how the general confusion around the whole issue arises.

(If decimal floating point had been commonly used instead of binary, the same class of issues would still exist, but I don't think people would be nearly so surprised by them).


In R, there are functions for practical equality (to within a tolerance that makes sense on the local machine), e.g.

    > all.equal(0.1+0.2,0.3)
    [1] TRUE
and functions for actual equality, e.g.

    > identical(0.1+0.2,0.3)
    [1] FALSE


Was the title of this post automatically generated? Why did it turn "0.1 + 0.2" into "0.1 and 0.2"?

Related past threads (not about this article):

0.30000000000000004 - https://news.ycombinator.com/item?id=21686264 - Dec 2019 (402 comments)

0.30000000000000004 - https://news.ycombinator.com/item?id=14018450 - April 2017 (130 comments)

0.30000000000000004 - https://news.ycombinator.com/item?id=10558871 - Nov 2015 (240 comments)

0.30000000000000004 - https://news.ycombinator.com/item?id=1846926 - Oct 2010 (128 comments)

Resisting temptation to list floating-point math threads because there are so many:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


This is why games and certain other types of coding use fixed point arithmetic.

The problem is solved in many languages such as Java by suffixing F to the numbers.

  Coq < Compute 0.1.
  Toplevel input, characters 8-11:
  > Compute 0.1.
  >         ^^^
  Warning: The constant 0.1 is not a binary64 floating-point value. A closest
  value 0x1.999999999999ap-4 will be used and unambiguously printed
  0.10000000000000001. [inexact-float,parsing]
       = 0.10000000000000001
       : float


As an aside, I just finished qntm's spectacularly good 'There Is No Antimemetics Division'. Highly worth a read if you're after some highly original sci-fi.

HN is more and more like first semester coding class where the professor always tells the "fun facts" but we have to be in the same class every year

Reminds me of long-standing problems in mathematics. The problems will forever be amusing until some dark horse comes out of nowhere with a formal proof/solution that stuns the academic community.

Eternal September?

If only the mind was spotless.

just let the lucky thousand of today have their fun!

You might be surprised how many people don’t understand the bit level basics these days. They’re not really the focus anymore, and they probably shouldn’t be. The point of advancing technology is to push the mundane, low level, difficulties away to make bigger concepts/abstractions easier to piece together and mentally bear.

From what I’ve seen with most recent grads, the education is shifting more and more towards algorithms, with experience mostly involving the use of existing libraries/frameworks, rather than lower level implementations that us “old timers” were forced to implement ourselves, thanks to lack of accessibility to freely usable code. I think GitHub, StackOverflow, and Google have changed the mental model of software development, significantly. I don’t think that’s a bad thing at all since it should free up some beans, especially for someone new to the field.

Not knowing this will bite you eventually, but it’s fairly trivial to work out.


Do it in Python and many other languages you'll get the same result.

    >>> 0.1 + 0.2
    0.30000000000000004
That's the expected behavior of floating-point numbers, more specifically, IEEE 754.

If you don't want this to happen, use fixed-point numbers, if they're supported by your language, or integers with a shifted decimal point.

Personally, I think if you don't know this, it's not safe for you to write computer programs professionally, because this can have real consequences when dealing with currency.
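
A minimal sketch of the currency point:

    from decimal import Decimal

    print(sum([0.10] * 10) == 1.00)  # False: ten 10-cent floats sum to 0.9999999999999999
    print(sum([10] * 10) == 100)     # True: integer cents have no such problem
    print(sum([Decimal('0.10')] * 10) == Decimal('1.00'))  # True: neither does a decimal type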


Super interesting. I'd noticed this behaviour previously, but never knew how or why this was the case (and not really bothered to search for it either). Thanks!

I think high level languages shouldn't even have floats, unless they're a special type for doing floating point math.

Specifically I'm thinking about Python: the literal x.x should be a Decimal, and float should have to be imported to be used as an optimization if you need it.


It's ok... there are bigger mistakes/bugs at stock brokers.

I use this as one of my interview questions (in a piece of code where it would run correctly if 0.1 + 0.2 = 0.3). Maybe 1/3 of interviewees recognize the cause, and maybe half of those can actually explain why and how to mitigate it. I work in scientific computing so it's absolutely relevant to my work

One of the many reasons I think we all would've been better off, had Brendan Eich decided he'd been able to simply use Scheme within the crazy time constraint he'd been given, rather than create JavaScript, :) is that Scheme comes with a distinction between exact and inexact numbers, in its numerical tower:

https://en.wikipedia.org/wiki/Numerical_tower

One change I'd consider making to Scheme, and to most high-level general-purpose languages (that aren't specialized for number-crunching or systems programming), is to have the reader default to reading numeric literals as exact.

For example, the current behavior in Racket and Guile:

    Welcome to Racket v7.3.
    > (+ 0.1 0.2)
    0.30000000000000004
    > (+ #e0.1 #e0.2)
    3/10
    > (exact->inexact (+ #e0.1 #e0.2))
    0.3
So, I'd lean towards getting the `#e` behavior without needing the `#e` in the source.

By default, that would give the programmer in this high-level language the expected behavior.

And systems programmers, people writing number-crunching code, would be able to add annotations when they want an imprecise float or an overflowable int.

(I'd also default to displaying exact fractional rational numbers using familiar decimal point conventions, not the fractional form in the example above.)


Many languages make a distinction between floating-point numbers and fixed-point numbers. Fixed-point numbers (e.g.: "Decimal" / "BigDecimal" in Java) do not suffer from this problem.

This would make particular sense in a language like python, which no one (should?) be using for systems programming.

Agreed. Though, for reasons, I had to write essentially a userland device driver in Python (complete with buffer management and keyboard decoder). It was rock-solid in production, in remote appliances, and I was very glad Python was up to that. :)

A Scheme system that implemented exact reals as unnormalized floating decimal (IEEE 754-2008), coupled with a directive that said `numbers with a decimal point should/should not be read as exact' would be wonderful, not just for financial things, but also for teaching students.

It's actually easy to implement that slight variation in Racket, as a `#lang` or reader extension.

As an example of a Scheme-ish `#lang`, here's a Racket `#lang sicp` that I made to mimic MIT Scheme, as well as add a few things needed for SICP: https://github.com/sicp-lang/sicp/blob/master/sicp/main.rkt

It would be even easier to make a `#lang better-scheme`, by defining just a few changes relative to `racket-base`, such as how numbers are read.


The reader even has a parameter `read-decimal-as-inexact` that controls how to read decimal numbers.

Good point. For exacts in `#lang`, I realized afterwards that it's not just literals/self-evals: there's some "standard library" functions you'd probably want to wrap, to force them to produce exact numbers. And maybe check that there's nothing in standard library that does inexacts internally in a way that would be unpredictable to a programmer who expects exact number behavior.

Scheme did not "look like Java" so was ruled out on that basis (also on others which I have discussed at length in several interviews, most recently with Lex Fridman).

Thanks for that interview; very interesting, and I also appreciate your words for Scheme.

For HN, I'd like to point out that it was a historical accident that Java looked like it did, as far as the Web was concerned.

IIRC, Java looked like it did to appeal to technical and shrinkwrap developers, who were using C++ or C. (When I was lucky to first see Java, then called Oak, they said it was for embedded systems development for TV set-top boxes. I didn't see Java applets until a little later.)

But the Web at the time was intended to be democratizing/inclusive (like BASIC, HyperCard, and Python). And the majority of the professional side was closer to what used to be called "MIS" development (such as 4GLs, but not C/C++). And in practice, HTML-generating application backends at the time were mostly written in languages other than C/C++.

I'm sympathetic to the rebranding of the glue language for Java applets (and for small bits of dynamic), to be named like, and look like, Java. That made sense at the time, when we thought Java was going to be big for Web frontend (and I liked the HotJava story for a thin-client browser extended on-demand with multimedia content handlers). And before the browser changed from hypertext navigator to GUI toolkit.

But it's funny that we're all using C-descendant syntax only through a series of historical accidents, when that wasn't even what the programmers at punctuated points in its adoption actually used (we only thought it would be, at the time the decisions were made).


If you think that is crazy, check out Muller's Recurrence: https://scipython.com/blog/mullers-recurrence/
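
A small Python sketch of it, with the constants from that link; exact rationals show where the true sequence goes:

    from fractions import Fraction

    def muller(x0, x1, n):
        # x_{n+1} = 108 - (815 - 1500/x_{n-1}) / x_n; started from 4 and 4.25
        # the true sequence converges to 5, but any rounding pushes it toward 100
        prev, cur = x0, x1
        for _ in range(n):
            prev, cur = cur, 108 - (815 - 1500 / prev) / cur
        return cur

    print(muller(4.0, 4.25, 30))                            # essentially 100: doubles land on the wrong fixed point
    print(float(muller(Fraction(4), Fraction(17, 4), 30)))  # just under 5: exact arithmetic stays on course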

> does not interpret the 0.1 as the real number

The focus should be on _rational_ numbers. This particular example is all about representation error - precision is implicated, but not the cause.

Ignore precision for a second: The inputs 0.1 and 0.2 are intended to be _rational_. This means they can be accurately represented finitely (unlike an irrational number like PI). Now when using fractions they can _always_ be accurately represented finitely in any base:

  1/10=
  base 10: 1/10
  base  2: 1/1010

  2/10=
  base 10:  2/10
  base  2: 10/1010
The neat thing about rationals, is that when using the four basic arithmetic operations: two rational inputs will always produce one rational output :) this is relevant: 1/10 and 2/10 are both rationals, there is no fundamental reason that addition cannot produce 3/10. When using a format that has no representation error (i.e fractions) the output will be rational for all rational inputs (given enough precision, which is not a realistic issue in this case). When we add these particular numbers in our heads however, almost everyone uses decimals (base 10 floating point), and in this particular case that doesn't cause a problem, but what about 1/3?

This is the key: rationals cannot always be represented finitely in floating point formats, but this is merely an artifact of the format and the base. Different bases have different capabilities:

  1/10=
  base 10: 0.1
  base  2: 0.00011001100110011r

  2/10=
  base 10: 0.2
  base  2: 0.00110011001100110r

  1/3=
  base 10: 0.33333333333333333r
  base  2: 0.01010101010101010r
IEEE754 format is a bit more complicated than above, but this is sufficient to make the point.

If you can grok that key point (representation error), here's the real understanding of this problem:

Deception 1: The parser has to convert the decimal '0.1' into base 2, which produces the periodic significand '1001100110011...' (not accurately stored at any precision)... yet when you ask for it back, the formatter magically converts it to '0.1'. Why? Because the parser and formatter have symmetrical error :) This is kinda deceptive, because it makes it look like storage is accurate if you don't know what's going on under the hood.

Deception 2: Many combinations of arithmetic on simple rational decimal inputs also have rational decimal outputs from the formatter, which furthers the illusion. For example, neither 0.1 nor 0.3 is representable in base 2, yet 0.1 + 0.3 will be formatted as '0.4'. Why? It just happens that the arithmetic on those inaccurate representations added up to the same error that the parser produces when parsing '0.4', and since the parser and formatter produce symmetric error, the output is a rational decimal.

Deception 3: Most of us grew up with calculators, or even software calculator programs. All of these usually round display values to 10 significant decimals by default, which is quite a bit less than the max decimal output of a double. This always conceals any small representation errors output by the formatter after arithmetic on rational decimal inputs - which makes calculators look infallible when doing simple math.



If anyone would like to know more, read the 1991 paper by David Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic" - very accessible content even if you are not from a CS background.

http://pages.cs.wisc.edu/~david/courses/cs552/S12/handouts/g...


Your monthly HackerNews reminder that machine epsilon is a thing.

https://www.crockford.com/dec64.html

and see me in the morning.


Obligatory SMBC [1] and xkcd [2]:

[1] https://www.smbc-comics.com/?id=2999

[2] https://xkcd.com/2170/


I guess that can be traced back to the use of fancy new buggy tools ;)

bc <<< "0.1 + 0.2"

.3

bc <<< "1.0E4096 +1 -1.0E4096"

1.000000

node -e "console.log(1.0E128 +1 -1.0E128)"

0

python -c "print(1.0E128 +1 -1.0E128)"

0.0

