
Interval arithmetic certainly has its place. However, you don't find it used more often because a naive implementation results in intervals that are often uselessly huge.

Consider x in [-1, 1] and y in [-1, 1]. Then x*y is in [-1, 1] and x - y is in [-2, 2]. But now suppose y is actually the same quantity as x. That's consistent with the inputs, yet x*y = x^2 really lies in [0, 1] and x - y is exactly 0 — much tighter than what we computed. Naive interval arithmetic can't see that the two inputs are correlated (the so-called dependency problem), so the bounds only get looser as a computation goes on.
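
To make that concrete, here's a minimal sketch in Python (the Interval class is hypothetical, not from any particular library, and it skips outward rounding for brevity):

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __mul__(self, other):
            # Take the min/max over all endpoint products.
            ps = [self.lo * other.lo, self.lo * other.hi,
                  self.hi * other.lo, self.hi * other.hi]
            return Interval(min(ps), max(ps))

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    x = Interval(-1.0, 1.0)
    print(x * x)  # [-1.0, 1.0], even though x*x = x^2 really lies in [0, 1]
    print(x - x)  # [-2.0, 2.0], even though x - x is exactly 0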

Sure, but wouldn't realistic intervals be more like x in [0.29999999999999996, 0.30000000000000004]?

I mean, intervals as large as whole numbers might make sense if your calculations deal with values in the trillions and beyond... but isn't the point of interval arithmetic to deal with the usually tiny errors that occur in FP representation?
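
Right — with proper outward rounding the intervals start out only ulps wide. A rough sketch of what that looks like (the add helper here is hypothetical; real libraries typically switch the FPU rounding mode rather than calling nextafter):

    import math

    def add(xlo, xhi, ylo, yhi):
        # Nudge the lower bound toward -inf and the upper bound toward +inf
        # so the true sum is guaranteed to be enclosed.
        lo = math.nextafter(xlo + ylo, -math.inf)
        hi = math.nextafter(xhi + yhi, math.inf)
        return lo, hi

    print(add(0.1, 0.1, 0.2, 0.2))
    # an enclosure around 0.3 only a couple of ulps wide

The huge intervals in the parent comment come from the inputs themselves being uncertain (or correlated), not from rounding; rounding alone only ever widens each result by an ulp or so per operation.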

