
> overflow should be an error in most languages

> Chasing safety features which result in difficult to reason about semantics will inevitably lead to low language adoption.

Python solves this problem by not having overflow for integers at all, instead all integers are unlimited. It's still one of the most widely used languages in the world.




How so? Doesn't Python add some checks to see whether an operation will overflow, or something like that?

Hm? In Lisp or Python, you don't have overflow issues because ints can't overflow. It's only a problem at the assembly/low level.

EDIT: The point of the parent comment is that integer overflow "isn't a problem at the assembly level, only in high level languages." But high-level languages actually prevent the problem. You don't even have to worry about ints overflowing when you use Python or Lisp. You do have to worry if you're writing assembly or using a lower-level language like C. So the situation is actually the opposite of what the parent comment describes.


If Python handled overflow with an exception instead of a bignum, I'd bet it still wouldn't cause trouble either. In practice the values rarely get large enough to need bignums anyway.

Is there really a way around this? Overflow should be an error in most languages, but it isn't, for a variety of reasons, such as keeping a construct as simple as

c = a + b

In a language without native memory management, it's rare that this type of bug could lead to disclosure of sensitive information. In the cases where it would, it's easy enough to guard against the overflow with a test such as if (MAX_VAL - b) < a then error (sketched below).
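
A minimal sketch of that guard in Python, assuming non-negative operands and a hypothetical 32-bit unsigned ceiling (Python's own ints never overflow, so a check like this only matters when the result has to fit a fixed-width field):

    MAX_VAL = 2**32 - 1  # hypothetical unsigned 32-bit ceiling

    def checked_add(a, b):
        # Written as (MAX_VAL - b) < a rather than (a + b) > MAX_VAL so that,
        # in a fixed-width language, the check itself cannot overflow.
        if (MAX_VAL - b) < a:
            raise OverflowError("a + b would exceed MAX_VAL")
        return a + b

    checked_add(MAX_VAL, 1)  # raises OverflowError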

Chasing safety features which result in difficult to reason about semantics will inevitably lead to low language adoption. I'd be very interested in approaches which guard difficult cases while maintaining ease of use. I'd love a statically typed language which automatically increases the size of a numeric type on over/underflow.


Python 3 doesn't have a maximum integer, so adding two integers can never overflow, for example. You can keep adding one forever.
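
A quick illustration in plain CPython (no special types or libraries needed):

    n = 2**63 - 1                  # the largest value a signed 64-bit int could hold
    print(n + 1)                   # 9223372036854775808 -- no wraparound, no exception
    print((n + 1).bit_length())    # 64: the int simply grows as needed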

> once you consider overflow and underflow

You're explicitly comparing apples and oranges here. Are you implying that it's never necessary to check for overflow and underflow in dynamically typed languages, and that it's somehow mandatory to do so in statically typed ones? Or, in case your argument is "numbers in dynamically typed languages do not overflow", are you aware of BigDecimal in Java or sys.maxint in Python 2?


> Sure, you'll lose a bit of performance due to checks for overflow...

Which is why the author wants support for integer overflow traps. He even mentions Python, which does what you describe, and suggests that other languages don't do this precisely because of the performance hit for which he's proposing a solution.


I love the "check if integer addition will overflow" in Python:

def adding_will_overflow(x, y): return False

I have no idea if it's actually true but I find that funny.


It's Python, so there is no overflow. I considered mentioning this in a footnote but I didn't think it was necessary.

How so? From [1] it would seem that Lisps (idiomatic ones at least), Python, Perl, Haskell and Ruby are all free from the possibility of integer overflow. That definitely doesn't sound like "nearly all code".

[1] https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic


It's a little glib, but Python 3 (and Ruby, I believe) both have checked array access and vacuously have overflow checking, given that their integers can't overflow (by default).

I think those cases are a minority, and in most cases you would have been fine with just slightly larger ints. It doesn't seem to cause any issues in Python, for example, because the cases where integers grow exponentially (which only makes their size grow linearly, so memory use still grows slowly) and without bound are not all that common.

For example, a famous overflow bug was present in Java's standard library for a very long time: in a core function performing a binary search, the midpoint of the two bounds was computed as a simple average, (max + min)/2, which overflows when max and min are big enough, even though the final result always fits in a standard integer.
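
A minimal sketch of that failure mode, simulating 32-bit wraparound in Python (whose own ints don't overflow), along with the standard low + (high - low) / 2 fix:

    def as_int32(n):
        # Wrap n the way a signed 32-bit integer would.
        n &= 0xFFFFFFFF
        return n - 2**32 if n >= 2**31 else n

    low, high = 1, 2**31 - 1               # both are valid indexes on their own
    buggy_mid = as_int32(low + high) // 2  # the sum wraps to a negative value
    safe_mid = low + (high - low) // 2     # the difference can't overflow here

    print(buggy_mid)  # -1073741824: a nonsense index
    print(safe_mid)   # 1073741824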

In a lot of other cases, integers are used to address memory (or to represent array indexes, etc.), so in most code they will naturally be bounded by the available amount of memory, even if intermediate computations can temporarily overflow.


> But with integer overflows

Some languages give exceptions here, unless specially told not to.


> I don't see how "special rules of arithmetic and a robust mathematical model" makes it "extremely difficult to have a serious malfunction."

The only way would be to have arbitrary-precision integers by default and thus no overflows ever. (IIRC Smalltalk was something like that, but it's been a long time since I used it, so I'm not sure.)


Of course, with well-defined overflow you can still end up with a nonsense result that just happens to be in your valid range of values. If you don't plan to harden your code against integer overflow manually, you need a language that actively checks it for you, or one that uses variable-sized integers, to be safe.

Isn't the obvious solution to the problem of overflows to define the behavior, like pretty much all newer languages did (presumably because they learned from the mistakes made in C)?

The fact that many languages don't check for overflow by default really saddens me. Integer overflow is the cause of so many bugs (many of them user-facing: http://www.reddit.com/r/softwaregore/), and yet people keep making new languages which don't check for overflow. They check buffer overruns, they check bounds, and yet not integer overflow. Why? The supposed performance penalty.

The reckless removal of safety checks in the pursuit of performance would be considered alarming were it not commonplace.

(Disclaimer: I really, really care about integer overflow for some odd reason, going so far as to go through the entire PHP codebase to add big integer support...)


I don't see how this is any different from overflowing an int32 in any other programming language. Sure, with an int, x != x + 1 even after an overflow, but your program is still going to crash when it tries to bill you for negative two billion widgets.

If you're dancing on the edge of the limits of numerical representation then you need to write code to protect against bad things. If you don't write said code, your program is going to fail to work correctly, no matter what language you use.


Personally, I'd like to see more programming done in languages that simply don't allow integer overflow in the first place. Most current languages have arbitrary-precision integers; well-implemented arbitrary-precision integers are quite efficient when they fit in a machine word, and as efficient as possible when larger. Sure, you'll lose a bit of performance due to checks for overflow, but those checks need to exist anyway, in which case it seems preferable to Just Work rather than failing.
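
A small illustration of the "Just Work" behavior, using Python as one language that already behaves this way:

    # The same '+' is used whether or not the running total fits in a machine
    # word; nothing in the calling code changes when the values get large.
    total = 0
    for _ in range(3):
        total += 2**63     # each addend already exceeds the signed 64-bit maximum
    print(total)           # 27670116110564327424, still exact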
