
Python3 doesn't have a maximum integer and therefore cannot experience overflow when adding two integers, for example. You can keep adding one forever.
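
For instance (a quick sketch; the values are arbitrary):

  # Python 3 ints are arbitrary precision: results simply grow as needed.
  n = 2**63 - 1        # the largest value a signed 64-bit integer could hold
  print(n + 1)         # 9223372036854775808 -- no wraparound, no exception
  print((n + 1) * n)   # a ~126-bit result, still an ordinary int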



I love the "check if integer addition will overflow" in Python:

  def adding_will_overflow(x, y): return False

I have no idea if it's actually true but I find that funny.


> overflow should be an error in most languages

> Chasing safety features which result in difficult to reason about semantics will inevitably lead to low language adoption.

Python solves this problem by not having integer overflow at all; instead, all integers are unbounded. It's still one of the most widely used languages in the world.


That's kind of a bad example because adding integers can overflow (in some languages).

If Python handled overflow with an exception instead of a bignum, I'd bet it still wouldn't cause trouble; the actual values rarely reach bignum range anyway.

"Overflowing an int" is something that I worry about when I'm thinking in terms of memory layout of data. I shouldn't have to worry about it when working at a higher level of abstraction. Python handles this correctly:

  >>> int('0x7fffffff', 16)
  2147483647
  >>> int('0x7fffffff', 16) + 1
  2147483648L

How so? Doesn't Python add some checks to see whether an operation will overflow, or something like that?

It's a little glib, but Python 3 (and Ruby, I believe) both have checked array access and, vacuously, overflow checking, since integers can't overflow (by default).
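
There's no visible check from the Python side; the int type just grows past machine-word size transparently. A small sketch (the printed values assume a 64-bit CPython build):

  import sys

  # sys.maxsize is the largest machine-word-sized index, but int itself is
  # arbitrary precision: crossing that boundary just yields a bigger int.
  n = sys.maxsize
  print(n)             # 9223372036854775807
  print(n + 1)         # 9223372036854775808 -- no wraparound, no exception
  print(type(n + 1))   # <class 'int'>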

It's Python, so there is no overflow. I considered mentioning this in a footnote but I didn't think it was necessary.

Integer overflow.

A simple fix is using unsigned integers, as unsigned overflow is well-defined.

(There isn't a standard way in C/C++ to request an exception or fault on signed integer overflow, which is what you'd want here, since you're running inside Python. A better conceptual fix is just doing the addition in Python, where signed integer addition can't overflow at all, but that kind of defeats the point of the example.)
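
If you really do want the check, one option is to do it on the Python side before the values ever reach C. A minimal sketch (the function name and the 64-bit width are just assumptions for illustration):

  INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

  def adding_will_overflow_int64(x, y):
      # Python ints never overflow, so compute the exact sum and simply
      # ask whether it fits in a signed 64-bit C integer.
      return not (INT64_MIN <= x + y <= INT64_MAX)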


Yep! The addition here can overflow, too, just like in the integer case (though you'll get an infinite value instead of wraparound).
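
For example (assuming standard IEEE 754 doubles, which is what CPython floats are on essentially every platform):

  import math

  # Floating-point addition overflows to infinity rather than wrapping around.
  big = 1.7e308                 # close to the largest finite double
  print(big + big)              # inf
  print(math.isinf(big + big))  # True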

I think those cases are a minority, and in most cases you would have been fine with just slightly larger ints. It doesn't seem to cause any issues in Python, for example, because cases where integers grow exponentially (which only makes their size grow linearly, so the memory cost still grows slowly) and without bound are not all that common.

For example, an overflow bug in Java was famous for sitting in the standard library for a very long time: in a core function performing binary search, the midpoint of the two bounds was computed as a simple average, (max + min) / 2, which overflows when max and min are large enough, even though the final result always fits in a standard integer.
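
A rough sketch of that failure mode, simulating Java's 32-bit wraparound with ordinary Python ints (the helper is just for illustration):

  INT32_MAX = 2**31 - 1

  def to_int32(n):
      # Wrap an unbounded Python int to a signed 32-bit value, as Java's int would.
      return (n + 2**31) % 2**32 - 2**31

  low, high = 1, INT32_MAX
  buggy_mid = to_int32(low + high) // 2  # the sum wraps negative, so the midpoint is garbage
  safe_mid = low + (high - low) // 2     # the usual fix: no intermediate value exceeds high
  print(buggy_mid, safe_mid)             # -1073741824 1073741824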

In a lot of other cases, integers are used to address memory (or to represent array indexes, etc.), so in most code they will naturally be restricted by the available amount of memory, even if intermediate computations can temporarily overflow.


> Arithmetic operators instead of XOR

In that case, it uses Python, which supports arbitrarily large integers out of the box, so integer overflow wouldn't be an issue (in Python 3, arbitrary-precision integers are the default).

Integer overflow?

Hm? In Lisp or Python, you don't have overflow issues because ints can't overflow. It's only a problem at the assembly/low level.

EDIT: The point of the parent comment is that integer overflow "isn't a problem at the assembly level, only in high-level languages." But high-level languages actually prevent the problem. You don't even have to worry about ints overflowing when you use Python or Lisp. You do have to worry if you're writing assembly or using a lower-level language like C. So the situation is actually the opposite of what the parent comment describes.


Yeah, integers can't overflow. They'll just use all of your memory.
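
You can watch the memory cost grow, for example (sizes come from CPython on a 64-bit machine and will vary):

  import sys

  # The value can grow without bound; the price is paid in memory, not in correctness.
  for n in (1, 2**64, 2**1000, 2**100000):
      print(n.bit_length(), "bits,", sys.getsizeof(n), "bytes")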

Unless you're using a C binding, like numpy.
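
For example (assuming NumPy is installed; recent NumPy versions also emit a RuntimeWarning here):

  import numpy as np

  # NumPy integers are fixed-width C integers, so they wrap instead of growing.
  a = np.int32(2**31 - 1)
  print(a + np.int32(1))   # -2147483648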


The problem with overflow isn't just needing to store numbers larger than 2 billion. Sometimes, intermediate values are larger than that even if the final result isn't.

Take averaging as a very simple example. Doing (a + b) / 2 will overflow if a and b are sufficiently large, even if the average will always fit in 32 bits. Things like this go unseen for years.
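
A tiny illustration of why, using plain Python ints just to check the ranges (the 32-bit bound is the assumption here):

  INT32_MAX = 2**31 - 1

  a = b = 2_000_000_000              # each fits comfortably in a signed 32-bit int
  print(a + b > INT32_MAX)           # True: the intermediate sum would overflow
  print((a + b) // 2 <= INT32_MAX)   # True: yet the average itself fits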


How so, integer overflow? Not one so famous that I'm aware of it.
