having a "bigintmax_t" would...... work, but it's absolutely horrible and defeats the purpose of intmax_t being.. um.. the maximum integer type.
A struct of two integers couldn't be used in regular & bitwise arithmetic, added to pointers, used to index arrays, cast up and down to other integer types, etc.
As-is, you can pass intmax_t in & out as a default case and, worst-case, you just waste space. But "uint128_t a = ~UINTMAX_C(0)" not producing a variable with all bits set would be just straight-up broken.
If you don't standardize intmax_t, programs will invent the concept for themselves through some combination of preprocessing directives and shell scripts that probe the toolchain.
If you want to know "what is the widest signed integer type available", you will get some kind of answer one way or another.
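In practice that probing ends up looking something like this hand-rolled header (the my_widest_t names are made up for illustration; __SIZEOF_INT128__ is the macro GCC and Clang predefine when __int128 is available):

    /* hand-rolled "widest integer" probe -- illustrative, not a real library */
    #include <stdint.h>

    #if defined(__SIZEOF_INT128__)
      typedef __int128          my_widest_t;   /* compiler extension, wider than intmax_t on common ABIs */
      typedef unsigned __int128 my_uwidest_t;
    #else
      typedef intmax_t          my_widest_t;   /* fall back to the answer the standard meant to give */
      typedef uintmax_t         my_uwidest_t;
    #endif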
The pain seems to originate from the wish to keep one ABI for an architecture even though the architecture is mutable and constantly changing. The intmax_t type was made to represent the largest signed integer type on an architecture. If you modify the architecture to support a new, larger, integer type, you have now created a new architecture, with a new ABI. If you do this, there is no problem with intmax_t.
But of course, ABIs are important, since proprietary binary blobs are what is important.
Good catch! I mostly used __uint128_t out of haste. Ideally, I would like to remove dependencies on large integer types. For now, this remains future work.
Curiously, C actually has bignums. Now. In C23, they added a _BitInt(N) type (e.g., "_BitInt(1024)" for a 128-byte type).
Compiler support for that is limited, though. To let N be greater than 128 in Clang, the -fexperimental-max-bitint-width=N flag can be passed. If N > 128 and a _BitInt(N) is divided by anything, the compiler will just crash, but +, -, and * all work as expected.
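A minimal sketch of what that looks like, assuming a Clang recent enough to accept _BitInt (widths above 128 may additionally need the flag above):

    #include <stdio.h>

    int main(void) {
        unsigned _BitInt(256) x = 1;
        x <<= 200;                          /* +, -, * and shifts behave like ordinary unsigned arithmetic */
        printf("%d\n", (int)(x >> 200));    /* prints 1; cast down for printing */
        return 0;
    }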
I may be late to answer here, but no you couldn't reasonably make int a bignum in C. Unsigned integers are defined to wrap around, so certainly not there.
Signed overflow is undefined behaviour so the compiler and runtime can do anything. But, where would it store the extra bits? It could call malloc, but it wouldn't know when to free. It would violate the expectation that memory is managed manually in C. Not all C code even uses malloc. Or, what if the int is in a struct? Should the implementation go allocate extra overflow space elsewhere, and then what happens when you memcpy the struct?
I think there is a strong impedance mismatch between bignums and languages that aren't garbage collected.
"Extended integer types may be wider than intmax_t". I'm sure there's a good reason for this, but it was introduced in C99, which says (in 7.8.1.5): "[intmax_t] designates a signed integer type capable of representing any value of any signed integer type".
That was already portable between 16 bit, 32 bit, 64 bit etc. Why is it that just because the compiler supports 128 bit or 256 bit integers that compiling in such a mode doesn't correspondingly update "[u]intmax_t"?
The linked page says they 'cannot be "extended integer types" in the sense of C17', but that printf() and scanf() should still support these?
There is one solution that would keep the crazy semantics of C, but would still allow for 2's complement arithmetic to be well-defined when one wants to.
C99 defines int8_t, if it exists, to be a 2's complement signed integer of exactly 8 bits. Same for 16, 32, etc. The standard could very well define behavior on overflow for these (that is, turn them into actual types instead of typedefs), and leave int, long, etc. alone. I think this would be a viable, realistic solution. Integer conversions would probably still be a pain, though.
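Today the usual workaround is to route the arithmetic through unsigned, which is roughly the semantics such a change would bless (add_wrap8 is just an illustrative helper):

    #include <stdint.h>

    /* wrapping signed 8-bit add, done by hand via unsigned arithmetic */
    static inline int8_t add_wrap8(int8_t a, int8_t b) {
        uint8_t r = (uint8_t)((uint8_t)a + (uint8_t)b);  /* unsigned math is defined to wrap */
        return (int8_t)r;  /* converting a value > 127 back is implementation-defined, but wraps on mainstream compilers */
    }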
The problem isn't "intmax_t". The problem is "int".
If you have an ABI, you need to put an explicit size and signedness on every parameter and return value. Period. No excuses.
No "int". No "unsigned int". If I'm being really pedantic, don't even use "char".
It should be "int32_t", "uint32_t", and "uint8_t".
Every time I see objections, it's always someone who wants to use some weird 16-bit architecture. The problem is that those libraries probably won't work anyhow since nobody tests their libraries on anything other than x86 and maybe Arm. If your "int" is 16 bits, you're likely to have a broken library anyway.
It's not true that (unsigned)INT_MAX + 1 is never 0 - it's allowed for UINT_MAX == INT_MAX (leaving the sign bit unused in the unsigned representation). I've never seen such an implementation, though.
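You can at least check which world you're in at compile time (the assertion text is mine); every mainstream ABI passes this, but the standard doesn't require it:

    #include <limits.h>

    /* the standard permits UINT_MAX == INT_MAX, in which case this fails */
    _Static_assert(UINT_MAX > (unsigned)INT_MAX, "unsigned int has range beyond INT_MAX");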
> in C, the int type can be as low as 16 bits in size, yielding "65 thousand-something"
Wrong both for worst-case C and for "16 bits in size": the actual maximum is "32 thousand-something", specifically 32767, in 2's-complement and also in the stupid representations like 1's-complement or sign-magnitude. The minimum is -32768 in 2's-complement (or -32767 in those other representations).
You could interpret it as "65 thousand-something" values between the minimum and maximum, but that strongly implies that the minimum doesn't need to be specified, which only works for unsigned integers (which C's int very much is not).
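For reference, the minimum ranges the standard actually guarantees for int can be written as compile-time checks (the messages are mine):

    #include <limits.h>

    /* C only guarantees these magnitudes; 2's-complement implementations typically give INT_MIN == -32768 */
    _Static_assert(INT_MAX >= 32767,  "int must reach at least 32767");
    _Static_assert(INT_MIN <= -32767, "int must reach at least -32767");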
Smart programming languages have bignum integers and seamless promotion between fixnum and bignum versions.
Thus they avoid the issue completely: no stupid casts (especially casts that change both signedness and size), no overflows, no nothing, just integers of whatever length you need.
It's a perfect world and everyone should try to achieve it.
C is not such a programming language, with the closest approximation being uintmax_t.
If we're using our time machines to fix C, can we make a special type that's identical to unsigned int but can use all the bit-wise operators, requiring explicit conversion operators between regular int and bitwise int?
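You can approximate the idea today with a wrapper struct that refuses to mix with plain integers unless you convert explicitly (all names made up):

    /* rough stand-in for a "bitwise int": opaque wrapper, explicit conversions only */
    typedef struct { unsigned int bits; } bitword;

    static inline bitword      to_bits(unsigned int v)       { return (bitword){ .bits = v }; }
    static inline unsigned int from_bits(bitword b)          { return b.bits; }
    static inline bitword      bit_and(bitword a, bitword b) { return (bitword){ .bits = a.bits & b.bits }; }
    static inline bitword      bit_or(bitword a, bitword b)  { return (bitword){ .bits = a.bits | b.bits }; }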
That is true, but it's not the reason... (unsigned)INT_MAX + 1 must be 0 there because the C standard defines wrap-around behavior for unsigned data types. Saturated data types are only defined in Embedded C, e.g. _Sat _Fract.