Good catch! I mostly used __uint128_t out of haste. Ideally, I would like to remove dependencies on large integer types. For now, this remains future work.
I have made a point of using uint16_t, uint32_t, int16_t, int32_t, etc. Some don't like the look, but being explicit about widths is helpful for me.
Eh. I just cast both to a bigger integer type where possible, which, in practice, is almost always. So if I'm averaging two uint32_ts, I just cast them to uint64_t beforehand. Or in Rust, with its lovely native support for 128-bit integers, I cast a 64-bit integer to 128-bit.
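In C the same trick looks roughly like this (a minimal sketch; the helper name is mine):

    #include <stdint.h>

    /* Average two uint32_t values by widening first: the 64-bit
       sum can't overflow, whereas (a + b) / 2 done in 32 bits can. */
    uint32_t avg_u32(uint32_t a, uint32_t b)
    {
        return (uint32_t)(((uint64_t)a + (uint64_t)b) / 2);
    }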
You can just expand your example to use 16-bit values or switch to uint8_t. Bitfields with signed integers are also a minefield, so it's best never to attempt them.
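For instance, a sketch of the kind of signed-bit-field surprise I mean (the printed value assumes a typical two's-complement compiler; assigning an out-of-range value to a signed bit-field is implementation-defined):

    #include <stdio.h>

    struct flags {
        signed int val : 3;   /* 3 signed bits: range is only -4..3 */
    };

    int main(void)
    {
        struct flags f;
        f.val = 5;                /* doesn't fit in 3 signed bits */
        printf("%d\n", f.val);    /* commonly prints -3, not 5 */
        return 0;
    }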
Most of the time you don't actually care about the exact size of your integer types (especially for signed integers).
At work, we use C90 with a few C99 extensions that can be disabled through the preprocessor (e.g. inline, restrict), and although we started out with a stdint.h-like thing, it is proving less useful than expected.
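The disabling looks something like this (a hypothetical shim; the macro names are illustrative, not our actual ones):

    /* Map C99 keywords away when compiling as strict C90. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #  define INLINE   inline
    #  define RESTRICT restrict
    #else
    #  define INLINE
    #  define RESTRICT
    #endif

    static INLINE void copy4(int *RESTRICT dst, const int *RESTRICT src)
    {
        int i;
        for (i = 0; i < 4; i++) dst[i] = src[i];
    }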
This is really annoying on architectures like ARM32, where size_t is closely related to unsigned int but uint32_t is long unsigned int and gets flagged as a different type. It becomes a real problem when using a stripped-down printf, like the one in newlib, that doesn't support %zu.
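The usual workaround, for what it's worth, is an explicit cast to a type whose length modifier the stripped-down printf does understand:

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        size_t n = 42;
        /* No %zu available: cast to unsigned long, which %lu can
           print and which is at least as wide as size_t on ARM32. */
        printf("%lu\n", (unsigned long)n);
        return 0;
    }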
> Is the uint8_t just "no point in using something bigger" or does it likely help the compiler? Does/can the signedness matter as well as the size?
In an ideal world you could just use uint_fast8_t and the compiler would settle this question for you. In the real world I don't think compilers are smart enough, or there are too many other constraints limiting them :(
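You can at least check what "fast" resolves to on a given target:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* uint_fast8_t means "at least 8 bits, whichever width the
           implementation considers fastest"; the answer varies by
           platform and C library. */
        printf("uint8_t:      %lu bytes\n", (unsigned long)sizeof(uint8_t));
        printf("uint_fast8_t: %lu bytes\n", (unsigned long)sizeof(uint_fast8_t));
        return 0;
    }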
I can think of lots of cases where carelessly using int might cause issues. And none at all where it's actually better than using a different type. Aside from anything else, it's probably better to default to unsigned types.
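One sketch of the kind of careless-int issue I mean (the functions are mine; the bug is the textbook one):

    #include <stdint.h>

    /* Buggy: both operands are 32-bit, so the multiply happens in
       (signed) int and can overflow (undefined behavior) before
       the result is ever widened to 64 bits. */
    int64_t square_buggy(int32_t x) { return x * x; }

    /* Fixed: widen one operand first, so the multiply is 64-bit. */
    int64_t square(int32_t x) { return (int64_t)x * x; }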
I'm fully willing to believe that MISRA-C contains a lot of silly, unnecessary rules, but IMO this isn't one of them. If typing out uint32_t, etc. is too long, then you can always create an alias.
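e.g. (the short names are just one common convention, not anything MISRA prescribes):

    #include <stdint.h>

    typedef uint8_t  u8;
    typedef uint16_t u16;
    typedef uint32_t u32;
    typedef uint64_t u64;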
They do have some use as a “currency type”: it’s fine if all your code picks i32, but then you might want to use a library that uses i64, and there could be bugs around the conversions.
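A sketch of the conversion bug I have in mind, in C terms (the library function is made up; the wrapped value assumes the usual two's-complement conversion):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Stand-in for a third-party library that settled on 64 bits. */
    static int64_t lib_count_items(void) { return 5000000000LL; }

    int main(void)
    {
        /* This codebase settled on 32 bits; the cast silently
           truncates anything above INT32_MAX. */
        int32_t n = (int32_t)lib_count_items();
        printf("%" PRId32 "\n", n);  /* prints 705032704, not 5000000000 */
        return 0;
    }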
And C also gives you the choice of unsigned or not. I prefer Google’s approach here (never use unsigned unless you want wrapping overflow), but unfortunately that’s something people definitely disagree on. And size_t itself is unsigned.
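The canonical example of unsigned wrapping biting you, since size_t comes up so often:

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        const char *s = "abc";
        size_t n = 3;
        size_t i;

        /* "i >= 0" is always true for an unsigned type, so
           "for (i = n - 1; i >= 0; i--)" wraps from 0 to SIZE_MAX
           instead of terminating. One correct countdown pattern
           tests before the decrement can wrap: */
        for (i = n; i-- > 0; )
            printf("%c\n", s[i]);
        return 0;
    }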
Having a "bigintmax_t" would... work, but it's absolutely horrible and defeats the purpose of intmax_t being, um, the maximum integer type.
A struct of two integers couldn't be used in regular or bitwise arithmetic, added to pointers, used to index arrays, cast up and down to other integer types, etc.
As-is, you can pass intmax_t in and out as a default case and, worst case, you just waste space. But "uint128_t a = ~UINTMAX_C(0)" not making a variable of all 1 bits would be just straight-up broken.
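A quick demo with GCC/Clang's __uint128_t (mentioned upthread) shows exactly this breakage, assuming a target where intmax_t is 64-bit:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* ~UINTMAX_C(0) is only 64 ones; zero-extending it into
           128 bits leaves the top half clear, so 'a' is not the
           all-ones value the expression appears to promise. */
        __uint128_t a = ~UINTMAX_C(0);
        printf("high: %llx\n", (unsigned long long)(a >> 64)); /* 0 */
        printf("low:  %llx\n", (unsigned long long)a);  /* ffffffffffffffff */
        return 0;
    }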
Unfortunately, if I were to try to use unsigned ints extensively, I'd be casting them back and forth constantly, because nearly every standard library function I'd want to use takes int inputs or returns ints.
A small improvement in correctness is washed out by a huge decrease in ergonomics.