
Good catch! I mostly used __uint128_t out of haste. Ideally, I would like to remove dependencies on large integer types. For now, this remains future work.
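
One possible direction for that (a rough sketch, not the author's actual plan): carry the wide value as two 64-bit halves and build the 128-bit product from 32-bit limbs. The u128 struct and function name below are made up for illustration.

    #include <stdint.h>

    /* Illustrative only: hold a 128-bit value as two 64-bit limbs. */
    typedef struct { uint64_t lo, hi; } u128;

    /* Full 64x64 -> 128-bit multiply without __uint128_t. */
    static u128 mul_64x64(uint64_t a, uint64_t b)
    {
        uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
        uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

        uint64_t p0 = a_lo * b_lo;   /* contributes to bits   0..63  */
        uint64_t p1 = a_lo * b_hi;   /* contributes to bits  32..95  */
        uint64_t p2 = a_hi * b_lo;   /* contributes to bits  32..95  */
        uint64_t p3 = a_hi * b_hi;   /* contributes to bits  64..127 */

        uint64_t mid = p1 + (p0 >> 32) + (uint32_t)p2;  /* cannot overflow */

        u128 r;
        r.lo = (mid << 32) | (uint32_t)p0;
        r.hi = p3 + (mid >> 32) + (p2 >> 32);
        return r;
    }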



The correct and portable solution is to not use uint32_t, but to use uint8_t, and construct the integers manually.
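
A minimal sketch of what "construct the integers manually" can look like; the helper name is just for illustration:

    #include <stdint.h>

    /* Assemble a 32-bit value from four bytes in little-endian order,
       independent of the host's byte order. */
    static uint32_t load_u32_le(const uint8_t b[4])
    {
        return (uint32_t)b[0]
             | ((uint32_t)b[1] << 8)
             | ((uint32_t)b[2] << 16)
             | ((uint32_t)b[3] << 24);
    }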

I have made a point of using uint16_t, uint32_t, int16_t, int32_t, etc. to be explicit. Some don't like the look, but it is explicit and helpful for me.

Eh. I just cast both to a bigger integer type where possible, which in practice, is almost always. So if I'm averaging two uint32_ts, I just cast them to uint64_t beforehand. Or in Rust, with its lovely native support for 128-bit integers, I cast a 64-bit integer to 128-bit.
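
In C, that widening looks roughly like this (the function name is made up):

    #include <stdint.h>

    /* Widen to uint64_t before adding so the intermediate sum
       cannot overflow. */
    static uint32_t average_u32(uint32_t a, uint32_t b)
    {
        return (uint32_t)(((uint64_t)a + (uint64_t)b) / 2);
    }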

FWIW, I find myself using unsigned ints more often today than I used to.

Even more so, I find myself using particular sizes of unsigned ints (uint8_t, uint16_t, etc) rather than the default "unsigned int"


Probably not a big deal assuming uint8_t is a typedef for unsigned char.

You can just expand your example to use 16-bit values or switch to uint8_t. Bitfields with signed integers are also a minefield, so it's best never to use them.
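
For illustration, the unsigned version of such a bit-field is unambiguous, whereas with plain int it is implementation-defined whether the field is signed (a 1-bit signed field can only hold 0 and -1):

    struct flags {
        unsigned int ready   : 1;  /* holds 0 or 1, no sign surprises */
        unsigned int mode    : 2;
        unsigned int channel : 5;
    };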

Most of the time you don't actually care about the exact size of your integer types (especially for signed integers).

At work, we use C90 with a few C99 extensions which can be disabled through the preprocessor (e.g. inline, restrict), and although we started out with a stdint.h-like thing, it is proving to be less useful.


This is really annoying on architectures like ARM32 where size_t is closely related to unsigned int but uint32_t is long unsigned int and gets flagged as a different type. It becomes a real problem when using a stripped down printf like the one in newlib that doesn't support %zu.

It's easily fixed though: either compile on a 32-bit machine, or change that line to cast to uintptr_t instead of unsigned int.
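
A sketch of the second option, assuming the inttypes.h format macros are available (the wrapper function is hypothetical):

    #include <inttypes.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Cast the size_t to uintptr_t and let PRIuPTR supply the matching
       length modifier instead of hard-coding %u or %lu. */
    static void print_size(size_t n)
    {
        printf("%" PRIuPTR " bytes\n", (uintptr_t)n);
    }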

I personally try to never use int; if uint8_t et al. exist, it's for a reason.

> Is the uint8_t just "no point in using something bigger" or does it likely help the compiler? Does/can the signedness matter as well as the size?

In a good world, you could just use uint_fast8_t and the compiler would optimize this question away for you. In the real world, I don't think compilers are smart enough, or there are too many other constraints limiting them :(
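
For what it's worth, the intent of uint_fast8_t is roughly this (the function is made up); the implementation is free to pick a wider type than 8 bits if that is faster on the target:

    #include <stdint.h>

    /* "At least 8 bits, whatever width is fastest here" -- the loop
       bound is a small constant, so the minimum guarantee is enough. */
    static void fill_pattern(uint8_t *buf)
    {
        for (uint_fast8_t i = 0; i < 16; i++)
            buf[i] = (uint8_t)(i * 0x11);
    }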


I can think of lots of cases where carelessly using int might cause issues. And none at all where it's actually better than using a different type. Aside from anything else, it's probably better to default to unsigned types.

I'm fully willing to believe that MISRA-C contains a lot of silly, unnecessary rules, but IMO this isn't one of them. If typing out uint32_t, etc is too long, then you can always create an alias.


They do have some use as a “currency type” - it’s fine if all your code picks i32, but then you might want to use a library that uses i64 and there could be bugs around the conversions.

And C also gives you the choice of unsigned or not. I prefer Google’s approach here (never use unsigned unless you want wrapping overflow), but unfortunately that’s something people definitely disagree on. And size_t itself is unsigned.


Indeed. The trick here appears to be to write (1U*x << 31) so that 1U*x becomes a type that is the larger of uint_least32_t and unsigned int.
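
Spelled out (the surrounding function is hypothetical): without the 1U*, the integer promotions could leave the shift operating on a signed int; multiplying by 1U first gives an operand whose type is the larger of unsigned int and uint_least32_t, so the shift is always unsigned and at least 32 bits wide.

    #include <stdint.h>

    /* 1U*x takes the larger type of unsigned int and uint_least32_t,
       so the left shift has well-defined unsigned semantics. */
    static uint32_t shl31(uint_least32_t x)
    {
        return (uint32_t)(1U * x << 31);
    }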

having a "bigintmax_t" would... work, but it's absolutely horrible and defeats the purpose of intmax_t being, um, the maximum integer type.

A struct of two integers couldn't be used in regular & bitwise arithmetic, added to pointers, used to index arrays, cast up and down to other integer types, etc.

As-is, you can pass intmax_t in and out as a default case and, worst case, you just waste space. But "uint128_t a = ~UINTMAX_C(0)" not producing a variable with all bits set would be just straight-up broken.


You're suggesting using the same integer type everywhere. If so, you can use `uint64_t`.

To put it another way:

> If they're both int32_t, there's no possible problem.


Unfortunately if I were to try to use unsigned ints extensively, I'd be casting them back and forth constantly, because nearly every standard library function that I'd want to use expects int inputs or outputs ints.

A small improvement in correctness is washed out by a huge decrease in ergonomics.


I miss having uint64 or uint32 in Java. It's annoying, since the world of C talks in unsigned types a lot in network protocols.

Whenever I write ATmega32P code, I add the following to a header file:

>> typedef unsigned char uint8;

And then I use uint8 by default (loop counter, accumulator, constant, etc.), unless a larger size number is required.
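
A sketch of what such a project-local header might contain; the file and type names are illustrative, and the widths assume AVR (8-bit char, 16-bit int, 32-bit long):

    #ifndef TYPES_H
    #define TYPES_H

    typedef unsigned char  uint8;
    typedef signed char    int8;
    typedef unsigned int   uint16;
    typedef signed int     int16;
    typedef unsigned long  uint32;
    typedef signed long    int32;

    #endif /* TYPES_H */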
