
If 16 bits is enough, then you should be using int16_t. If 16 bits is not enough, then you should not be using int.



16 bits is not enough?

> If 16 bits is enough, use an explicitly 16-bit integer.

This will often generate more code, so don't do it unless it saves a meaningful amount of memory.


My bad - that's a typo. I meant int16_t, the same as in the sample code.

no, int is 16 bits minimum

You use 16-bit variables.

You have an int16_t? How do you port it to a machine without a 16-bit type?

Hint: you're being done a favor by being forced to think about that.


C does require that int is at least 16 bits (e.g., INT_MAX has to be at least 32767).

Is there a reason it wouldn't be 16 bits like in C?

On 8-bit machines it's the reverse: since an int must be at least 16 bits, it takes more operations to handle than an 8-bit number. Pass an 'int' and you push two things on the stack, etc. That makes your code size balloon noticeably.

I’ve worked with Arduinos where int was 16 bit.

> 65535

Why limit yourself to 16 bits?


Do you have any reason for supporting uint16_t instead of uint_least16_t?

Their point is about portability: the exact-width types may not exist at all on some platforms.


Only bad programmers multiply by 16.

The good ones use a bit shift.


16 bit should be enough for everybody.

I wrote 16 when I meant 8 bits. On this arch you cannot have 8-bit types. Casting an int16_t* to char* won't let you access the upper and lower bytes as most people would expect (the C standard does not define this behavior).

> Take some correct code that works with int16_t, and replace the "int16_t" with plain "int". What breaks?

Memory. Defaulting to 16-bits when an 8-bit variable will do can be incredibly wasteful on an 8-bit µC. Keep in mind that we are not just talking about one variable in isolation. We are talking about all the integers we pass between functions. We are talking about code space, data space, and stack space. There are compilers that can optimize their arithmetic operations to 8-bit registers when they can be sure that that's all their operands need.

> Would you suggest a uint29_t?

If you are aware of a machine that provides that, yes! Otherwise, I'd suggest uint32_t. Yes, "long" is guaranteed to be at least 32 bits wide, but it can also be 64 bits wide. I would not recommend defaulting to "long", as that could be wasteful. Here's an interesting discussion about the meaning of "long" and "long long" for their compiler: https://www.dsprelated.com/showthread/comp.dsp/42108-1.php. I see this discussion as a failure of the C standard.

I would much rather the compiler provide int64_t or int40_t or whatever else it can support without inefficiency.


16-bit bytes?

If you use "int", you know you have at least 16-bits to play with. Take some correct code that works with int16_t, and replace the "int16_t" with plain "int". What breaks?

1. If you are dumping it out to file or similar (i.e. assuming the layout in memory) then your code is no longer portable between different endian machines.

2. If you are relying on overflow behaviour, then your code is broken already (this doesn't apply to unsigned types, but using minimum width types and explicit mods/masks is likely going to be clearer code anyway).

3. The one place where you can't replace int16_t with int is in the case where you make use of the assumption that an int16_t can represent -0x8000 and you then port to a machine that doesn't use two's complement and int is 16-bits... I'm not aware of any such machine, but that is about the extent of it.

What type to use for a 29-bit CAN identifier? A "long" will do just fine. It is guaranteed to be at least 32-bits wide, and also to exist. Would you suggest a uint29_t?


C std says int has to be at least 16 bits. http://en.cppreference.com/w/cpp/language/types
