Is there a reason it wouldn't be 16 bits like in C?


It's 16 bytes, not 16 bits.

It's actually 8/16 bit in 200 lines of C.

When can a char be 16 bits? I presume it'd still have a sizeof() of 1 though.
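
A quick sketch of the distinction being asked about, assuming a hosted C compiler with <limits.h> and <stdio.h>: sizeof(char) is 1 by definition no matter how wide a byte is, and CHAR_BIT reports how many bits that one byte actually holds.

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* sizeof(char) is 1 by definition, even when a byte is wider than 8 bits. */
        printf("sizeof(char) = %zu\n", sizeof(char));
        /* CHAR_BIT is the number of bits per byte: at least 8, and 16 on some DSPs. */
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        return 0;
    }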

16 bits is not enough?

16-bit bytes?

Because 4 bits specifies the value of the parameter less precisely than 16 bits does.
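
To make that concrete with a rough illustration (not from the original comment): an n-bit field can distinguish 2^n values, so 4 bits gives 16 possible parameter values while 16 bits gives 65536.

    #include <stdio.h>

    int main(void) {
        /* Distinct values representable by an n-bit unsigned field: 2^n. */
        unsigned long four_bits    = 1UL << 4;    /* 16    */
        unsigned long sixteen_bits = 1UL << 16;   /* 65536 */
        printf("4-bit field:  %lu values\n", four_bits);
        printf("16-bit field: %lu values\n", sixteen_bits);
        return 0;
    }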

no, int is 16 bits minimum

that's how much you can address with 16 bits

I have seen a DSP processor that could only address 16-bit words, and the C compiler did not hide it: bytes had 16 bits there.

It's also 16 bits for the AVR architecture used on Arduino devices.

IIRC the real rationale is that back in the early days people bet that 16 bits would be enough for a fixed-length encoding, but the bet didn't pay off and now they're stuck with the worst of both worlds.

Yeah, thanks. I went to investigate and was baffled to find that it was speedier but still, 16 bits. Insane stuff!

16-bit

Except that hasn’t been true in a while, and technically the assumption wasn’t kosher for even longer: C itself only guarantees that int is at least 16 bits.
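
If code really depends on a wider int, one way to state that assumption explicitly (a sketch assuming a C11 compiler, where static_assert is available via <assert.h>) is a compile-time check against <limits.h>; the standard itself only requires INT_MAX to be at least 32767.

    #include <assert.h>   /* C11 static_assert */
    #include <limits.h>

    /* C only promises INT_MAX >= 32767, so a 16-bit int is fully conforming.
       Code that needs more should say so instead of silently assuming it: */
    static_assert(INT_MAX >= 2147483647,
                  "this code assumes int is at least 32 bits");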

That doesn’t prove that the chip operates at 16 bits. For example, we could do 18-bit multipliers (or anything >= 16) and still use 16-bit floats.

You use 16-bit variables.

If 16 bits is enough, then you should be using int16_t. If 16 bits is not enough, then you should not be using int.

I agree. But even if there were, say, a uint16 type in C, it wouldn’t be the grammar that defines it as 16 bits.
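
For what it's worth, C99 added roughly that via the library rather than the grammar: <stdint.h> defines int16_t/uint16_t (optional, present only when the target has an exact 16-bit type) plus the always-required int_least16_t and int_fast16_t. A small sketch of the distinction, with illustrative variable names:

    #include <inttypes.h>   /* PRId16 and friends */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int16_t       exact = 1000;  /* exactly 16 bits, no padding; optional  */
        int_least16_t least = 1000;  /* at least 16 bits; always available     */
        int_fast16_t  fast  = 1000;  /* at least 16 bits, fastest on target    */
        printf("%" PRId16 " %" PRIdLEAST16 " %" PRIdFAST16 "\n", exact, least, fast);
        return 0;
    }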

I’ve worked with Arduinos where int was 16 bits.
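
A tiny check of the same thing, written as portable C rather than an Arduino sketch: avr-gcc, which Arduino targets use, typically reports 2 here, while desktop compilers report 4.

    #include <stdio.h>

    int main(void) {
        /* 2 bytes on AVR/avr-gcc (16-bit int), 4 on most desktop targets. */
        printf("sizeof(int) = %zu bytes\n", sizeof(int));
        return 0;
    }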