
AFAIK until the switch to 64-bit architectures, int actually was the natural word size. Keeping int at 32 bits was probably done to simplify porting code to 64 bits (since every struct containing an int would otherwise change its memory layout - but that's what the fixed-width integer types like int32_t are for anyway).

In hindsight it would probably have been better to bite the bullet and make int 64 bits wide.
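
To illustrate the fixed-width-type point with a hedged sketch (the struct here is hypothetical, not from the comment above): with int32_t/int64_t the field sizes are pinned regardless of what the platform decides int should be.

    #include <stdint.h>

    /* Hypothetical record type, for illustration. If the fields were plain
       int/long, the layout would track whatever widths the platform picked
       (a move to 64-bit int would grow and re-pad the struct); with the
       fixed-width types they are 4 and 8 bytes everywhere. */
    struct record {
        int32_t id;      /* always 4 bytes */
        int64_t offset;  /* always 8 bytes */
    };

    /* On a typical ABI (int64_t aligned to 8) this is 4 + 4 padding + 8.
       The assert documents that assumption; the padding itself is not
       guaranteed by the standard. */
    _Static_assert(sizeof(struct record) == 16, "unexpected record layout");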




int originally being the native word size is the reason for the weird integer promotion rules.

IMHO it's only weird because the promotion is to 32-bit int on 64-bit platforms. If all math happened at the natural word size (e.g. 64 bits), and specific integer widths only mattered for memory loads/stores, it would be fine.
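
A small C sketch of the promotion rule being discussed (illustrative, but it compiles as-is): operands narrower than int are promoted to int before any arithmetic, so the math below happens at int width even though the operands are 8-bit.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t a = 200, b = 100;

        /* Both operands are promoted to int before the multiply, so the
           result is 20000, not (200 * 100) modulo 256. */
        printf("%d\n", a * b);

        /* The type of the expression is int, not uint8_t. */
        printf("%zu\n", sizeof(a * b));   /* sizeof(int), typically 4 */
        return 0;
    }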

> AFAIK until the switch to 64-bit architectures, int actually was the natural word size

32-bit int is still arguably the native word size on x64. 32-bit is the fastest integer type there, and 64-bit at the very least often consumes an extra prefix byte (REX.W) in the instruction encoding. And that prefix is literally named for register "extension"... very much the opposite of native!


32-bit also often needs the prefix byte (if one of the operands is r8-r15, or, for extending moves from 8-bit registers, anything from register 4 up, i.e. spl/bpl/sil/dil as well as r8b-r15b).
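
To make the prefix-byte point concrete, here is a hand-assembled comparison (byte sequences per the Intel SDM encodings, shown only as comments; the C functions are hypothetical carriers for the adds, and a real compiler may emit an lea or extra moves around them):

    #include <stdint.h>

    /* Core add in each function, hand-assembled for illustration
       (SysV x86-64: arguments arrive in edi/rdi and esi/rsi):

         add32:  add edi, esi   -> 01 f7      2 bytes, no prefix
         add64:  add rdi, rsi   -> 48 01 f7   3 bytes, REX.W selects 64-bit

       and touching r8-r15 costs a REX byte even at 32 bits, e.g.
                 add eax, r8d   -> 44 01 c0   3 bytes, REX.R just to name r8d */
    uint32_t add32(uint32_t a, uint32_t b) { return a + b; }
    uint64_t add64(uint64_t a, uint64_t b) { return a + b; }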

Aren't 32-bit registers/operations also called "extensions" of their 16-bit counterparts on the x86 line (eax is literally "extended ax"), due to the ISA's 16-bit 8086/80286 lineage?

So could one make the argument that a 16-bit int ought to be the native word size on x64?


No. This isn't about dictionary pedantry. 16-bit is actually frequently more expensive than 32-bit on x86.
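
A sketch of where the extra cost comes from (hand-assembled encodings shown as comments; the function is hypothetical): in 32/64-bit mode a 16-bit operand size needs the 66h operand-size override prefix, and writes to 16-bit registers merge into the old upper bits, which compilers work around with extra zero-extensions.

    #include <stdint.h>

    /* Hand-assembled for illustration (Intel SDM encodings, SysV arg regs):

         add di, si     -> 66 01 f7   3 bytes: 66h override + the add
         add edi, esi   -> 01 f7      2 bytes: plain 32-bit add

       In practice compilers mostly do the math in 32-bit registers anyway
       and only truncate when the uint16_t value is stored or returned. */
    uint16_t add16(uint16_t a, uint16_t b) { return (uint16_t)(a + b); }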

In C, int is the smallest type arithmetic actually happens in (anything narrower gets promoted to it first), so as long as x64 can still do 32-bit arithmetic, it's "natural" for int to remain 32 bits.
