AFAIK, until the switch to 64-bit architectures, int actually was the natural word size. Keeping int at 32 bits was probably done to simplify porting code to 64 bits (since all structs with ints in them would otherwise change their memory layout - but that's what the fixed-width integer types are for anyway, e.g. int32_t).
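A minimal sketch of that layout point, assuming a typical LP64 platform where int is 32 bits (the struct names are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    struct with_int   { char tag; int     value; };  /* layout depends on sizeof(int) */
    struct with_fixed { char tag; int32_t value; };  /* layout fixed on every platform */

    int main(void) {
        /* With 32-bit int, both structs are typically 8 bytes (1-byte tag,
           3 bytes padding, 4-byte value). Had int become 64 bits, with_int
           would likely grow to 16 bytes while with_fixed stayed at 8. */
        printf("%zu %zu\n", sizeof(struct with_int), sizeof(struct with_fixed));
        return 0;
    }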
In hindsight it would probably have been better to bite the bullet and make int 64 bits wide.
IMHO it's only weird because the promotion is to 32-bit integers on 64-bit platforms. If all math happened on the natural word size (e.g. 64 bits), and specific integer widths only mattered for memory loads/stores, it would be fine.
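To make the weirdness concrete, here's a minimal sketch, assuming a typical 64-bit platform where int is 32 bits: 8-bit operands are promoted up to int, but 32-bit operands are not promoted up to 64 bits, so math silently wraps at 32 bits:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t x = 250, y = 10;
        /* x and y are promoted to (32-bit) int, so the sum is 260
           rather than wrapping at 8 bits. */
        printf("%d\n", x + y);                    /* 260 */

        /* But 32-bit operands stay 32-bit: the multiply wraps before
           the result is widened to 64 bits. */
        uint32_t a = 100000, b = 100000;
        uint64_t p = a * b;                       /* 1410065408 (wrapped) */
        uint64_t q = (uint64_t)a * b;             /* 10000000000 (intended) */
        printf("%llu %llu\n", (unsigned long long)p, (unsigned long long)q);
        return 0;
    }

If promotion went to the 64-bit word size instead, the unwidened multiply would give the intended result.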
> AFAIK until the switch to 64-bit architectures, int actually was the natural word size
32-bit int is still arguably the native word size on x64: 32-bit is the fastest integer operand size there, and 64-bit operations at the very least often consume an extra prefix byte (the REX prefix) in the instruction encoding. And that prefix is literally named for "extension"... very much the opposite of native!
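A quick illustration of that prefix byte (the exact instructions a compiler picks vary; the encodings in the comments are representative x64 forms, not guaranteed output):

    #include <stdint.h>

    /* 32-bit add: the core instruction needs no prefix.
           add edi, esi      ->  01 F7      (2 bytes) */
    uint32_t add32(uint32_t a, uint32_t b) { return a + b; }

    /* 64-bit add: the same operation needs the REX.W prefix (0x48)
       to select 64-bit operand size.
           add rdi, rsi      ->  48 01 F7   (3 bytes) */
    uint64_t add64(uint64_t a, uint64_t b) { return a + b; }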