This mindset leads to a multitude of bugs and brittle, non-portable code. It's almost as bad as stuffing pointers into a uint32_t and breaking portability on anything that isn't a 32-bit platform. int is guaranteed to be at least 16 bits. If 2^15-1 isn't enough range, switch to long and stop writing brittle code.
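For example (a minimal C99 sketch, names illustrative; uintptr_t comes from <stdint.h>):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int x = 42;

        /* Brittle: a pointer won't fit in 32 bits on 64-bit targets. */
        /* uint32_t bad = (uint32_t)&x; */

        /* Portable: uintptr_t is sized to round-trip any object pointer. */
        uintptr_t addr = (uintptr_t)&x;

        /* int only guarantees INT_MAX >= 32767; long guarantees at least
           32 bits, so it safely holds values past 2^15-1. */
        long big = 100000L;

        printf("%lx %ld\n", (unsigned long)addr, big);
        return 0;
    }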
I'm calling your bluff on that. 64KB is small enough that it's not just yet another target. It requires programming in a completely different way, one that is unacceptably ugly and contorted for 32-bit and larger targets.
If you're writing code and not specifically targeting 64KB systems, your code will be completely unusable on such systems anyway. Most programs and libraries written for larger platforms contain more code than a 16-bit target can even address.
Even if you use theoretically correct sizes, they'd still be inappropriate. 16-bit lengths are bloat if the data could fit in 8 bits, and you'd try hard to make it fit. There's hardly any room for stack, so even function arguments and local variables are a luxury to be avoided. Real 16-bit programs are often just one huge function with all global variables.
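To illustrate what I mean (a hypothetical sketch, not real firmware):

    #include <stdint.h>

    /* Everything global: no stack frames, no argument passing. */
    static uint8_t buf[200];  /* kept under 256 so the index fits in 8 bits */
    static uint8_t i;

    int main(void) {
        for (i = 0; i < sizeof buf; i++)  /* 8-bit loop counter, not int */
            buf[i] = i;
        return buf[10];
    }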
There are platforms with a 16-bit int and a larger-than-16-bit address bus, such as AVR and Amiga.
> Real 16-bit programs are often just one huge function with all global variables.
I've written software for extremely resource-constrained microcontrollers (7KB flash, 256 bytes of RAM) in "normal C" with many functions, code organized in multiple files, etc. In one case it was to replace firmware written by someone with your mindset, and the resulting code was smaller while having more features.