
I’ve worked with Arduinos where int was 16 bit.



An int in C was 16 bits until about 1980 when Unix started being ported to larger machines. C and Unix were originally just for the PDP11.

It's also 16 bit for the AVR architecture used on Arduino devices.

no, int is 16 bits minimum

AVR microcontrollers still have 16-bit ints; probably the 8051 and PIC do too, but I don't use those. Lots of people do, though. TI DSPs use a 48-bit long, so don't count on int and long having any fixed relationship either.

You use 16-bit variables.

Don't a lot of 8-bit micros still use 16-bit ints?

Except that’s not been true in a while, and technically this assumption wasn't kosher for even longer: C itself only guarantees that int is at least 16 bits.

FWIW, I've actually used a really silly chip once that came with a horrible-tastic toolchain, where char and int were the same size: both either 16 or 32 bits (I forget which, unfortunately). (In case anyone doesn't notice: on such a system, sizeof(int) is still 1, since sizeof is measured in units of char.)

And IIRC it's int16_t/uint16_t max.

Then we would be stuck with 16-bit integers. We are lucky to be at a point in time where integer types have been stable for a while, but it wasn't always so.

A very long time ago, the Microsoft C/C++ compiler used 16-bit ints. I had a boss who insisted we use long instead of int because he had been burned by this. It hadn't been a problem for at least 20 years, but that didn't matter to him.

My bad - that's a typo. I meant int16_t, the same as in the sample code.

Even 16-bit.

usize can even be 16 bits, when targeting small microcontrollers and vintage architectures.

Because they are using 16-bit unsigned integers?

You have an int16_t? How do you port it to a machine without a 16-bit type?

Hint: you're being done a favor by being forced to think about that.


That doesn’t prove that the chip operates at 16 bits. For example, we could do 18-bit multipliers (or anything >= 16) and still use 16-bit floats.

When referring to the Arduino, unless explicitly stating a specific board, most people actually mean the ATmega328P, which is an 8-bit chip (registers are 8 bits wide; 16-bit values are split across a high and a low register). It has 32 KB of flash program memory and 2 KB of RAM.

Indeed, on the Arduino platform an int is usually 16 bits, not because that's the chip's native word size but because that's the minimum range the standard requires. Though I can't say for sure, I highly doubt (and am now almost completely certain in my doubts) that it promotes two chars to ints to perform operations on them. Last night I also realized why: there are published cycle counts for operations on the various data types [0].

If what you say about integer promotion is true, there would be no difference in clock cycles between an int and a byte, but there is.

So in essence, either the Arduino toolchain is deviating from spec on promotion, or (more likely) avr-gcc performs the promotion on paper but optimizes it away under the as-if rule whenever the result is truncated back to 8 bits anyway, which would explain the cycle-count differences without any deviation.

[0]: https://forum.arduino.cc/index.php?topic=92684.msg696420#msg...

EDIT: I just realized I should actually look into Atmel's AVR specs, since the Arduino IDE is basically just a wrapper around the AVR toolchain.


Is there a reason it wouldn't be 16 bits like in C?
