
That crossed my mind; I really hope it isn't the case, because it would mean that a lot of people who should know these sorts of things as if their lives depended on it actually don't.

Now thanks to 2048, if it did turn out to be 16-bit overflow, maybe quite a lot more of the general public would understand this too...



Do you (or anyone else) have any idea why anyone could possibly have thought 16 bits would be enough? Many decisions are bad in hindsight, but surely no hindsight was needed for that.

16 bit should be enough for everybody.

I seriously hope the Qt guys aren't relying on the assumption that a Unicode character (whatever that means) fits in 16 bits.
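
For the concrete failure mode: a 16-bit code unit (which is what Qt's QChar is, since QString stores UTF-16) can only hold code points up to U+FFFF, and anything above the Basic Multilingual Plane needs a surrogate pair. A minimal C sketch, with the function name and the example code point chosen just for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Encode one code point as UTF-16: one unit if it fits in 16 bits,
       otherwise a high/low surrogate pair. */
    static int utf16_encode(uint32_t cp, uint16_t out[2]) {
        if (cp <= 0xFFFF) {                           /* fits in one 16-bit unit */
            out[0] = (uint16_t)cp;
            return 1;
        }
        cp -= 0x10000;                                /* 20 bits left to split */
        out[0] = (uint16_t)(0xD800 | (cp >> 10));     /* high surrogate */
        out[1] = (uint16_t)(0xDC00 | (cp & 0x3FF));   /* low surrogate */
        return 2;
    }

    int main(void) {
        uint16_t units[2];
        int n = utf16_encode(0x1F600, units);         /* an emoji: needs two units */
        printf("%d unit(s): %04X %04X\n", n, units[0], n > 1 ? units[1] : 0);
        return 0;
    }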

16 bit is not enough?

but 16 bits >> 8 bits? Can the AI community get by with 8 bits?

16-bit? Did they mean 4-bit?

But most of these are 16-bit...

> Most people assume that a byte is 8-bits which isn't true.

It is worth remembering that POSIX (and probably Windows too) mandates 8-bit chars, so there is no point being defensive about it on those particular platforms. And I kid you not, I have seen people who are, because "ISO C99 this and that".
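
A tiny check of that, assuming a hosted C toolchain:

    #include <limits.h>
    #include <stdio.h>

    /* POSIX requires CHAR_BIT to be exactly 8; ISO C only guarantees >= 8. */
    #if CHAR_BIT != 8
    #error "char is not 8 bits here, so this is not a POSIX platform"
    #endif

    int main(void) {
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        return 0;
    }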


You might be right; however, 16-bit sounds really harsh to my ears, and 24-bit is the only widely used standard that is better than 16-bit.
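
For reference, the usual back-of-the-envelope for linear PCM is about 6 dB of dynamic range per bit; a rough C sketch (ignoring dither and real-world converter noise):

    #include <math.h>
    #include <stdio.h>

    /* Theoretical dynamic range of linear PCM: 20*log10(2^bits). */
    int main(void) {
        int depths[] = { 8, 16, 24 };
        for (int i = 0; i < 3; i++)
            printf("%2d-bit: ~%.1f dB\n", depths[i],
                   20.0 * log10(pow(2.0, depths[i])));
        return 0;                   /* prints ~48.2, ~96.3, ~144.5 dB */
    }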

Even crazier, it's just a 16 bit value.

What are they gonna do when they run out?


I think the author was just suggesting this is due to 256 causing an 8-bit unsigned integer to overflow.
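
That is, the classic wraparound; a minimal C sketch with a fixed-width type:

    #include <stdint.h>
    #include <stdio.h>

    /* An 8-bit unsigned counter wraps at 256: arithmetic is modulo 256. */
    int main(void) {
        uint8_t count = 255;
        count += 1;                       /* 256 mod 256 == 0 */
        printf("255 + 1 = %u\n", count);  /* prints 0 */
        return 0;
    }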

Well, they're all forgetting a factor of 8 bits per pixel...

(and people have talked about doing 16-bit subsets)

Weird that they chose to make it harder for themselves and do it in 16-bit code.

It has nothing to do with 8-bit. Deep OS-level knowledge can and should be taught starting from 64-bit (where most OSes run), with maybe a tiny speck of 16-bit, because most BIOSes still run 16-bit Intel code.

It's about exposure to syscalls, C, assembly, and lower-level debugging, and just making sure people aren't afraid to touch that layer. The same goes for other foundational knowledge like packets and networking protocols.
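
As a small taste of what that exposure looks like, here is a minimal sketch that calls write(2) by syscall number instead of through the usual libc wrapper (this assumes 64-bit Linux with glibc):

    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello from a raw syscall\n";
        /* syscall(2) is the generic entry point; SYS_write is the
           write(2) syscall number, fd 1 is stdout. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }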


> There’s usually no need to go beyond 16-bit accuracy, and most of the time when you go to 8-bit accuracy there is too much loss of resolution.

I'm not sure this is accurate. From what I have seen, 8-bit quantization is usually fine, and even 4-bit is a viable tradeoff. Here are some benchmarks from TextSynth showing no significant degradation between 16 and 8 bit:

https://textsynth.com/technology.html
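
For anyone wondering what 8-bit quantization means mechanically, here is a rough C sketch of symmetric per-tensor int8 round-tripping. This is only an illustration of the general idea, not TextSynth's actual scheme, and the weights are made up:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        float w[] = { 0.31f, -1.20f, 0.07f, 2.45f, -0.66f };
        int n = (int)(sizeof w / sizeof w[0]);

        /* One scale for the whole tensor, chosen so the largest
           magnitude maps to 127. */
        float max_abs = 0.0f;
        for (int i = 0; i < n; i++)
            if (fabsf(w[i]) > max_abs) max_abs = fabsf(w[i]);
        float scale = max_abs / 127.0f;

        for (int i = 0; i < n; i++) {
            int8_t q = (int8_t)lrintf(w[i] / scale);  /* quantize   */
            float back = (float)q * scale;            /* dequantize */
            printf("%+.4f -> %4d -> %+.4f (err %+.5f)\n",
                   w[i], q, back, back - w[i]);
        }
        return 0;
    }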


Only if you'd been raised on Java.

There had been the shift from 16->32 bit not long before that, so it's not that much of a stretch, and 8->16 before that (well after I got my first computer).

We found an almost identical bit-mismatch bug in a programming language that caused a bitmap of TCP connections to appear to go crazy on a customer install of a core piece of software. These bugs cost.
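
The details of that bug aren't given above, but a hypothetical C sketch of the general class of mistake (a 32-bit shift operand used against a 64-bit bitmap word) looks like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical illustration, not the actual bug described above:
       mark connection `fd` as active in a bitmap of 64-bit words. */
    static void mark_buggy(uint64_t *words, unsigned fd) {
        /* BUG: the literal 1 is a 32-bit int, so shifting by fd % 64 >= 32
           is undefined behaviour and sets the wrong bit (or worse). */
        words[fd / 64] |= 1 << (fd % 64);
    }

    static void mark_fixed(uint64_t *words, unsigned fd) {
        words[fd / 64] |= UINT64_C(1) << (fd % 64);   /* 64-bit shift operand */
    }

    int main(void) {
        uint64_t a[1] = { 0 }, b[1] = { 0 };
        mark_buggy(a, 40);
        mark_fixed(b, 40);
        printf("buggy: %016llx\nfixed: %016llx\n",
               (unsigned long long)a[0], (unsigned long long)b[0]);
        return 0;
    }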


What? 8- and 16-bit values are neither 32-bit aligned nor 32-bit sized.
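
At least with the fixed-width types you can check that directly on a given implementation:

    #include <stdint.h>
    #include <stdio.h>

    /* Size and required alignment of the 8-, 16- and 32-bit types. */
    int main(void) {
        printf("uint8_t:  size %zu, align %zu\n", sizeof(uint8_t),  _Alignof(uint8_t));
        printf("uint16_t: size %zu, align %zu\n", sizeof(uint16_t), _Alignof(uint16_t));
        printf("uint32_t: size %zu, align %zu\n", sizeof(uint32_t), _Alignof(uint32_t));
        return 0;
    }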

That's not what they mean. The title should read '2048-bit'.