Indeed, 0.1 can be represented exactly in decimal floating point, and can't be represented in binary fixed point. It's just that fractional values are currently almost always represented using binary floating point, so the two get conflated.
Sure. But the inability to represent 1/3 in decimal is just the same as the inability to represent 1/10 in binary. The binary case just seems different because we are using a foreign-base (decimal) literal "0.1" to express the binary number.
decimal is still floating-point, just decimal floating-point instead of binary. Things like 1/3 still can't be represented exactly. It's just that for values given in base 10 there's no change of base in the representation, so 0.3 remains exactly 0.3.
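Python's decimal module shows both halves of this in a few lines: 0.1 is exact, but 1/3 still has to be rounded to the context precision (a quick sketch):

```python
from decimal import Decimal, getcontext

# 0.1 is exact in decimal floating point
print(Decimal("0.1"))          # 0.1, exactly

# ...but 1/3 must still be rounded to the context precision
getcontext().prec = 10
print(Decimal(1) / Decimal(3)) # 0.3333333333 (10 digits, rounded)

# contrast with binary floating point, where 0.1 is already inexact
print(f"{0.1:.20f}")           # 0.10000000000000000555
```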
that's slightly incoherent and reflective of the same confusion i identified; there is, strictly speaking, no such thing as a decimal number, only a decimal numeral. normally the difference is subtle, but in this case it's the essence of the issue
'decimal' means 'base 10'
numbers, the abstract entities we do arithmetic on, are not decimal or in any other base; it is correct that arithmetic on them is not in any sense lossy
decimal is a system of representation that represents these abstract numbers as finite strings of digits; these are called 'numerals'. it can represent any integer, but only some fractions: those whose denominator has no prime factors other than 2 or 5. fractions outside that set arise non-lossily as the result of the arithmetic operation of division when, for example, the divisor is 3. representing these in decimal requires rounding them, so decimal is lossy
binary floating point is lossy in the same way, with the additional limitations that it can only represent fractions whose denominator is a power of 2, whose numerator does not have too many significant bits (53, most often), and which are not too positive, too negative, or too small
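Python's fractions.Fraction, constructed from a float, exposes exactly this: the value a double actually stores is a fraction whose denominator is a power of two.

```python
from fractions import Fraction

# the exact rational value of the double nearest to 0.1
f = Fraction(0.1)
print(f)                        # 3602879701896397/36028797018963968
print(f.denominator == 2 ** 55) # True: power-of-two denominator
print(float(f) == 0.1)          # True: this IS the stored value
```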
there are other systems of representation for numbers that are not lossy in this way. for example, improper fractions, mixed numbers, finite continued fractions, and decimal augmented with an overbar to indicate repeated digits
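Python ships one such non-lossy representation, fractions.Fraction, which does exact rational arithmetic with no rounding anywhere:

```python
from fractions import Fraction

# 1/3 stays exactly 1/3; arithmetic is exact
x = Fraction(1, 3) + Fraction(1, 6)
print(x)                         # 1/2
print(Fraction(1, 3) * 3 == 1)   # True: dividing by 3 and back is lossless
```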
"Decimal" usually refers to a data type which is "integer, shifted by a known number of decimal places". So, for example, if you had an amount of money $123.45, you could represent that as a 32-bit floating point number with (sign=0 (positive), exponent=133, mantissa=7792230) which is 123.4499969482421875, but you would probably be better off representing it with a decimal type which represents the number as (integer part=12345, shift=2), or as just a straight integer number of cents.
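Those float32 fields can be checked with Python's struct module, assuming IEEE 754 single precision (which struct's "f" format uses):

```python
import struct

# round-trip 123.45 through a 32-bit float and pull out the bit fields
bits = struct.unpack("<I", struct.pack("<f", 123.45))[0]
sign = bits >> 31
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF
print(sign, exponent, mantissa)  # 0 133 7792230
```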
The number base is relevant, because money is discrete, and measured in units of exactly 1/10^n, and if you try to use floating point numbers to represent that you will cause your future self a world of pain.
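The classic demonstration in Python: ten dimes summed as binary floats don't make a dollar, while integer cents always do.

```python
# summing ten dimes in binary floating point misses a dollar
total_float = sum([0.10] * 10)
print(total_float)           # 0.9999999999999999
print(total_float == 1.0)    # False

# integer cents stay exact
total_cents = sum([10] * 10)
print(total_cents == 100)    # True
```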
Decimal means base 10. Decem is the Latin word for ten (decimus means "tenth"), and deci- is the SI prefix for 1/10, e.g. decimeter is 1/10 of a meter. Decimal floating point is x*10^y whereas binary floating point is x*2^y . Compare to hexadecimal, which means base 16.
Surely you mean integer, not "decimal", right? I'm not an expert, but to my understanding none of the values in the bitcoin protocol are expressed as base-10 fractions.
It's a nit, obviously, but if you're going to ding someone for naive mistakes...
Computers can be made to understand base 10 though, if needed. We do so routinely when amounts of money are involved so a cent is a cent, even though 1/100 is nonterminating as a binary fraction.
Think about all the tricks you can do in decimal, but in base 2 instead.
E.g. in decimal you can multiply by ten by appending a zero, so in binary you multiply by 2 the same way. And left-shifting by 1 is what appends that zero.
Or you can approximate log base ten in decimal by counting digits, so in binary you can approximate log2 the same way (counting up to the most significant one-bit). Etc...
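Both tricks in Python, for anyone who wants to try them (a quick sketch):

```python
n = 13                      # 0b1101

# appending a zero bit = multiply by 2
# (like appending a 0 in decimal multiplies by 10)
print(n << 1)               # 26

# floor(log2) = position of the most significant one-bit,
# like counting digits for a rough log10
print(n.bit_length() - 1)   # 3, and indeed 2**3 <= 13 < 2**4
```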
> The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28
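That quoted layout (sign, integer, power-of-ten scale) can be sketched in Python with Decimal.scaleb, which multiplies by 10 raised to a given exponent; the field values below are made up for illustration:

```python
from decimal import Decimal

# value = (-1)**sign * integer / 10**exponent
sign, integer, exponent = 0, 1234500, 4
value = Decimal(integer).scaleb(-exponent) * (-1) ** sign
print(value)   # 123.4500
```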
That's silly, in math those two are just different notations for the same thing, and usually fractions are preferred.
Also, anyone who's done any bit of programming should know that numbers as represented by a computer are discrete, while mathematics deals with symbolic relations which might require an infinite amount of data to represent numerically.
Ah yes, you're right. I should have said "fractional numbers". Sorry, thinking of "DECIMAL" as being "numbers with decimal points" is a result of writing too many MySQL [1] schemas :)
Any exact decimal representation of a specific binary floating-point number that's finite and not an integer must end in the digit '5' (perhaps with trailing zeroes). This is because its fractional part is (the sum of) a set of powers of two with negative exponents, and their exact decimal representations (0.5, 0.25, 0.125, &c.) all end in '5' (proof by induction is obvious and left to the reader).
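This is easy to check empirically in Python, since constructing a Decimal directly from a float gives the float's exact decimal expansion:

```python
import random
from decimal import Decimal

# Decimal(float) is exact: the full decimal expansion of the double
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625

for _ in range(1000):
    x = random.uniform(1, 1000)
    s = str(Decimal(x))
    if "." in s:                 # finite and not an integer
        assert s.endswith("5"), s
print("every sampled non-integer double's exact decimal ends in 5")
```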
decimal: Not base 10, but fractional, i.e. numbers with a decimal point.
binary: Not just 0 or 1, but base 2.
0.01 read as a binary numeral is a positive fractional number, equal to 0.25 in base 10.
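For anyone who wants to play with this, here's a hypothetical little helper (parse_binary_fraction is my own name, not a standard function) that interprets a numeral like "0.01" in base 2:

```python
def parse_binary_fraction(s):
    """Interpret a numeral string like '0.01' as base 2."""
    intpart, _, fracpart = s.partition(".")
    value = int(intpart, 2) if intpart else 0
    # each bit after the point contributes bit * 2**-position
    for i, bit in enumerate(fracpart, start=1):
        value += int(bit) * 2 ** -i
    return value

print(parse_binary_fraction("0.01"))  # 0.25
print(parse_binary_fraction("10.1"))  # 2.5
```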