In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes (64 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.
Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. (Equivalently, ±0000000000000000×10^−398 to ±9999999999999999×10^369.) In contrast, the corresponding binary format, which is the most commonly used type, has an approximate range of ±0.000000000000001×10^−308 to ±1.797693134862315×10^308. Because the significand is not normalized, most values with fewer than 16 significant digits have multiple possible representations; 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Zero has 768 possible representations (1536 if you include both signed zeros).
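As a concrete illustration of such multiple representations, here is a short sketch using Python's decimal module. Decimal is an arbitrary-precision decimal type, not decimal64, but it likewise preserves unnormalized coefficients, so numerically equal values can carry distinct representations:

```python
from decimal import Decimal

# Numerically equal decimal values can carry different significand/exponent
# pairs, just as the members of a decimal64 "cohort" do.
a = Decimal("1E+2")    # significand 1,   exponent 2
b = Decimal("10E+1")   # significand 10,  exponent 1
c = Decimal("100")     # significand 100, exponent 0

print(a == b == c)     # True  -- all three denote the same value
print(a, b, c)         # 1E+2 1.0E+2 100  -- three distinct representations
```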
Decimal64 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754.
Representation of decimal64 values
IEEE 754 allows two alternative representation methods for decimal64 values. The standard does not specify how to signify which representation is used, for instance in a situation where decimal64 values are communicated between systems.
In one representation method, based on binary integer decimal, the significand is represented as a binary-encoded positive integer.
The other representation method is based on densely packed decimal for most of the significand (except the most significant digit).
Both alternatives provide exactly the same range of representable numbers: 16 digits of significand and 3×2^8 = 768 possible exponent values.
In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of a 5-bit field. The remaining combinations encode infinities and NaNs.
If the leading 4 bits of the significand are between 0 and 7, the number begins as follows:
s 00mmm xxx   Exponent begins with 00, significand with 0mmm
s 01mmm xxx   Exponent begins with 01, significand with 0mmm
s 10mmm xxx   Exponent begins with 10, significand with 0mmm
If the leading 4 bits of the significand are binary 1000 or 1001 (decimal 8 or 9), the number begins as follows:
s 1100m xxx   Exponent begins with 00, significand with 100m
s 1101m xxx   Exponent begins with 01, significand with 100m
s 1110m xxx   Exponent begins with 10, significand with 100m
The following bits (xxx in the above) encode the additional exponent bits and the remainder of the most significant digit, but the details vary depending on the encoding alternative used. There is no particular reason for this difference other than the history of the eight-year-long development of IEEE 754-2008.
The final combinations are used for infinities and NaNs, and are the same for both alternative encodings:
s 11110 x   ±Infinity (see Extended real number line)
s 11111 0   quiet NaN (sign bit ignored)
s 11111 1   signaling NaN (sign bit ignored)
In the latter cases, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to NaNs by filling it with a single byte value.
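The tables above amount to a small classification of the five bits that follow the sign bit. The sketch below is illustrative only (the function name is not from any library): it extracts the two exponent MSBs and whether the leading digit is 0-7 or 8-9, which is common to both encodings, while how the remaining bits complete the exponent and the leading digit depends on the encoding alternative, as described below.

```python
def classify_top_bits(bits5):
    """Classify a decimal64 value from the 5 bits that follow the sign bit.

    Returns one of
        ('finite', exp_msbs, '0-7')   # exponent begins 00/01/10, leading digit 0-7
        ('finite', exp_msbs, '8-9')   # exponent begins 00/01/10, leading digit 8 or 9
        ('infinity',)
        ('nan',)                      # quiet vs. signaling is the *following* bit

    Illustrative helper; the meaning of the later bits differs between the
    binary-integer and densely-packed-decimal alternatives.
    """
    if bits5 >> 3 != 0b11:                  # s 00mmm / s 01mmm / s 10mmm
        return ('finite', bits5 >> 3, '0-7')
    if (bits5 >> 1) & 0b11 != 0b11:         # s 1100m / s 1101m / s 1110m
        return ('finite', (bits5 >> 1) & 0b11, '8-9')
    if bits5 & 1 == 0:                      # s 11110
        return ('infinity',)
    return ('nan',)                         # s 11111


for b in (0b01001, 0b11011, 0b11110, 0b11111):
    print(format(b, "05b"), classify_top_bits(b))
# 01001 ('finite', 1, '0-7')
# 11011 ('finite', 1, '8-9')
# 11110 ('infinity',)
# 11111 ('nan',)
```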
Binary integer significand field
This format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂. The encoding can represent binary significands up to 10×2^50 − 1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16 − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).
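These limits are easy to verify, for instance in Python:

```python
print(10**16 - 1 == 0x2386F26FC0FFFF)       # True: largest legal significand
print(10 * 2**50 - 1 == 0x27FFFFFFFFFFFF)   # True: largest encodable binary significand
print((10 * 2**50 - 1).bit_length())        # 54: the widest significand needs 54 bits
```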
As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).
If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 10 bits following the sign bit, and the significand is the remaining 53 bits, with an implicit leading 0 bit:
s 00eeeeeeee   (0)TTT tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 01eeeeeeee   (0)TTT tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 10eeeeeeee   (0)TTT tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
This includes subnormal numbers where the leading significand digit is 0.
If the 4 bits after the sign bit are "1100", "1101", or "1110", then the 10-bit exponent field is shifted 2 bits to the right (after both the sign bit and the "11" bits thereafter), and the represented significand is in the remaining 51 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" in the true significand.
s 11 00eeeeeeee   (100)T tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 11 01eeeeeeee   (100)T tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 11 10eeeeeeee   (100)T tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
The "11" 2-bit sequence after the sign bit indicates that there is an implicit "100" 3-bit prefix to the significand. Compare having an implicit 1 in the significand of normal values for the binary formats. Note also that the "00", "01", or "10" bits are part of the exponent field.
Note that the leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number. For example, a significand of 8000000000000000 is encoded as binary 011100011010111111010100100110001101000000000000000000, with the leading 4 bits encoding 7; the first significand which requires a 54th bit is 2^53 = 9007199254740992.
In the above cases, the value represented is
(−1)^sign × 10^(exponent − 398) × significand
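Putting the two cases together, a decoder for finite BID-encoded decimal64 values can be sketched as follows. The function names are hypothetical, and infinities and NaNs (described next) are excluded:

```python
from decimal import Decimal

def decode_bid64(x):
    """Split a 64-bit BID-encoded decimal64 into (sign, exponent field, significand).

    Finite values only; see the special-value patterns below for infinities and
    NaNs.  A sketch following the layout above, not a library routine.
    """
    assert (x >> 58) & 0b11111 not in (0b11110, 0b11111), "infinity or NaN"
    sign = x >> 63
    if (x >> 61) & 0b11 != 0b11:
        # s eeeeeeeeee (0)ttt...: 10 exponent bits after the sign bit,
        # 53 stored significand bits with an implicit leading 0.
        exponent = (x >> 53) & 0x3FF
        significand = x & ((1 << 53) - 1)
    else:
        # s 11 eeeeeeeeee (100)t...: exponent field shifted right by 2 bits,
        # 51 stored significand bits with an implicit leading 100.
        exponent = (x >> 51) & 0x3FF
        significand = (0b100 << 51) | (x & ((1 << 51) - 1))
    if significand > 10**16 - 1:
        significand = 0            # out-of-range significands are read as zero
    return sign, exponent, significand

def bid64_value(x):
    """Return the value (-1)^sign * 10^(exponent - 398) * significand as a Decimal."""
    sign, exponent, significand = decode_bid64(x)
    return Decimal(significand).scaleb(exponent - 398) * (-1 if sign else 1)

x = (397 << 53) | 15        # sign 0, exponent field 397, significand 15
print(decode_bid64(x))      # (0, 397, 15)
print(bid64_value(x))       # 1.5, i.e. 15 x 10^(397 - 398)
```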
If the four bits after the sign bit are "1111" then the value is an infinity or a NaN, as described above:
s 11110 xx...x   ±infinity
s 11111 0x...x   a quiet NaN
s 11111 1x...x   a signaling NaN
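A corresponding check for the special values might look like this (again an illustrative sketch, not a library API):

```python
def classify_bid64_special(x):
    """Return 'infinity', 'qNaN', 'sNaN', or 'finite' for a 64-bit decimal64 value."""
    top5 = (x >> 58) & 0b11111
    if top5 == 0b11110:
        return 'infinity'                            # s 11110 xx...x
    if top5 == 0b11111:
        return 'sNaN' if (x >> 57) & 1 else 'qNaN'   # s 11111 1x... / s 11111 0x...
    return 'finite'

print(classify_bid64_special(0b11110 << 58))    # infinity (here +Infinity)
print(classify_bid64_special(0b111110 << 57))   # qNaN
print(classify_bid64_special(0b111111 << 57))   # sNaN
```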
See also
- IEEE Standard for Floating-Point Arithmetic (IEEE 754)
- ISO/IEC 10967, Language Independent Arithmetic
- Primitive data type