In [[computing]], '''octuple precision''' is a binary [[floating-point]]-based [[computer number format]] that occupies 32 [[byte]]s (256 [[bit]]s) in computer memory. This 256-bit format is intended for applications that require results in higher than [[quadruple precision]]. It is rarely used, and very little software or hardware supports it.
{{Floating-point}}
== IEEE 754 octuple-precision binary floating-point format: binary256 ==
In its 2008 revision, the [[IEEE 754]] standard specifies a '''binary256''' format among the ''interchange formats'' (it is not a basic format), as having:
* [[Sign bit]]: 1 bit
* [[Exponent]] width: 19 bits
* [[Significand]] [[precision (arithmetic)|precision]]: 237 bits (236 explicitly stored)
<!-- "significand", with a d at the end, is a technical term, please do not confuse with "significant" -->
The format is written with an implicit lead bit with value 1 unless the exponent is all zeros. Thus only 236 bits of the [[significand]] appear in the memory format, but the total precision is 237 bits (approximately 71 decimal digits: {{nowrap|log<sub>10</sub>(2<sup>237</sup>) ≈ 71.344}}).
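The quoted figure of roughly 71 decimal digits can be checked with a short Python sketch (illustrative; the calculation only assumes the 237-bit total precision stated above):

```python
import math

# Total significand precision of IEEE 754 binary256:
# 1 implicit leading bit + 236 explicitly stored bits.
PRECISION_BITS = 237

# Equivalent number of decimal digits: log10(2^237) = 237 * log10(2).
decimal_digits = PRECISION_BITS * math.log10(2)
print(decimal_digits)  # ~71.344
```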
The bits are laid out as follows:
[[File:Octuple persision visual demontration.png|1000px|Layout of octuple precision floating point format]]
=== Exponent encoding ===
The octuple-precision binary floating-point exponent is encoded using an [[offset binary]] representation, with the zero offset (known as the exponent bias in the IEEE 754 standard) being 262143.
* E<sub>min</sub> = −262142
* E<sub>max</sub> = 262143
* [[Exponent bias]] = 3FFFF<sub>16</sub> = 262143
Thus, as defined by the offset binary representation, in order to get the true exponent the offset of 262143 has to be subtracted from the stored exponent.
The stored exponents 00000<sub>16</sub> and 7FFFF<sub>16</sub> are interpreted specially.
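The bias subtraction for normal values can be sketched in a few lines of Python (illustrative; the function and its name are for demonstration only):

```python
# Sketch: recovering the true exponent of a normal binary256 value
# from its stored (biased) exponent, using the bias 0x3FFFF = 262143.
EXPONENT_BIAS = 262143

def true_exponent(stored_exponent: int) -> int:
    """Subtract the bias. Valid only for normal values, i.e. stored
    exponents that are neither all zeros nor all ones."""
    return stored_exponent - EXPONENT_BIAS

print(true_exponent(0x3FFFF))  # 0       -> value in [1, 2)
print(true_exponent(0x00001))  # -262142 (E_min)
print(true_exponent(0x7FFFE))  # 262143  (E_max)
```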
{|class="wikitable" style="text-align:center"
! Exponent !! Significand zero !! Significand non-zero !! Equation
|-
| 00000<sub>16</sub> || [[0 (number)|0]], [[−0]] || [[subnormal numbers]] || <math>(-1)^{\text{signbit}} \times 2^{-262142} \times 0.\text{significandbits}_2</math>
|-
| 00001<sub>16</sub>, ..., 7FFFE<sub>16</sub> ||colspan=2| normalized value || <math>(-1)^{\text{signbit}} \times 2^{{\text{exponentbits}_2} - 262143} \times 1.\text{significandbits}_2</math>
|-
| 7FFFF<sub>16</sub> || ±[[infinity|∞]] || [[NaN]] (quiet, signalling) ||
|}
The minimum strictly positive (subnormal) value is {{nowrap|2<sup>−262378</sup> ≈ 10<sup>−78984</sup>}} and has a precision of only one bit.
The minimum positive normal value is 2<sup>−262142</sup> ≈ 2.4824 × 10<sup>−78913</sup>.
The maximum representable value is 2<sup>262144</sup> − 2<sup>261907</sup> ≈ 1.6113 × 10<sup>78913</sup>.
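Because Python integers have arbitrary precision, the decimal magnitudes quoted above can be confirmed exactly, without any floating-point rounding (a verification sketch, not part of any standard library for binary256):

```python
# Exact integer checks of the extreme binary256 values.

# Maximum finite value: (2 - 2^-236) * 2^262143 = 2^262144 - 2^261907.
max_value = 2**262144 - 2**261907
# It has 78914 decimal digits, i.e. max_value ~ 1.6e78913.
assert 10**78913 <= max_value < 10**78914

# Smallest positive subnormal: 2^-262142 * 2^-236 = 2^-262378.
# Its reciprocal 2^262378 has 78984 decimal digits, so the
# subnormal itself is on the order of 10^-78984.
assert 10**78983 <= 2**262378 < 10**78984

print("extreme values confirmed")
```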
=== Octuple-precision examples ===
These examples are given in bit ''representation'', in [[hexadecimal]],
of the floating-point value. This includes the sign, (biased) exponent, and significand.
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = +0
8000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = −0
7fff f000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = +infinity
ffff f000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = −infinity
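Splitting such a hexadecimal pattern into its three fields can be sketched as follows (the helper function is illustrative, not a standard API):

```python
# Sketch: decoding a 256-bit binary256 pattern, given as a hex string,
# into its sign (1 bit), biased exponent (19 bits), and stored
# significand (236 bits).
def decode_binary256(hex_string: str):
    bits = int(hex_string.replace(" ", ""), 16)
    sign = bits >> 255                      # top bit
    exponent = (bits >> 236) & 0x7FFFF      # next 19 bits
    significand = bits & ((1 << 236) - 1)   # low 236 bits
    return sign, exponent, significand

plus_inf = "7fff f000" + " 0000" * 14
print(decode_binary256(plus_inf))   # (0, 0x7FFFF, 0): all-ones exponent

minus_zero = "8000" + " 0000" * 15
print(decode_binary256(minus_zero)) # (1, 0, 0): only the sign bit set
```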
By default, 1/3 rounds down (as in [[double precision]]) because of the odd number of bits in the significand:
the bits beyond the rounding point are <code>0101...</code>, which is less than 1/2 of a [[unit in the last place]].
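The claim about 1/3 can be verified with exact integer arithmetic (a sketch using Python's built-in arbitrary-precision integers):

```python
# Why 1/3 rounds down in binary256:
# 1/3 = 1.0101...b * 2^-2, so its 237-bit significand is
# floor((4/3) * 2^236) = floor(2^238 / 3), and the discarded tail,
# measured in units in the last place (ulp), is the remainder over 3.
quotient, remainder = divmod(2**238, 3)

assert remainder == 1        # discarded tail = 1/3 ulp = 0.0101...b
assert remainder * 2 < 3     # 1/3 < 1/2 ulp, so round-to-nearest rounds down
print("1/3 rounds down")
```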
==Implementations==
Octuple precision is rarely implemented, since the need for it is extremely rare. [[Apple Inc.]] had an implementation of addition, subtraction and multiplication of octuple-precision numbers with a 224-bit [[two's complement]] significand and a 32-bit exponent.<ref>{{cite web | url=http://images.apple.com/ca/acg/pdf/oct3a.pdf | title=Octuple-precision floating point on Apple G4 | author1=R. Crandall | author2=J. Papadopoulos | date=8 May 2002}}</ref> One can use general [[arbitrary-precision arithmetic]] libraries to obtain octuple (or higher) precision, but specialized octuple-precision implementations may achieve higher performance.
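As a sketch of the arbitrary-precision route, Python's standard <code>decimal</code> module can be configured to exceed binary256's roughly 71 significant decimal digits. (This illustrates the general approach only: <code>decimal</code> is a radix-10 format, not binary256 arithmetic.)

```python
from decimal import Decimal, getcontext

# binary256 carries about 71 significant decimal digits;
# request slightly more than that from the decimal context.
getcontext().prec = 75

one_third = Decimal(1) / Decimal(3)
print(one_third)  # 0.333...3 with 75 significant digits
```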
=== Hardware support ===
There is little to no hardware support for octuple precision. Octuple-precision arithmetic is too specialized for most commercial applications, so hardware implementations are very rare, if they exist at all.
== Processing statistics ==
Since an octuple-precision number occupies 32 bytes (256 bits) of storage, moving a single value across the main data bus requires at least the following number of transfers:
* [[8-bit|8-bit architecture]] – 32 separate bus transfers<sup>1</sup>
* [[16-bit|16-bit architecture]] – 16 separate bus transfers
* [[x86|32-bit (x86) architecture]] – 8 separate bus transfers
* [[x86-64|64-bit (x64) architecture]] – 4 separate bus transfers
<br />
<sup>1</sup>A theoretical figure: on many small 8-bit systems, storing even a single binary256 value would consume a significant fraction of the available memory, making the format impractical there.
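The transfer counts above follow directly from dividing the value's width by the bus width, as this small sketch shows:

```python
# Minimum number of full-width bus transfers needed to move one
# 256-bit binary256 value, for each data-bus width listed above.
VALUE_BITS = 256

for bus_width in (8, 16, 32, 64):
    transfers = VALUE_BITS // bus_width
    print(f"{bus_width}-bit bus: {transfers} transfers")
# 8-bit: 32, 16-bit: 16, 32-bit: 8, 64-bit: 4
```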
== See also ==
* [[IEEE 754-2008|IEEE Standard for Floating-Point Arithmetic (IEEE 754)]]
* [[ISO/IEC 10967]], Language-independent arithmetic
* [[Primitive data type]]
== References ==
{{reflist}}
{{data types}}
[[Category:Binary arithmetic]]
[[Category:Floating point types]]