{{short description|32-bit computer number format}}
{{Cleanup|reason=<br/>{{*}} This article doesn't provide a good structure to lead users from easy to deeper understanding<br/>{{*}} Some points are 'explained' by lengthy examples instead of concise description of the concept|date=January 2025}}
'''Single-precision floating-point format''' (sometimes called '''FP32''' or '''float32''') is a [[computer number format]], usually occupying [[32 bits]] in [[computer memory]]; it represents a wide [[dynamic range]] of numeric values by using a [[floating point|floating radix point]].
A floating-point variable can represent a wider range of numbers than a [[fixed-point arithmetic|fixed-point]] variable of the same bit width at the cost of precision. A [[signedness|signed]] 32-bit [[integer]] variable has a maximum value of 2<sup>31</sup> − 1 = 2,147,483,647, whereas an [[IEEE 754]] 32-bit base-2 floating-point variable has a maximum value of (2 − 2<sup>−23</sup>) × 2<sup>127</sup> ≈ 3.4028235 × 10<sup>38</sup>. All integers with seven or fewer decimal digits, and any 2<sup>''n''</sup> for a whole number −149 ≤ ''n'' ≤ 127, can be converted exactly into an IEEE 754 single-precision floating-point value.
One of the first [[programming language]]s to provide single- and double-precision floating-point data types was [[Fortran]]. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the [[computer manufacturer]] and computer model, and upon decisions made by programming-language designers. E.g., [[GW-BASIC]]'s single-precision data type was the [[32-bit MBF]] floating-point format.
Single precision is termed ''REAL(4)'' or ''REAL*4'' in [[Fortran]];<ref>{{cite web|url=http://scc.ustc.edu.cn/zlsc/sugon/intel/compiler_f/main_for/lref_for/source_files/rfreals.htm|title=REAL Statement|website=scc.ustc.edu.cn|access-date=2013-02-28|archive-date=2021-02-24|archive-url=https://web.archive.org/web/20210224045812/http://scc.ustc.edu.cn/zlsc/sugon/intel/compiler_f/main_for/lref_for/source_files/rfreals.htm|url-status=dead}}</ref> ''SINGLE-FLOAT'' in [[Common Lisp]];<ref>{{Cite web|url=https://www.lispworks.com/documentation/HyperSpec/Body/t_short_.htm|title=CLHS: Type SHORT-FLOAT, SINGLE-FLOAT, DOUBLE-FLOAT...|website=www.lispworks.com}}</ref> ''float binary(p)'' with p≤21, ''float decimal(p)'' with the maximum value of p depending on whether the DFP (IEEE 754 DFP) attribute applies, in PL/I; ''float'' in [[C (programming language)|C]] with IEEE 754 support, [[C++]] (if it is in C), [[C Sharp (programming language)|C#]] and [[Java (programming language)|Java]];<ref>{{cite web|url=https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html|title=Primitive Data Types|website=Java Documentation}}</ref> ''Float'' in [[Haskell (programming language)|Haskell]]<ref>{{cite web|url=https://www.haskell.org/onlinereport/haskell2010/haskellch6.html#x13-1350006.4|title=6 Predefined Types and Classes|date=20 July 2010|website=haskell.org}}</ref> and [[Swift (programming language)|Swift]];<ref>{{cite web|url=https://developer.apple.com/documentation/swift/float|title=Float|website=Apple Developer Documentation}}</ref> and ''Single'' in [[Object Pascal]] ([[Delphi (programming language)|Delphi]]), [[Visual Basic]], and [[MATLAB]]. However, ''float'' in [[Python (programming language)|Python]], [[Ruby (programming language)|Ruby]], [[PHP]], and [[OCaml]] and ''single'' in versions of [[GNU Octave|Octave]] before 3.2 refer to [[double-precision floating-point format|double-precision]] numbers. 
In most implementations of [[PostScript]], and some [[embedded systems]], the only supported precision is single.
{{Floating-point}}
== IEEE 754 standard: binary32<span class="anchor" id="IEEE 754 single-precision binary floating-point format: binary32"></span> ==
The IEEE 754 standard specifies a ''binary32'' as having:
* [[Sign bit]]: 1 bit
This gives from 6 to 9 [[significant figures|significant decimal digits]] precision. If a decimal string with at most 6 significant digits is converted to the IEEE 754 single-precision format, giving a [[Normal number (computing)|normal number]], and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant digits, and then converted back to single-precision representation, the final result must match the original number.<ref name=whyieee>{{cite web |url=https://people.eecs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF |title=Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic |author=William Kahan |date=1 October 1997 |page=4|archive-url=https://web.archive.org/web/20120208075518/http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF|archive-date=8 February 2012}}</ref>
The sign bit determines the sign of the number, which is the sign of the significand as well. "1" stands for negative. The exponent field is an 8-bit unsigned integer from 0 to 255, in [[Exponent bias|biased form]]: a value of 127 represents the actual exponent zero. Exponents range from −126 to +127 (thus 1 to 254 in the exponent field), because the biased exponent values 0 (all 0s) and 255 (all 1s) are reserved for special numbers ([[subnormal number]]s, [[signed zero]]s, [[infinity|infinities]], and [[NaN]]s).
The true significand of normal numbers includes 23 fraction bits to the right of the binary point and an ''[[implicit leading bit]]'' (to the left of the binary point) with value 1. Subnormal numbers and zeros (which are the floating-point numbers smaller in magnitude than the least positive normal number) are represented with the biased exponent value 0, giving the implicit leading bit the value 0. Thus only 23 fraction bits of the [[significand]] appear in the memory format, but the total precision is 24 bits (equivalent to log<sub>10</sub>(2<sup>24</sup>) ≈ 7.225 decimal digits) for normal values; subnormals have gracefully degrading precision down to 1 bit for the smallest non-zero value.
The bits are laid out as follows:
[[Image:Float example.svg]]
The real value assumed by a given 32-bit ''binary32'' data with a given ''sign'', biased exponent ''e'' (the 8-bit unsigned integer), and a 23-bit ''fraction'' is
: <math>(-1)^{b_{31}} \times 2^{(b_{30}b_{29} \dots b_{23})_2 - 127} \times (1.b_{22}b_{21} \dots b_0)_2</math>,
which yields
* <math>2^{+127} \approx 1.701\,411\,83 \times 10^{+38}</math>.
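The decoding formula above can be checked mechanically. The sketch below (Python, using only the standard-library `struct` module) extracts the three fields from a raw bit pattern and applies the formula for normal numbers, then compares the result against the hardware interpretation of the same bits; the function name is illustrative, not part of any standard API.

```python
import struct

def decode_binary32(bits: int) -> float:
    """Apply the binary32 formula to a raw 32-bit pattern (normal numbers only)."""
    sign = (bits >> 31) & 0x1          # b31
    exponent = (bits >> 23) & 0xFF     # b30..b23, biased by 127
    fraction = bits & 0x7FFFFF         # b22..b0
    # (-1)^sign * 2^(exponent - 127) * (1 + fraction / 2^23)
    return (-1.0) ** sign * 2.0 ** (exponent - 127) * (1 + fraction / 2 ** 23)

# Cross-check against the machine's interpretation of the same bit pattern:
pattern = 0x3E200000  # the 0.15625 example pictured above
assert decode_binary32(pattern) == struct.unpack('>f', struct.pack('>I', pattern))[0]
```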
=== Exponent encoding ===
The single-precision binary floating-point exponent is encoded using an [[offset binary|offset-binary]] representation, with the zero offset being 127; this offset is known as the exponent bias in the IEEE 754 standard.
* E<sub>min</sub> = 01<sub>H</sub>−7F<sub>H</sub> = −126
The stored exponents 00<sub>H</sub> and FF<sub>H</sub> are interpreted specially.
{|class="wikitable" style="text-align: center;"
|-
! Exponent !! fraction = 0 !! fraction ≠ 0 !! Equation
|-
| 00<sub>H</sub> = 00000000<sub>2</sub> || [[signed zero|zero]] || [[subnormal number]] || <math>(-1)^\text{sign}\times2^{-126}\times 0.\text{fraction}</math>
|-
| 01<sub>H</sub>, ..., FE<sub>H</sub> = 00000001<sub>2</sub>, ..., 11111110<sub>2</sub>||colspan=2| normal value || <math>(-1)^\text{sign}\times2^{\text{exponent}-127}\times1.\text{fraction}</math>
|-
| FF<sub>H</sub> = 11111111<sub>2</sub> || ±[[infinity]] || [[NaN]] (quiet, signaling) ||
|}
The minimum positive normal value is <math>2^{-126} \approx 1.18 \times 10^{-38}</math> and the minimum positive (subnormal) value is <math>2^{-149} \approx 1.4 \times 10^{-45}</math>.
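These two boundary values can be produced directly from their bit patterns. A minimal check in Python (standard-library `struct` only): the all-zero exponent with fraction 1 yields the smallest subnormal, and exponent field 1 with fraction 0 yields the smallest normal number.

```python
import struct

# Smallest positive subnormal: exponent field 0, fraction 1 -> 2**-149
tiny = struct.unpack('>f', struct.pack('>I', 0x00000001))[0]
# Smallest positive normal: exponent field 1, fraction 0 -> 2**-126
small = struct.unpack('>f', struct.pack('>I', 0x00800000))[0]

assert tiny == 2.0 ** -149   # ~1.4e-45
assert small == 2.0 ** -126  # ~1.18e-38
```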
=== Converting decimal to binary32 ===
{{Original research section|date=February 2020}}
{{Confusing section|reason=the examples are simple particular cases (simple values exactly representable in binary, without an exponent part). This section is also probably off-topic: this is not an article about conversion, and conversion from decimal using decimal arithmetic (as opposed to conversion from a character string) is uncommon|date=February 2020}}
: <math>(0.375)_{10} = (0\ 01111101\ 10000000000000000000000)_{2} = (\text{3EC00000})_{16}</math>
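The worked result for 0.375 can be verified by packing the value to binary32 and reading back the raw bits (a quick sketch in Python's standard-library `struct` module):

```python
import struct

# 0.375 = 1.1b x 2**-2: sign 0, biased exponent 125 (01111101b), fraction 100...0b
bits = struct.unpack('>I', struct.pack('>f', 0.375))[0]
assert bits == 0x3EC00000  # matches the hand conversion above
```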
=== Converting binary32 to decimal ===
{{Original research section|date=February 2020}}
{{Confusing section|reason=there is only a very simple example, without rounding. This section is also probably off-topic: this is not an article about conversion, and conversion to decimal, using decimal arithmetic, is uncommon|date=February 2020}}
Each of the 24 bits of the significand (including the implicit 24th bit), bit 23 to bit 0, represents a value, starting at 1 and halving for each subsequent bit, as follows:
<pre>
bit 23 = 1
bit 22 = 0.5
bit 21 = 0.25
...
bit 17 = 0.015625
...
bit 0 = 0.00000011920928955078125
</pre>
The significand in this example has three bits set: bit 23, bit 22, and bit 19. We can now decode the significand by adding the values represented by these bits.
where {{mvar|s}} is the sign bit, {{mvar|x}} is the exponent, and {{mvar|m}} is the significand.
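As a concrete check, the bit pattern 41C80000<sub>16</sub> has the significand described above (bits 23, 22, and 19 set, giving 1 + 0.5 + 0.0625 = 1.5625) and biased exponent 131, so the decoded value should be 1.5625 × 2<sup>4</sup> = 25. A short Python sketch using the standard-library `struct` module:

```python
import struct

# 0x41C80000: sign 0, biased exponent 0x83 = 131 -> 2**(131-127) = 16,
# significand bits 23 (implicit), 22, and 19 set -> 1 + 0.5 + 0.0625 = 1.5625
value = struct.unpack('>f', struct.pack('>I', 0x41C80000))[0]
assert value == 1.5625 * 2.0 ** 4 == 25.0
```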
=== Precision limitations on decimal values (between 1 and 16777216) ===
* Decimals between 1 and 2: fixed interval 2<sup>−23</sup> (1+2<sup>−23</sup> is the next largest float after 1)
* Decimals between 2 and 4: fixed interval 2<sup>−22</sup>
* Decimals between 4 and 8: fixed interval 2<sup>−21</sup>
* ...
* Decimals between 2<sup>n</sup> and 2<sup>n+1</sup>: fixed interval 2<sup>n−23</sup>
* ...
* Decimals between 2<sup>22</sup>=4194304 and 2<sup>23</sup>=8388608: fixed interval 2<sup>−1</sup>=0.5
* Decimals between 2<sup>23</sup>=8388608 and 2<sup>24</sup>=16777216: fixed interval 2<sup>0</sup>=1
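The spacing between consecutive binary32 values can be observed by incrementing the raw bit pattern. The helper below is an illustrative sketch (Python, standard-library `struct` only; it assumes a positive, finite input), not a standard API:

```python
import struct

def next_up(x: float) -> float:
    """Next representable binary32 value above x (x positive, finite)."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return struct.unpack('>f', struct.pack('>I', bits + 1))[0]

assert next_up(1.0) == 1.0 + 2.0 ** -23   # interval 2**-23 between 1 and 2
assert next_up(8388608.0) == 8388609.0    # interval 1 between 2**23 and 2**24
```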
=== Precision limitations on integer values ===
* Integers between 0 and 16777216 can be exactly represented (also applies for negative integers between −16777216 and 0)
* Integers between 2<sup>24</sup>=16777216 and 2<sup>25</sup>=33554432 round to a multiple of 2 (even number)
* Integers between 2<sup>25</sup> and 2<sup>26</sup> round to a multiple of 4
* ...
* Integers between 2<sup>n</sup> and 2<sup>n+1</sup> round to a multiple of 2<sup>n−23</sup>
* ...
* Integers between 2<sup>127</sup> and 2<sup>128</sup> round to a multiple of 2<sup>104</sup>
* Integers greater than or equal to 2<sup>128</sup> are rounded to "infinity".
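The loss of integer precision above 2<sup>24</sup> is easy to demonstrate by rounding a double-precision value through binary32. A sketch in Python using only the standard-library `struct` module (the helper name is illustrative):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (a double) to the nearest binary32 value."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

assert to_f32(16777216.0) == 16777216.0  # 2**24 is exactly representable
assert to_f32(16777217.0) == 16777216.0  # 2**24 + 1 rounds down (ties to even)
assert to_f32(33554430.0) == 33554430.0  # below 2**25, even integers are exact
```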
=== Notable single-precision cases ===
These examples are given in bit ''representation'', in [[hexadecimal]] and [[Binary number|binary]], of the floating-point value. This includes the sign, (biased) exponent, and significand.
{| style="font-family: monospace, monospace;"
|-
| 0 00000000 00000000000000000000001<sub>2</sub> = 0000 0001<sub>16</sub> = 2<sup>−126</sup> × 2<sup>−23</sup> = 2<sup>−149</sup> ≈ 1.4012984643 × 10<sup>−45</sup>
|-
| (smallest positive subnormal number)
|}
By default, 1/3 rounds up, instead of down like [[double-precision floating-point format|double precision]], because of the even number of bits in the significand.
Encodings of qNaN and sNaN are not specified in [[IEEE 754]] and are implemented differently on different processors. The [[x86]] and [[ARM architecture family|ARM]] processor families use the most significant bit of the significand field to indicate a quiet NaN.
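For illustration, the pattern 7FC00000<sub>16</sub> (all-one exponent, most significant fraction bit set) is a common quiet-NaN encoding, while 7F800000<sub>16</sub> (all-one exponent, zero fraction) is positive infinity. A minimal Python check with the standard-library `struct` and `math` modules:

```python
import math
import struct

# All-one exponent with a nonzero fraction is a NaN; this particular
# pattern (MSB of the fraction set) is the usual quiet-NaN form on x86/ARM.
qnan = struct.unpack('>f', struct.pack('>I', 0x7FC00000))[0]
assert math.isnan(qnan)

# All-one exponent with a zero fraction is infinity, not NaN.
inf = struct.unpack('>f', struct.pack('>I', 0x7F800000))[0]
assert math.isinf(inf)
```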
=== Optimizations ===
The design of the floating-point format allows various optimizations, resulting from the easy generation of a [[base-2 logarithm]] approximation from an integer view of the raw bit pattern. Integer arithmetic and bit-shifting can yield an approximation to the [[reciprocal square root]] ([[fast inverse square root]]), commonly required in [[computer graphics]].
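The well-known fast inverse square root trick can be sketched in Python with the standard-library `struct` module standing in for C's pointer reinterpretation; this is an illustration of the technique, not production code:

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    # Reinterpret the binary32 bit pattern of x as a 32-bit integer.
    i = struct.unpack('>I', struct.pack('>f', x))[0]
    # Shift and subtract from a magic constant: because the exponent field is a
    # (biased) base-2 logarithm, this approximates the bits of 1/sqrt(x).
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack('>f', struct.pack('>I', i))[0]
    # One Newton-Raphson step refines the estimate.
    return y * (1.5 - 0.5 * x * y * y)
```

After the single Newton step the relative error is below about 0.2%, which was accurate enough for lighting calculations in early 3D games.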
== See also ==
* [[ISO/IEC 10967]], language independent arithmetic
* [[Primitive data type]]
* [[Scientific notation]]
== References ==
{{reflist}}
== External links ==
* [https://evanw.github.io/float-toy/ Live floating-point bit pattern editor]
* [http://www.h-schmidt.net/FloatConverter/IEEE754.html Online calculator]