{{Floating-point}}
 
In [[computing]], '''floating-point arithmetic''' ('''FP''') is [[arithmetic]] on subsets of [[real number]]s formed by a ''[[significand]]'' (a [[Sign (mathematics)|signed]] sequence of a fixed number of digits in some [[Radix|base]]) multiplied by an integer [[exponentiation|power]] of that base.
Numbers of this form are called '''floating-point numbers'''.<ref name="Muller_2010"/>{{rp|3}}<ref name="sterbenz1974fpcomp">{{cite book
|last1=Sterbenz
|first1=Pat H.
|title=Floating-Point Computation
|publisher=Prentice-Hall
|date=1974
}}</ref>
In practice, most floating-point systems use [[Binary number|base two]], though base ten ([[decimal floating point]]) is also common.
 
Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations by [[rounding]] any result that is not a floating-point number itself to a nearby floating-point number.<ref name="Muller_2010"/>{{rp|22}}<ref name="sterbenz1974fpcomp"/>{{rp|10}}
For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345.
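This five-digit example can be reproduced with Python's <code>decimal</code> module, which implements base-ten floating-point arithmetic with a configurable precision (a minimal sketch; the module and its default round-to-nearest-even mode are assumptions, not part of the example above):
<syntaxhighlight lang="python">
from decimal import Decimal, getcontext

getcontext().prec = 5                          # five significant base-ten digits
print(Decimal("12.345") + Decimal("1.0001"))   # 13.345 -- the exact sum 13.3451 is rounded
</syntaxhighlight>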
 
The term ''floating point'' refers to the fact that the number's [[radix point]] can "float" anywhere to the left, right, or between the [[significant digits]] of the number. This position is indicated by the exponent, so floating point can be considered a form of [[scientific notation]].
 
A floating-point system can be used to represent, with a fixed number of digits, numbers of very different [[Orders of magnitude (numbers)|orders of magnitude]] — such as the number of meters [[Orders of magnitude (length)#100_zettametres|between galaxies]] or [[Orders of magnitude (length)#10_femtometres|between protons in an atom]]. For this reason, floating-point arithmetic is often used where very small and very large real numbers must be processed quickly. The result of this [[dynamic range]] is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.<ref name="Smith_1997"/>
The speed of floating-point operations, commonly measured in terms of [[FLOPS]], is an important characteristic of a [[computer system]], especially for applications that involve intensive mathematical calculations.
 
Floating-point arithmetic can be performed in software (softfloat) or in hardware (hardfloat). [[floating-point unit|Floating-point units]] (FPUs, colloquially math [[coprocessor]]s) are specially designed to carry out operations on floating-point numbers and are part of most computer systems. When FPUs are not available, software implementations are used instead.
 
== Overview ==
There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the ___location of the [[radix point]] is indicated by placing an explicit [[Decimal separator|"point" character]] (dot or comma) there. If the radix point is not specified, then the string implicitly represents an [[integer]] and the unstated radix point would be off the right-hand end of the string, next to the least significant digit. In [[fixed-point arithmetic|fixed-point]] systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345.
 
In [[scientific notation]], the given number is scaled by a [[exponentiation|power of 10]], so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period of [[Jupiter]]'s moon [[Io (moon)|Io]] is {{val|152853.5047|fmt=commas}} seconds, a value that would be represented in standard-form scientific notation as {{val|1.528535047|e=5|fmt=commas}} seconds.
 
Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of:
* A signed (meaning positive or negative) digit string of a given length in a given [[base (exponentiation)|base]] (or [[radix]]). This digit string is referred to as the ''[[significand]]'', ''mantissa'', or ''coefficient''.<ref group="nb" name="NB_Significand"/> The length of the significand determines the ''precision'' to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost (least significant) digit. This article generally follows the convention that the radix point is set just after the most significant (leftmost) digit.
* A signed integer [[exponent]] (also referred to as the ''characteristic'', or ''scale''),<ref group="nb" name="NB_Exponent"/> which modifies the magnitude of the number.
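As a rough illustration of this significand–exponent decomposition (a sketch assuming base 2 and Python's standard library, which normalizes the significand to the interval [0.5, 1) rather than placing the radix point just after the first digit):
<syntaxhighlight lang="python">
import math

m, e = math.frexp(6.25)      # 6.25 = 0.78125 * 2**3
print(m, e)                  # 0.78125 3
print(math.ldexp(m, e))      # 6.25 -- reassembled from significand and exponent
</syntaxhighlight>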
 
When this is stored in memory using the IEEE 754 encoding, this becomes the [[significand]] {{mvar|s}}. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left-to-right as follows:
<math display=block>\begin{align}
&\biggl(\sum_{n=0}^{p-1} \text{bit}_n \times 2^{-n}\biggr) \times 2^e \\
={} &\left(1 \times 2^{-0} + 1 \times 2^{-1} + 0 \times 2^{-2} + 0 \times 2^{-3} + 1 \times 2^{-4} + \cdots + 1 \times 2^{-23}\right) \times 2^1 \\[2mu]
\approx{} &1.57079637 \times 2 \\[3mu]
\approx{} &3.1415927
\end{align}</math><!-- Ensure correct rounding by taking one more digit for the intermediate decimal approximation. -->
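The same value can be recovered programmatically from the stored bits (a minimal sketch assuming IEEE 754 single precision and Python's <code>struct</code> module, which are not part of the example above):
<syntaxhighlight lang="python">
import math
import struct

# Extract the binary32 fields of pi and re-evaluate the sum shown above.
bits = struct.unpack(">I", struct.pack(">f", math.pi))[0]
e = ((bits >> 23) & 0xFF) - 127                    # unbiased exponent, here 1
significand = (1 << 23) | (bits & 0x7FFFFF)        # prepend the implicit leading 1 bit
value = sum(((significand >> (23 - n)) & 1) * 2.0**-n for n in range(24)) * 2**e
print(e, value)                                    # 1 3.1415927410125732
</syntaxhighlight>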
 
 
[[File:Quevedo 1917.jpg|thumb|upright=0.7|[[Leonardo Torres Quevedo]], in 1914, published an analysis of floating point based on the [[analytical engine]].]]
In 1914, the Spanish engineer [[Leonardo Torres Quevedo]] published ''Essays on Automatics'',<ref>Torres Quevedo, Leonardo. [https://quickclick.es/rop/pdf/publico/1914/1914_tomoI_2043_01.pdf Automática: Complemento de la Teoría de las Máquinas, (pdf)], pp. 575–583, Revista de Obras Públicas, 19 November 1914.</ref> where he designed a special-purpose electromechanical calculator based on [[Charles Babbage]]'s [[analytical engine]] and described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format as ''n'' × 10<sup>''m''</sup>, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "''n'' will always be the same number of [[Numerical digit|digits]] (e.g. six), the first digit of ''n'' will be of order of tenths, the second of hundredths, etc., and one will write each quantity in the form: ''n''; ''m''." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the ___location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying a syntax to be used that could be entered through a [[typewriter]], as was the case of his [[Leonardo Torres y Quevedo#Analytical machines|Electromechanical Arithmometer]] in 1920.<ref>Ronald T. Kneusel. ''[https://books.google.com/books?id=eq4ZDgAAQBAJ&dq=leonardo+torres+quevedo++electromechanical+machine+essays&pg=PA84 Numbers and Computers],'' Springer, pp. 84–85, 2017. {{ISBN|978-3319505084}}</ref>{{Sfn|Randell|1982|pp=6, 11–13}}<ref>Randell, Brian. [https://dl.acm.org/doi/pdf/10.5555/1074100.1074334 Digital Computers, History of Origins, (pdf)], p. 545, Digital Computers: Origins, Encyclopedia of Computer Science, January 2003.</ref>
 
[[File:Konrad Zuse (1992).jpg|thumb|upright=0.7|right|[[Konrad Zuse]], architect of the [[Z3 (computer)|Z3]] computer, which uses a 22-bit binary floating-point representation]]
In 1989, mathematician and computer scientist [[William Kahan]] was honored with the [[Turing Award]] for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, [[Harold S. Stone|Harold Stone]].<ref name="Severance_1998"/>
 
Among the x86 (more specifically i8087) innovations are these:
* A precisely specified floating-point representation at the bit-string level, so that all compliant computers interpret bit patterns the same way. This makes it possible to accurately and efficiently transfer floating-point numbers from one computer to another (after accounting for [[endianness]]).
* A precisely specified behavior for the arithmetic operations: A result is required to be produced as if infinitely precise arithmetic were used to yield a value that is then rounded according to specific rules. This means that a compliant computer program would always produce the same result when given a particular input, thus mitigating the almost mystical reputation that floating-point computation had developed for its hitherto seemingly non-deterministic behavior.
* The ability of [[IEEE 754#Exception handling|exceptional conditions]] (overflow, [[Division by zero|divide by zero]], etc.) to propagate through a computation in a benign manner and then be handled by the software in a controlled fashion.
 
These features would be inherited into IEEE 754-1985 (with the exception of the encoding of special values and exceptions), though the x87's extended internal precision means that results must be explicitly rounded to the destination precision in order to match standard IEEE 754 results.<ref name="Goldberg_1991"/> However, the behavior may not be the same as rounding directly to the destination format, due to the possibly wider exponent range of the extended format.
 
== Range of floating-point numbers ==
 
=== Internal representation ===
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and a field for the significand, from left to right. For the [[IEEE 754]] binary formats (basic and extended) that have extant hardware implementations, they are apportioned as follows:
 
{| class="wikitable" style="text-align:right; border:0"
|-
!rowspan="2" |TypeFormat
!colspan="4" |Bits for the encoding<!-- Since this is about the encoding, it should be clear that the number given for the significand below excludes the implicit bit, when this is used. -->
!colspan="4" |Bits
| rowspan="78" style="background:white; border:0"|
!rowspan="2" |Exponent<br>bias
!rowspan="2" |Bits<br>precision
Line 193 ⟶ 195:
!Total
|-
|[[Half-precision precisionfloating-point format|Half]] ([[IEEE floating point|IEEE 754-2008]]binary16)
|1
|5
Line 202 ⟶ 204:
|~3.3
|-
|[[Single -precision floating-point format|Single]] (binary32)
|1
|8
Line 211 ⟶ 213:
|~7.2
|-
|[[Double -precision floating-point format|Double]] (binary64)
|1
|11
Line 220 ⟶ 222:
|~15.9
|-
|[[Extended precision#x86 extended -precision format|x86 extended precision]]
|1
|15
Line 229 ⟶ 231:
|~19.2
|-
|[[Quad Quadruple-precision floating-point format|QuadQuadruple]] (binary128)
|1
|15
Line 237 ⟶ 239:
|113
|~34.0
|-
|[[Octuple-precision floating-point format|Octuple]] (binary256)
|1
|19
|236
|256
|262143
|237
|~71.3
|}
 
While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and [[subnormal number]]s; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs.
 
In the IEEE binary interchange formats the leading 1 bit of a normalized significand is not actually stored in the computer datum, since it is always 1. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, quad has 113, and octuple has 237.
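The bias and the implicit leading bit can be observed by decoding a value by hand (a minimal sketch assuming IEEE 754 binary64, i.e. Python's native <code>float</code>):
<syntaxhighlight lang="python">
import struct

x = 1.5
bits = struct.unpack(">Q", struct.pack(">d", x))[0]
sign = bits >> 63
stored_exponent = (bits >> 52) & 0x7FF           # biased by 1023
fraction = bits & ((1 << 52) - 1)                # 52 stored bits; the leading 1 is implicit
value = (-1)**sign * (1 + fraction / 2**52) * 2.0**(stored_exponent - 1023)
print(stored_exponent, value)                    # 1023 1.5
</syntaxhighlight>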
 
For example, it was shown above that π, rounded to 24 bits of precision, has:
== Other notable floating-point formats ==
In addition to the widely used [[IEEE 754]] standard formats, other floating-point formats are used, or have been used, in certain ___domain-specific areas.
* The [[Microsoft Binary Format|Microsoft Binary Format (MBF)]] was developed for the Microsoft BASIC language products, including Microsoft's first ever product, [[Altair BASIC]] (1975), [[TRS-80|TRS-80 LEVEL II]], [[CP/M]]'s [[MBASIC]], the [[IBM PC 5150]]'s [[BASICA]], [[MS-DOS]]'s [[GW-BASIC]] and [[QuickBASIC]] prior to version 4.00. QuickBASIC versions 4.00 and 4.50 switched to the IEEE 754-1985 format but can revert to the MBF format using the /MBF command option. MBF was designed and developed on a simulated [[Intel 8080]] by [[Monte Davidoff]], a dormmate of [[Bill Gates]], during spring of 1975 for the [[MITS Altair 8800]]. The initial release of July 1975 supported a single-precision (32-bit) format due to the cost of the MITS Altair 8800's 4-kilobyte memory. In December 1975, the 8-kilobyte version added a double-precision (64-bit) format. A single-precision (40-bit) variant format was adopted for other CPUs, notably the [[MOS 6502]] ([[Apple II]], [[Commodore PET]], [[Atari]]), [[Motorola 6800]] (MITS Altair 680) and [[Motorola 6809]] ([[TRS-80 Color Computer]]). All Microsoft language products from 1975 through 1987 used the [[Microsoft Binary Format]] until Microsoft adopted the IEEE 754 standard format in all its products starting in 1988. MBF consists of the MBF single-precision format (32 bits, "6-digit BASIC"),<ref name="Borland_1994_MBF"/><ref name="Steil_2008_6502"/> the MBF extended-precision format (40 bits, "9<!-- is it really 9 digits, not 8? -->-digit BASIC"),<ref name="Steil_2008_6502"/> and the MBF double-precision format (64 bits);<ref name="Borland_1994_MBF"/><ref name="Microsoft_2006_KB35826"/> each of them is represented with an 8-bit exponent, followed by a sign bit, followed by a significand of respectively 23, 31, and 55 bits.
* The [[bfloat16 floating-point format|bfloat16 format]] requires the same amount of memory (16 bits) as the [[Half-precision floating-point format|IEEE 754 half-precision format]], but allocates 8 bits to the exponent instead of 5, thus providing the same range as an [[Single-precision floating-point format|IEEE 754 single-precision]] number. The tradeoff is a reduced precision, as the trailing significand field is reduced from 10 to 7 bits. This format is mainly used in the training of [[machine learning]] models, where range is more valuable than precision. Many machine learning accelerators provide hardware support for this format.
* The TensorFloat-32<ref name="Kharya_2020"/> format combines the 8 bits of exponent of bfloat16 with the 10 bits of trailing significand field of half-precision formats, resulting in a size of 19 bits. This format was introduced by [[Nvidia]], which provides hardware support for it in the Tensor Cores of its [[Graphics processing unit|GPUs]] based on the Nvidia Ampere architecture. The drawback of this format is its size, which is not a power of 2. However, according to Nvidia, this format should only be used internally by hardware to speed up computations, while inputs and outputs should be stored in the 32-bit single-precision IEEE 754 format.<ref name="Kharya_2020"/>
* The [[Hopper (microarchitecture)|Hopper]] and [[CDNA 3]] architecture GPUs provide two FP8 formats: one with the same numerical range as half-precision (E5M2) and one with higher precision, but less range (E4M3).<ref name="NVIDIA_Hopper"/><ref name="Micikevicius_2022"/>
* The [[Blackwell (microarchitecture)|Blackwell]] and [[CDNA (microarchitecture)|CDNA 4]] GPU architectures include support for FP6 (E3M2 and E2M3) and FP4 (E2M1) formats. FP4 is the smallest floating-point format that allows for all IEEE 754 principles (see [[minifloat]]).
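Because bfloat16 keeps the binary32 exponent field and simply shortens the fraction, a crude conversion can be sketched by truncating the low 16 bits of a single-precision encoding (an illustration only; real conversions normally use round-to-nearest-even rather than truncation):
<syntaxhighlight lang="python">
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate a binary32 encoding to its top 16 bits (sign, 8-bit exponent, 7-bit fraction)."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16

print(hex(to_bfloat16_bits(3.140625)))   # 0x4049 -- sign 0, exponent 10000000, fraction 1001001
</syntaxhighlight>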
 
{| class="wikitable"
|+ Comparison of common floating-point formats
|+ Bfloat16, TensorFloat-32, and the two FP8 formats, compared with IEEE&nbsp;754 half-precision and single-precision formats
!Type
!Sign
!Exponent
!Significand
!Trailing significand field
!Total bits
|-
|FP4
|1
|2
|1
|4
|-
|FP6 (E2M3)
|1
|2
|3
|6
|-
|FP6 (E3M2)
|1
|3
|2
|6
|-
|FP8 (E4M3)
Line 285 ⟶ 315:
|16
|-
|[[Bfloat16bfloat16 floating-point format|Bfloat16bfloat16]]
|1
|8
Line 291 ⟶ 321:
|16
|-
|[[TensorFloat-32]]
|1
|8
Line 302 ⟶ 332:
|23
|32
|-
|[[Double-precision floating-point format|Double-precision]]
|1
|11
|52
|64
|-
|[[Quadruple-precision floating-point format|Quadruple-precision]]
|1
|15
|112
|128
|-
|[[Octuple-precision floating-point format|Octuple-precision]]
|1
|19
|236
|256
|}
 
==Representable numbers, conversion and rounding {{anchor|Representable numbers}}==
 
By their nature, all numbers expressed in floating-point format are [[rational number]]s with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as [[Pi|π]] or <math display=inline>\sqrt{2}</math>, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available (it would be rounded to one of the two straddling representable values, 12345678&nbsp;×&nbsp;10<sup>1</sup> or 12345679&nbsp;×&nbsp;10<sup>1</sup>); the same applies to [[Repeating decimal|non-terminating digits]] (.{{overline|5}} would be rounded to either .55555555 or .55555556).
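Both effects can be observed with a base-ten floating-point arithmetic of eight significant digits (a minimal sketch using Python's <code>decimal</code> module, which is an assumption beyond the example above):
<syntaxhighlight lang="python">
from decimal import Decimal, getcontext

getcontext().prec = 8            # eight significant decimal digits
print(+Decimal(123456789))       # 1.2345679E+8 -- rounded to a nearby representable value
print(Decimal(5) / 9)            # 0.55555556  -- a non-terminating expansion is rounded
</syntaxhighlight>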
 
When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called the ''rounded value''.
The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is the [[discretization error]] and is limited by the [[machine epsilon]].
 
The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a [[unit in the last place]] (ULP). For example, if there is no representable number lying between the representable numbers 1.45A70C22<sub>16</sub> and 1.45A70C24<sub>16</sub>, the ULP is 2×16<sup>−8</sup>, or 2<sup>−31</sup>. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value higher than or equal to 1 but lower than 2, an ULP is exactly 2<sup>−23</sup> or about 10<sup>−7</sup> in single precision, and exactly 2<sup>−52</sup> or about 10<sup>−16</sup> in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.
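The ULP near a given value can be inspected directly (a minimal sketch assuming IEEE 754 binary64 and Python 3.9+, whose <code>math.ulp</code> and <code>math.nextafter</code> functions are not part of the sources above):
<syntaxhighlight lang="python">
import math

print(math.ulp(1.0))                   # 2.220446049250313e-16, i.e. 2**-52
print(math.nextafter(1.0, 2.0) - 1.0)  # the same spacing, via the next representable double
print(math.ulp(2.0**52))               # 1.0 -- the ULP grows with the exponent
</syntaxhighlight>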
 
=== Rounding modes ===
 
=== Incidents ===
* On 25 February 1991, a [[loss of significance]] in a [[MIM-104 Patriot]] missile battery [[MIM-104 Patriot#Failure at Dhahran|prevented it from intercepting]] an incoming [[Al Hussein (missile)|Scud]] missile in [[Dhahran]], [[Saudi Arabia]], contributing to the death of 28 soldiers from the U.S. Army's [[14th Quartermaster Detachment]].<ref name="GAO report IMTEC 92-26"/> The weapons control computer counted time in an integer number of tenths of a second since boot. For conversion to a floating-point number of seconds in velocity and position calculations, the software originally multiplied this count by a 24-bit [[Fixed-point arithmetic|fixed-point]] binary approximation to 0.1, specifically <math display="block">0.00011001100110011001100_2 = 0.1 \times (1 - 2^{-20}).</math> Some parts of the software were later adapted to use a more accurate conversion to floating point, but other parts were not updated and still used the 24-bit approximation.<ref name="Skeel"/> These parts of the software drifted from one another by about 3.43&nbsp;milliseconds per hour; after 20 hours, the discrepancy of about 68.7&nbsp;ms was enough for the radar tracking system to lose track of Scuds, and the control system in the Dhahran battery had been running for about 100 hours when it failed to track and intercept the incoming missile (these figures are reproduced in the calculation below).<ref name="GAO report IMTEC 92-26"/> The failure to intercept arose not from using floating point specifically, but from subtracting two different approximations of the same unit conversion: their errors in representing time did not cancel out but rather grew indefinitely with uptime.<ref name="Skeel"/>
* {{Clarify|date=November 2024|reason=It is not clear how this is an incident (the section title may have to be modified to cover more than incidents) and how this is due to floating-point arithmetic (rather than number approximations in general). The term '"invisible'" may also be misleading without following explanations. |text=[[Salami slicing tactics#Financial schemes|Salami slicing]] is the practice of removing the 'invisible' part of a transaction into a separate account.}}
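The drift figures quoted in the Patriot example can be reproduced with a short calculation (a rough sketch; truncation to the 23 fractional bits shown above is assumed, following the cited reports):
<syntaxhighlight lang="python">
# 0.1 truncated to 23 fractional binary digits, as in the Patriot weapons control computer.
approx = int(0.1 * 2**23) / 2**23
error_per_tick = 0.1 - approx                      # ~9.54e-8 s of error per 0.1 s tick
ticks_per_hour = 10 * 3600
print(error_per_tick * ticks_per_hour * 1e3)       # ~3.43 ms of drift per hour
print(error_per_tick * ticks_per_hour * 20 * 1e3)  # ~68.7 ms after 20 hours of uptime
</syntaxhighlight>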
 
=== Machine precision and backward error analysis ===
Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that <math>(x+y)(x-y) = x^2-y^2\,</math>, and that <math>\sin^2{\theta}+\cos^2{\theta} = 1\,</math>, however these facts cannot be relied on when the quantities involved are the result of floating-point computation.
 
The use of the equality test (<code>if (x==y) ...</code>) requires care when dealing with floating-point numbers. Even simple expressions like <code>0.6/0.2-3==0</code> will, on most computers, fail to be true<ref name="Christiansen_Perl"/> (in IEEE 754 double precision, for example, <code>0.6/0.2 - 3</code> is approximately equal to {{val|-4.44089209850063e-16}}). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (<code>if (abs(x-y) < epsilon) ...</code>, where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon.<ref name="Higham_2002"/> Values derived from the primary data representation, and their comparisons, should be performed in a wider, extended precision to minimize the risk of such inconsistencies due to round-off errors.<ref name="Kahan_2000_Marketing"/> It is often better to organize the code in such a way that such tests are unnecessary. For example, in [[computational geometry]], exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.<ref name="Shewchuk"/>
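For instance (a minimal sketch assuming IEEE 754 double precision; <code>math.isclose</code> is one common way to express a tolerance-based comparison):
<syntaxhighlight lang="python">
import math

print(0.6 / 0.2 - 3 == 0)            # False on IEEE 754 doubles
print(0.6 / 0.2 - 3)                 # approximately -4.44e-16
print(math.isclose(0.6 / 0.2, 3.0))  # True with the default relative tolerance of 1e-09
</syntaxhighlight>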
 
Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are [[matrix inversion]], [[eigenvector]] computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as [[iterative refinement]], if they are to work well.<ref name="Kahan_1997_Cantilever"/>
* [[Coprocessor]]
* [[Decimal floating point]]
* [[Double-precision floating-point format]]
* [[Experimental mathematics]] – utilizes high precision floating-point computations
* [[Fixed-point arithmetic]]
* [[Significant figures]]
* [[Single-precision floating-point format]]
* [[Standard Apple Numerics Environment|Standard Apple Numerics Environment (SANE)]]
{{div col end}}
 
<ref name="OpenEXR-half">{{cite web |url=https://openexr.com/en/latest/TechnicalIntroduction.html#the-half-data-type |title=Technical Introduction to OpenEXR – The half Data Type |publisher=openEXR |access-date=2024-04-16}}</ref>
<ref name="IEEE-754_Analysis">{{cite web|url=https://christophervickery.com/IEEE-754/|title=IEEE-754 Analysis|access-date=2024-08-29}}</ref>
<ref name="Goldberg_1991">{{cite journal |first=David |last=Goldberg |author-link=David Goldberg (PARC) |title=What Every Computer Scientist Should Know About Floating-Point Arithmetic |journal=[[ACM Computing Surveys]] |date=March 1991 |volume=23 |issue=1 |pages=5–48 |doi=10.1145/103162.103163 |doi-access=free |s2cid=222008826}} (With the addendum "Differences Among IEEE 754 Implementations": [https://web.archive.org/web/20171011072644/http://www.cse.msu.edu/~cse320/Documents/FloatingPoint.pdf], [https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html])</ref>
<ref name="Harris">{{Cite journal |title=You're Going To Have To Think! |first=Richard |last=Harris |journal=[[Overload (magazine)|Overload]] |issue=99 |date=October 2010 |issn=1354-3172 |pages=5–10 |url=http://accu.org/index.php/journals/1702 |access-date=2011-09-24 |quote=Far more worrying is cancellation error which can yield catastrophic loss of precision.}} [http://accu.org/var/uploads/journals/overload99.pdf]</ref>
<ref name="GAO report IMTEC 92-26">{{cite web |url=http://www.gao.gov/products/IMTEC-92-26 |title=Patriot missile defense, Software problem led to system failure at Dharhan, Saudi Arabia |id=GAO report IMTEC 92-26 |publisher=[[US Government Accounting Office]]}}</ref>