The '''approximation error''' in a data value is the discrepancy between an exact, true value and some [[approximation]] of it. This error can be quantified in two principal ways: as an '''absolute error''', the direct numerical magnitude of the discrepancy irrespective of the true value's scale, or as a '''relative error''', which expresses the absolute error in proportion to the exact value, giving a context-dependent assessment of the error's significance.
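The two measures can be sketched in a few lines of Python (the helper names are illustrative, not from the text):

```python
# Illustrative helpers computing the two principal error measures
# for an approximation v_approx of a true value v.
def absolute_error(v, v_approx):
    """Direct magnitude of the discrepancy, irrespective of scale."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Absolute error in proportion to the exact value (v must be nonzero)."""
    if v == 0:
        raise ValueError("relative error is undefined for v = 0")
    return abs(v - v_approx) / abs(v)

# Example: approximating a true length of 4.53 by a measured 4.5
abs_err = absolute_error(4.53, 4.5)   # about 0.03
rel_err = relative_error(4.53, 4.5)   # about 0.0066, i.e. roughly 0.66 %
```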
An approximation error can arise for a number of reasons. Prominent among these are limitations of computing [[machine precision]], where digital systems cannot represent all real numbers exactly, leading to unavoidable truncation or rounding; and inherent [[measurement error]], stemming from the practical limitations of instruments, environmental factors, or observational processes (for instance, if the actual length of a piece of paper is precisely 4.53 cm but the measuring device only resolves to the nearest 0.1 cm, the recorded value of 4.5 cm necessarily carries a measurement error).
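The machine-precision source of error can be seen directly in floating-point arithmetic; a minimal illustration:

```python
# Minimal illustration of machine-precision error: 0.1 has no exact
# binary floating-point representation, so repeated addition drifts.
total = sum(0.1 for _ in range(10))
print(total == 1.0)          # False: the stored sum differs slightly
print(abs(total - 1.0))      # a tiny but nonzero absolute error
```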
In the [[mathematics|mathematical]] field of [[numerical analysis]], the crucial concept of [[numerical stability]] associated with an [[algorithm]] serves to indicate the extent to which initial errors or perturbations present in the input data of the algorithm are likely to propagate and potentially amplify into substantial errors in the final output. Algorithms that are characterized as numerically stable are robust in the sense that they do not yield a significantly magnified error in their output even when the input is slightly malformed or contains minor inaccuracies; conversely, numerically unstable algorithms may exhibit dramatic error growth from small input changes, rendering their results unreliable.<ref>{{Cite web |last=Weisstein |first=Eric W. |title=Numerical Stability |url=https://mathworld.wolfram.com/ |access-date=2023-06-11 |website=mathworld.wolfram.com |language=en}}</ref>
In computational complexity theory, a real value ''v'' is said to be '''polynomially computable with absolute error''' from a given input if, for every rational number ''ε'' > 0 specifying the maximum permissible absolute error, it is possible to compute a rational number ''v''<sub>approx</sub> such that |''v'' − ''v''<sub>approx</sub>| ≤ ''ε'', in time polynomial in the size of the input and the encoding size of ''ε'' (the latter is of the order O(log(1/''ε'')) bits, reflecting the number of bits needed to represent the precision). Analogously, ''v'' is '''polynomially computable with relative error''' if, for every rational number ''η'' > 0 specifying the maximum permissible relative error, it is possible to compute a rational number ''v''<sub>approx</sub> such that |(''v'' − ''v''<sub>approx</sub>)/''v''| ≤ ''η'' (assuming ''v'' ≠ 0), likewise in time polynomial in the size of the input and the encoding size of ''η'' (typically O(log(1/''η'')) bits).
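As an illustration of polynomial computability with absolute error (an example not drawn from the text), √2 can be approximated by interval bisection: halving the interval each step means roughly log<sub>2</sub>(1/''ε'') iterations, polynomial in the encoding size of ''ε''.

```python
from fractions import Fraction

def sqrt2_abs(eps):
    """Rational r with |sqrt(2) - r| <= eps, found by bisection.
    The loop runs about log2(1/eps) times, i.e. a number of steps
    polynomial in the encoding size of eps (illustrative sketch)."""
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid                    # sqrt(2) is in [mid, hi]
        else:
            hi = mid                    # sqrt(2) is in [lo, mid]
    return lo                           # final interval width <= eps

r = sqrt2_abs(Fraction(1, 10**6))       # within 10^-6 of sqrt(2)
```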
It can be demonstrated that if a value ''v'' is polynomially computable with relative error (via an algorithm we designate REL), then it is also polynomially computable with absolute error. ''Proof sketch'': Let ''ε'' > 0 be the target absolute error. First, invoke REL with relative error bound ''η'' = 1/2 to obtain a rational approximation ''r''<sub>1</sub> satisfying |''v'' − ''r''<sub>1</sub>| ≤ |''v''|/2. By the reverse triangle inequality (|''v''| − |''r''<sub>1</sub>| ≤ |''v'' − ''r''<sub>1</sub>|), it follows that |''v''| ≤ 2|''r''<sub>1</sub>| (assuming ''r''<sub>1</sub> ≠ 0; if ''r''<sub>1</sub> = 0, the relative error condition forces ''v'' = 0, in which case ''v''<sub>approx</sub> = 0 achieves any absolute error ''ε'' > 0 and we are done). Since REL runs in polynomial time, the encoding length of ''r''<sub>1</sub> is polynomial in the input size. Next, invoke REL a second time with the new, typically much smaller, relative error target ''η{{'}}'' = ''ε'' / (2|''r''<sub>1</sub>|), yielding a rational ''r''<sub>2</sub> satisfying |''v'' − ''r''<sub>2</sub>| ≤ ''η{{'}}''|''v''| = (''ε'' / (2|''r''<sub>1</sub>|)) |''v''|. Applying the previously derived bound |''v''| ≤ 2|''r''<sub>1</sub>| gives |''v'' − ''r''<sub>2</sub>| ≤ (''ε'' / (2|''r''<sub>1</sub>|)) · 2|''r''<sub>1</sub>| = ''ε'', so ''r''<sub>2</sub> approximates ''v'' with absolute error at most ''ε'', as required.
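The reduction above can be sketched in Python, treating REL as an oracle; the toy oracle in the usage example is purely an assumption for demonstration.

```python
from fractions import Fraction

def rel_to_abs(REL, eps):
    """Reduction from relative to absolute error: REL(eta) is an
    oracle returning a rational r with |v - r| <= eta * |v|; we
    return a rational within absolute error eps of v."""
    r1 = REL(Fraction(1, 2))        # first call: relative error 1/2
    if r1 == 0:                     # then v = 0, and 0 is exact
        return Fraction(0)
    eta2 = eps / (2 * abs(r1))      # tighter relative target
    return REL(eta2)                # |v - r2| <= eta2 * |v| <= eps

# Toy oracle for v = 7/3 (an assumption purely for demonstration):
v = Fraction(7, 3)
def REL(eta):
    return v * (1 - eta / 2)        # relative error eta/2 <= eta

r2 = rel_to_abs(REL, Fraction(1, 100))
```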
The reverse implication, that polynomial computability with absolute error implies polynomial computability with relative error, does not hold in general without additional assumptions. However, a significant special case exists: if a positive lower bound ''b'' on the magnitude of ''v'' (i.e., |''v''| > ''b'' > 0) can itself be computed in polynomial time, and ''v'' is polynomially computable with absolute error (via an algorithm designated ABS), then ''v'' is also polynomially computable with relative error: simply invoke ABS with the carefully chosen target absolute error ''ε<sub>target</sub>'' = ''ηb'', where ''η'' is the desired relative error. The resulting approximation ''v''<sub>approx</sub> satisfies |''v'' − ''v''<sub>approx</sub>| ≤ ''ηb''; dividing by |''v''| (which is non-zero) gives |(''v'' − ''v''<sub>approx</sub>)/''v''| ≤ ''ηb''/|''v''|, and since |''v''| > ''b'' implies ''b''/|''v''| < 1, the relative error is bounded by ''η'', as required.
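A sketch of this direction, again treating ABS as an oracle (the toy oracle and lower bound in the usage example are assumptions for demonstration):

```python
from fractions import Fraction

def abs_to_rel(ABS, b, eta):
    """Special case above: ABS(eps) is an oracle returning a rational
    r with |v - r| <= eps, and b satisfies |v| > b > 0. Calling ABS
    with target eps = eta * b yields relative error below eta."""
    return ABS(eta * b)

# Toy oracle for v = 5/2 with known lower bound b = 2 < |v|:
v = Fraction(5, 2)
def ABS(eps):
    return v + eps / 2              # absolute error eps/2 <= eps

r = abs_to_rel(ABS, Fraction(2), Fraction(1, 10))
```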
An algorithm that, for every given rational number ''η'' > 0, successfully computes a rational number ''v''<sub>approx</sub> that approximates ''v'' with a relative error no greater than ''η'', and critically, does so in a time complexity that is polynomial in both the size of the input and in the reciprocal of the relative error, 1/''η'' (rather than being polynomial merely in log(1/''η''), which typically allows for faster computation when ''η'' is extremely small), is known as a [[Fully Polynomial-Time Approximation Scheme|Fully Polynomial-Time Approximation Scheme (FPTAS)]]. The dependence on 1/''η'' rather than log(1/''η'') is a defining characteristic of FPTAS and distinguishes it from weaker approximation schemes.
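A classic concrete FPTAS is the profit-scaling scheme for the 0/1 knapsack problem; the following minimal sketch is a standard textbook construction, not taken from the text.

```python
# Profit-scaling FPTAS for 0/1 knapsack: the returned item set has
# profit >= (1 - eta) * OPT, and the running time is polynomial in
# n and 1/eta (the DP ranges over O(n^2 / eta) scaled-profit levels).
def knapsack_fptas(profits, weights, capacity, eta):
    n = len(profits)
    K = eta * max(profits) / n                  # scaling factor
    scaled = [int(p / K) for p in profits]      # floor(p_i / K)
    # minw[q] = (minimum weight achieving scaled profit q, chosen items)
    minw = {0: (0, frozenset())}
    for i in range(n):
        # iterate over a snapshot so each item is used at most once
        for q in sorted(minw, reverse=True):
            w, items = minw[q]
            nq, nw = q + scaled[i], w + weights[i]
            if nw <= capacity and (nq not in minw or nw < minw[nq][0]):
                minw[nq] = (nw, items | {i})
    best_q = max(minw)                          # best reachable scaled profit
    return minw[best_q][1]                      # indices of chosen items

chosen = knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.1)
```

The dependence on 1/''η'' enters through the scaled-profit range: halving ''η'' doubles the number of DP levels, which is exactly the polynomial (rather than polylogarithmic) dependence that characterizes an FPTAS.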