{{Short description|Mathematical concept}}
{{redirect-distinguish|Absolute error|Absolute deviation}}
{{redirect-distinguish|Relative error|Relative change}}
{{broader|Approximation}}
[[File:E^x with linear approximation.png|thumb|Graph of <math>f(x) = e^x</math> (blue) with its linear approximation <math>P_1(x) = 1 + x</math> (red) at <math>a = 0</math>. The approximation error is the gap between the two curves, and it grows as ''x'' moves away from 0.]]
The '''approximation error''' in a given data value is the discrepancy between an exact, true value and some approximation of it. It can be expressed as an '''absolute error''' (the magnitude of the discrepancy) or as a '''relative error''' (the absolute error divided by the magnitude of the true value).
An approximation error can arise for a variety of reasons, among them the finite precision of a computing machine and measurement error (for example, the length of a piece of paper may be 4.53 cm, but a ruler graduated in millimetres only allows it to be estimated to the nearest 0.1 cm, yielding a measurement of 4.5 cm).
In the [[mathematics|mathematical]] field of [[numerical analysis]], the crucial concept of [[numerical stability]] describes the extent to which an algorithm magnifies, or ideally dampens, errors present in its input: a numerically stable algorithm does not produce an output error significantly larger than the error already present in its input data.
==Formal definition==
Given some true value ''v'', we say that ''v''<sub>approx</sub> approximates ''v'' with absolute error at most ''ε'' > 0 if
:<math>|v-v_\text{approx}| \leq \varepsilon</math>
The '''absolute error''' itself is the quantity
:<math>\varepsilon = |v-v_\text{approx}|\ ,</math><ref>{{Cite web |last=Weisstein |first=Eric W. |title=Absolute Error |url=https://mathworld.wolfram.com/ |access-date=2023-06-11 |website=mathworld.wolfram.com |language=en}}</ref><ref name=":0">{{Cite web |title=Absolute and Relative Error {{!}} Calculus II |url=https://courses.lumenlearning.com/calculus2/chapter/absolute-and-relative-error/ |access-date=2023-06-11 |website=courses.lumenlearning.com}}</ref>
where the vertical bars, | |, denote the [[absolute value]] of the difference between the true value ''v'' and its approximation ''v''<sub>approx</sub>. Taking the absolute value captures the magnitude of the error, irrespective of whether the approximation is an overestimate or an underestimate.
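As an illustrative sketch (not part of the standard exposition), the definition translates directly into code; the helper name <code>absolute_error</code> is hypothetical:

```python
def absolute_error(v: float, v_approx: float) -> float:
    """Magnitude of the difference between the true value and its approximation."""
    return abs(v - v_approx)

# The magnitude is the same whether the approximation overestimates or
# underestimates the true value:
over = absolute_error(5.0, 5.1)   # overestimate of 5.0
under = absolute_error(5.0, 4.9)  # underestimate of 5.0
```

Both calls return (up to floating-point rounding) the same magnitude, 0.1, reflecting the sign-insensitivity of the absolute value.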
Similarly, we state that ''v''<sub>approx</sub> approximates the value ''v'' where the magnitude of the '''relative error''' is bounded by a positive value ''η'' (i.e., ''η''>0), provided ''v'' is not zero (''v'' ≠ 0), if the subsequent inequality is satisfied:<blockquote><math>|v-v_\text{approx}| \leq \eta\cdot |v|</math>.</blockquote>This definition ensures that ''η'' acts as an upper bound on the ratio of the absolute error to the magnitude of the true value. If ''v'' ≠ 0, then the actual '''relative error''', often also denoted by ''η'' in context (representing the calculated value rather than a bound), is precisely calculated as:
:<math> \eta = \frac{|v-v_\text{approx}|}{|v|}
= \left| \frac{v-v_\text{approx}}{v} \right|
= \left| 1 - \frac{v_\text{approx}}{v} \right|.
</math>
Equivalently, ''η'' = ''ε''/|''v''|, where ''ε'' = |''v'' − ''v''<sub>approx</sub>| is the absolute error defined above.
The '''percent error''' ''δ'' expresses the relative error as a percentage:
:<math>\delta = 100\%\times\eta = 100\%\times\left| \frac{v-v_\text{approx}}{v} \right|.</math>
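These two quantities can be sketched in code as follows (illustrative helpers with hypothetical names; note the guard against division by zero, since relative error is undefined for ''v'' = 0):

```python
def relative_error(v: float, v_approx: float) -> float:
    """|v - v_approx| / |v|; undefined when v is zero."""
    if v == 0:
        raise ValueError("relative error is undefined for v = 0")
    return abs(v - v_approx) / abs(v)

def percent_error(v: float, v_approx: float) -> float:
    """Relative error expressed as a percentage."""
    return 100.0 * relative_error(v, v_approx)
```

For example, approximating 50 by 49.9 gives an absolute error of 0.1, a relative error of 0.002, and a percent error of 0.2%.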
An '''error bound''' is an upper limit on the magnitude of the error of an approximation, i.e., a value that the absolute (or, when stated relatively, the relative) error is guaranteed not to exceed.
===Generalizations===▼
{{Expand section|date=April 2023}}▼
</ref>▼
==Examples==
{{Diophantine_approximation_graph.svg}}
The utility of relative error becomes evident when it is used to compare approximations of numbers of widely differing magnitude. For example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3: in the first case the relative error is 0.003, while in the second it is only 0.000003.
Two caveats should be kept in mind when interpreting and applying relative error. First, relative error is undefined when the true value is zero, since that value appears in the denominator. Second, relative error is only meaningful when values are measured on a [[ratio scale]], that is, a scale possessing a true, meaningful zero; otherwise the relative error depends on the choice of measurement units (for instance, a relative error computed for temperatures in [[Celsius]] changes when the same temperatures are expressed in [[Fahrenheit]], because the two scales have different zero points).
== Comparison ==
When comparing the behavior of these two error types, it is important to recognize their differing sensitivities to common arithmetic operations. Statements about ''relative errors'' are sensitive to the addition of a non-zero constant to the underlying true and approximated values, since such an addition alters the base value against which the error is relativized; they are, however, unaffected by the multiplication of both values by the same non-zero constant, because that constant appears in both the numerator and the denominator of the relative error and cancels out. For ''absolute errors'' the opposite holds: they scale under multiplication of the underlying values by a constant, but are insensitive to the addition of a constant, since (''v''+''c'') − (''v''<sub>approx</sub>+''c'') = ''v'' − ''v''<sub>approx</sub>.<ref name=":02">{{Cite book |last1=Grötschel |first1=Martin |last2=Lovász |first2=László |last3=Schrijver |first3=Alexander |title=Geometric Algorithms and Combinatorial Optimization |edition=2nd |publisher=Springer |year=1993}}</ref>{{Rp|page=34}}
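These invariances can be checked numerically; the following sketch uses arbitrary illustrative values:

```python
v, v_approx, c, k = 50.0, 49.5, 10.0, 3.0

abs_err = abs(v - v_approx)           # 0.5
rel_err = abs(v - v_approx) / abs(v)  # 0.01

# Absolute error: invariant under adding a constant, scales under multiplication.
assert abs((v + c) - (v_approx + c)) == abs_err
assert abs(abs(k * v - k * v_approx) - k * abs_err) < 1e-12

# Relative error: invariant under multiplication by a nonzero constant,
# but changed by adding a constant (the base value shifts).
assert abs(abs(k * v - k * v_approx) / abs(k * v) - rel_err) < 1e-12
shifted = abs((v + c) - (v_approx + c)) / abs(v + c)  # 0.5 / 60, not 0.01
assert shifted != rel_err
```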
== Polynomial-time approximation of real numbers ==
In the realm of [[computational complexity theory]], we define that a real value ''v'' is '''polynomially computable with absolute error''' from a given input if, for any specified rational number ''ε'' > 0 representing the desired maximum permissible absolute error, it is algorithmically possible to compute a rational number ''v''<sub>approx</sub> such that ''v''<sub>approx</sub> approximates ''v'' with an absolute error no greater than ''ε'' (formally, |''v'' − ''v''<sub>approx</sub>| ≤ ''ε''). Crucially, this computation must be achievable within a time duration that is polynomial in terms of the size of the input data and the encoding size of ''ε'' (the latter typically being of the order O(log(1/''ε'')) bits, reflecting the number of bits needed to represent the precision). Analogously, the value ''v'' is considered '''polynomially computable with relative error''' if, for any specified rational number ''η'' > 0 representing the desired maximum permissible relative error, it is possible to compute a rational number ''v''<sub>approx</sub> that approximates ''v'' with a relative error no greater than ''η'' (formally, |(''v'' − ''v''<sub>approx</sub>)/''v''| ≤ ''η'', assuming ''v'' ≠ 0). This computation, similar to the absolute error case, must likewise be achievable in an amount of time that is polynomial in the size of the input data and the encoding size of ''η'' (which is typically O(log(1/''η'')) bits).
It can be demonstrated that if a value ''v'' is polynomially computable with relative error (utilizing an algorithm that we can designate as REL), then it is consequently also polynomially computable with absolute error. ''Proof sketch'': Let ''ε'' > 0 be the target maximum absolute error that we wish to achieve. The procedure commences by invoking the REL algorithm with a chosen relative error bound of, for example, ''η'' = 1/2. This initial step aims to find a [[rational number]] approximation ''r''<sub>1</sub> such that the inequality |''v'' − ''r''<sub>1</sub>| ≤ |''v''|/2 holds true. From this relationship, by applying the reverse triangle inequality (|''v''| − |''r''<sub>1</sub>| ≤ |''v'' − ''r''<sub>1</sub>|), we can deduce that |''v''| ≤ 2|''r''<sub>1</sub>| (this holds assuming ''r''<sub>1</sub> ≠ 0; if ''r''<sub>1</sub> = 0, then the relative error condition implies ''v'' must also be 0, in which case the problem of achieving any absolute error ''ε'' > 0 is trivial, as ''v''<sub>approx</sub> = 0 works, and we are done). Given that the REL algorithm operates in polynomial time, the encoding length of the computed ''r''<sub>1</sub> will necessarily be polynomial with respect to the input size. Subsequently, the REL algorithm is invoked a second time, now with a new, typically much smaller, relative error target set to ''η{{'}}'' = ''ε'' / (2|''r''<sub>1</sub>|) (this step also assumes ''r''<sub>1</sub> is non-zero, which we can ensure or handle as a special case). This second application of REL yields another rational number approximation, ''r''<sub>2</sub>, that satisfies the condition |''v'' − ''r''<sub>2</sub>| ≤ ''η{{'}}''|''v''|. Substituting the expression for ''η{{'}}'' gives |''v'' − ''r''<sub>2</sub>| ≤ (''ε'' / (2|''r''<sub>1</sub>|)) |''v''|. Now, using the previously derived inequality |''v''| ≤ 2|''r''<sub>1</sub>|, we can bound the term: |''v'' − ''r''<sub>2</sub>| ≤ (''ε'' / (2|''r''<sub>1</sub>|)) × (2|''r''<sub>1</sub>|) = ''ε''. 
Thus, the approximation ''r''<sub>2</sub> successfully approximates ''v'' with the desired absolute error ''ε'', demonstrating that polynomial computability with relative error implies polynomial computability with absolute error.<ref name=":02" />{{Rp|page=34}}
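The proof sketch above translates directly into code. The following Python sketch (all names hypothetical) assumes a black-box REL oracle, here simulated by rounding a known rational value to a sufficiently fine denominator:

```python
from fractions import Fraction
from math import ceil

def rel_to_abs(rel, eps: Fraction) -> Fraction:
    """Achieve absolute error <= eps using an oracle rel(eta) that
    returns a rational r with |v - r| <= eta * |v|."""
    r1 = rel(Fraction(1, 2))          # |v - r1| <= |v|/2, hence |v| <= 2|r1|
    if r1 == 0:                       # then v = 0 and the error is already 0
        return Fraction(0)
    eta2 = eps / (2 * abs(r1))
    return rel(eta2)                  # |v - r2| <= eta2 * |v| <= eps

def make_rel_oracle(v: Fraction):
    """Simulated REL oracle: rounds v to a denominator fine enough for eta."""
    def rel(eta: Fraction) -> Fraction:
        if v == 0:
            return Fraction(0)
        q = ceil(1 / (eta * abs(v)))  # rounding error 1/(2q) <= eta*|v|
        return Fraction(round(v * q), q)
    return rel

v = Fraction(10, 3)
r = rel_to_abs(make_rel_oracle(v), Fraction(1, 1000))
assert abs(v - r) <= Fraction(1, 1000)
```

Exact rational arithmetic (`fractions.Fraction`) is used so that the error bounds hold exactly rather than up to floating-point rounding.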
The reverse implication, namely that polynomial computability with absolute error implies polynomial computability with relative error, is generally not true without imposing additional conditions or assumptions. However, a significant special case exists: if one can assume that some positive lower bound ''b'' on the magnitude of ''v'' (i.e., |''v''| > ''b'' > 0) can itself be computed in polynomial time, and if ''v'' is also known to be polynomially computable with absolute error (perhaps via an algorithm designated as ABS), then ''v'' also becomes polynomially computable with relative error. This is because one can simply invoke the ABS algorithm with a carefully chosen target absolute error, specifically ''ε<sub>target</sub>'' = ''ηb'', where ''η'' is the desired relative error. The resulting approximation ''v''<sub>approx</sub> would satisfy |''v'' − ''v''<sub>approx</sub>| ≤ ''ηb''. To see the implication for relative error, we divide by |''v''| (which is non-zero): |(''v'' − ''v''<sub>approx</sub>)/''v''| ≤ (''ηb'')/|''v''|. Since we have the condition |''v''| > ''b'', it follows that ''b''/|''v''| < 1. Therefore, the relative error is bounded by ''η'' × (''b''/|''v''|) < ''η'' × 1 = ''η'', which is the desired outcome for polynomial computability with relative error.
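This special case is even simpler to express in code (again a sketch with hypothetical names, with the ABS oracle simulated by rounding a known rational value):

```python
from fractions import Fraction
from math import ceil

def abs_to_rel(abs_oracle, b: Fraction, eta: Fraction) -> Fraction:
    """Given abs_oracle(eps) returning r with |v - r| <= eps, and a known
    lower bound 0 < b < |v|, achieve relative error <= eta by requesting
    absolute error eta * b."""
    return abs_oracle(eta * b)

def make_abs_oracle(v: Fraction):
    """Simulated ABS oracle: rounds v to a denominator fine enough for eps."""
    def oracle(eps: Fraction) -> Fraction:
        q = ceil(1 / eps)             # rounding error 1/(2q) <= eps
        return Fraction(round(v * q), q)
    return oracle

v = Fraction(7, 3)                    # |v| > 2, so b = 2 is a valid lower bound
r = abs_to_rel(make_abs_oracle(v), Fraction(2), Fraction(1, 100))
assert abs(v - r) / abs(v) <= Fraction(1, 100)
```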
An algorithm that, for every given rational number ''η'' > 0, computes a rational number ''v''<sub>approx</sub> approximating ''v'' with relative error at most ''η'', in time polynomial in the size of the input and in 1/''η'', is known as a [[Fully Polynomial-Time Approximation Scheme|Fully Polynomial-Time Approximation Scheme (FPTAS)]]. Note that polynomiality in 1/''η'' is a weaker requirement than polynomiality in the encoding size log(1/''η''): since 1/''η'' grows exponentially faster than log(1/''η'') as ''η'' shrinks, an FPTAS is permitted considerably more time for very small ''η''. This dependence on 1/''η'' rather than log(1/''η'') is the defining characteristic of an FPTAS.
==Instruments==
In the context of most indicating measurement instruments, such as analog or digital voltmeters, pressure gauges, and thermometers, the specified accuracy is frequently guaranteed as a percentage of the full-scale reading rather than of the actual reading. The limits of these deviations from the specified values are known as '''limiting errors''' or '''guarantee errors'''; one consequence is that the relative error of a measurement grows as the reading falls toward the lower end of the instrument's range.
==Generalizations==
The fundamental definitions of absolute and relative error, as presented primarily for scalar (one-dimensional) values, can be naturally and rigorously extended to more complex scenarios where the quantity of interest <math>v</math> and its corresponding approximation <math>v_{\text{approx}}</math> are [[Euclidean vector|''n''-dimensional vectors]], matrices, or, more generally, elements of a [[normed vector space]]. This important generalization is typically achieved by systematically replacing the [[absolute value]] function (which effectively measures magnitude or "size" for scalar numbers) with an appropriate [[norm (mathematics)|vector ''n''-norm]] or [[matrix norm]]. Common examples of such norms include the L<sub>1</sub> norm (sum of absolute component values), the L<sub>2</sub> norm (Euclidean norm, or square root of the sum of squared components), and the L<sub>∞</sub> norm (maximum absolute component value). These norms provide a way to quantify the "distance" or "difference" between the true vector (or matrix) and its approximation in a multi-dimensional space, thereby allowing for analogous definitions of absolute and relative error in these higher-dimensional contexts.<ref name="GOLUB_MAT_COMP2.2.3">{{cite book |last=Golub |first=Gene |title=Matrix Computations |edition=Third |author2=Charles F. Van Loan |publisher=The Johns Hopkins University Press |year=1996 |isbn=0-8018-5413-X |___location=Baltimore |pages=53 |author-link=Gene H. Golub}}
</ref>
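As a sketch of this generalization, the scalar definitions carry over by replacing the absolute value with a norm; the helpers below (hypothetical names, Python standard library only) support the L<sub>1</sub>, L<sub>2</sub>, and L<sub>∞</sub> norms:

```python
import math
from typing import Sequence

def vector_norm(x: Sequence[float], p) -> float:
    """L1, L2 (Euclidean), or L-infinity norm of a vector."""
    if p == 1:
        return sum(abs(c) for c in x)
    if p == 2:
        return math.sqrt(sum(c * c for c in x))
    if p == math.inf:
        return max(abs(c) for c in x)
    raise ValueError("p must be 1, 2, or math.inf")

def absolute_error_vec(v, v_approx, p=2):
    """Norm of the componentwise difference between v and its approximation."""
    return vector_norm([a - b for a, b in zip(v, v_approx)], p)

def relative_error_vec(v, v_approx, p=2):
    """Absolute error divided by the norm of the true vector (nonzero v)."""
    return absolute_error_vec(v, v_approx, p) / vector_norm(v, p)
```

For example, approximating the vector (3, 4) by (0, 0) gives an L<sub>2</sub> absolute error of 5, an L<sub>1</sub> error of 7, and an L<sub>∞</sub> error of 4.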
==See also==