In the mathematical sub-field of numerical analysis, the approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because
- the measurement of the data is not precise (due to the instruments), or
- approximations are used instead of the real data (e.g., 3.14 instead of π).
One commonly distinguishes between the relative error and the absolute error.
In numerical analysis, the numerical stability of an algorithm indicates how errors in the input are propagated by the algorithm.
Definitions
Given some value a and an approximation b of a, the absolute error is
$$\epsilon = |a - b|,$$
the relative error is
$$\eta = \frac{\epsilon}{|a|} = \left|\frac{a - b}{a}\right| = \left|1 - \frac{b}{a}\right| \quad \text{if } a \neq 0,$$
and the percent error is
$$\delta = 100\% \times \eta = 100\% \times \frac{\epsilon}{|a|},$$
where the vertical bars denote the absolute value, a represents the true value, and b represents the approximation to a.
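The definitions above can be sketched in a few lines of Python; the function names are illustrative, not from the source, and the relative error is left undefined for a = 0 as in the formula.

```python
import math

def absolute_error(a, b):
    # epsilon = |a - b|
    return abs(a - b)

def relative_error(a, b):
    # eta = |a - b| / |a|, defined only for a != 0
    if a == 0:
        raise ValueError("relative error is undefined for a == 0")
    return abs(a - b) / abs(a)

def percent_error(a, b):
    # delta = 100% * eta
    return 100 * relative_error(a, b)

# Example from the introduction: 3.14 as an approximation of pi.
print(absolute_error(math.pi, 3.14))   # ~0.00159
print(relative_error(math.pi, 3.14))   # ~0.000507
```

Note that the relative error is dimensionless, which is why it is the usual choice when comparing approximations of quantities with different magnitudes.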
Linear Algebra Definition
In numerical linear algebra, if $\hat{x}$ is an approximation of a vector x satisfying
$$\frac{\|x - \hat{x}\|_\infty}{\|x\|_\infty} \approx 10^{-p},$$
where $\|x\|_\infty$ is the infinity norm (the p-norm with $p = \infty$) of the vector x, then p is approximately the number of significant figures of the largest magnitude entry of x.[1]
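A minimal sketch of the norm-based relative error, using only the standard library (the infinity norm is just the largest absolute entry, so no linear algebra package is needed; the names are illustrative):

```python
import math

def inf_norm(v):
    # infinity norm: the largest absolute entry of the vector
    return max(abs(c) for c in v)

def vector_relative_error(x, x_hat):
    # ||x - x_hat||_inf / ||x||_inf
    diff = [a - b for a, b in zip(x, x_hat)]
    return inf_norm(diff) / inf_norm(x)

x     = [100.0, 1.0]
x_hat = [100.1, 1.0]   # largest entry wrong in the 4th significant figure

eta = vector_relative_error(x, x_hat)
p = -math.log10(eta)   # eta ~ 10^-p, so p ~ 3 correct significant figures
print(eta, p)
```

Here the relative error is about $10^{-3}$, matching the statement that the largest-magnitude entry carries roughly three significant figures.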
Propagation of errors
When calculating with approximate values, it is important to be able to bound the errors that propagate into the result.
For measured values X and Y with absolute errors $\Delta X$ and $\Delta Y$ and relative errors $\delta X = \Delta X / |X|$ and $\delta Y = \Delta Y / |Y|$ respectively, we can use:
- For $Z = X + Y$: $\Delta Z = \Delta X + \Delta Y$
- For $Z = X - Y$: $\Delta Z = \Delta X + \Delta Y$
- For $Z = X \times Y$: $\delta Z \approx \delta X + \delta Y$
- For $Z = X / Y$: $\delta Z \approx \delta X + \delta Y$
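The propagation rules above can be sketched as follows; this is a worst-case bound (errors assumed to add, not cancel), and the function names are illustrative:

```python
def propagate_sum(dx, dy):
    # Z = X + Y or Z = X - Y: absolute errors add in the worst case
    return dx + dy

def propagate_product(x, dx, y, dy):
    # Z = X * Y or Z = X / Y: relative errors add in the worst case;
    # convert back to an absolute error bound on Z = X * Y
    rel = dx / abs(x) + dy / abs(y)
    return abs(x * y) * rel

# X = 2.0 +/- 0.1, Y = 3.0 +/- 0.2
print(propagate_sum(0.1, 0.2))                # 0.3
print(propagate_product(2.0, 0.1, 3.0, 0.2))  # 6 * (0.05 + 0.0667) ~ 0.7
```

In practice, statistically independent errors are often combined in quadrature (square root of the sum of squares) instead, which gives a tighter estimate than this worst-case sum.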
References
- ^ Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: The Johns Hopkins University Press. p. 53. ISBN 0-8018-5413-X.