Approximation error


In the mathematical subfield of numerical analysis, the approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because

  1. the measurement of the data is not precise (due to the instruments), or
  2. approximations are used instead of the real data (e.g., 3.14 instead of π).

One commonly distinguishes between the relative error and the absolute error.

The numerical stability of an algorithm in numerical analysis indicates how the error is propagated by the algorithm.

Definitions

Given some value a and an approximation b of a, the absolute error is

$\epsilon = |a - b|,$

the relative error is

$\eta = \frac{|a - b|}{|a|} = \left| \frac{a - b}{a} \right| \quad (a \neq 0),$

and the percent error is

$\delta = 100\% \times \eta = 100\% \times \frac{|a - b|}{|a|},$

where the vertical bars denote the absolute value, a represents the true value, and b represents the approximation to a.
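These three quantities can be computed directly. The following short Python sketch (not part of the original article; the function names are illustrative) evaluates them for the example from the introduction, using 3.14 in place of π:

```python
import math

def absolute_error(a, b):
    """Absolute error |a - b| of the approximation b to the true value a."""
    return abs(a - b)

def relative_error(a, b):
    """Relative error |a - b| / |a|; requires a != 0."""
    return abs(a - b) / abs(a)

def percent_error(a, b):
    """Percent error: the relative error expressed as a percentage."""
    return 100.0 * relative_error(a, b)

# Example from the introduction: using 3.14 in place of pi.
a, b = math.pi, 3.14
print(absolute_error(a, b))   # about 0.00159
print(relative_error(a, b))   # about 0.000507
print(percent_error(a, b))    # about 0.0507 (%)
```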

Propagation of errors

When computing with approximate values, it is important to be able to estimate the size of the errors in the result.

For measured values $X$ and $Y$ with absolute errors $\Delta X$ and $\Delta Y$ and relative errors $\eta_X$ and $\eta_Y$ respectively, we can use the following rules (illustrated numerically in the sketch after the list):

  • For $X + Y$: the absolute error of the sum is at most $\Delta X + \Delta Y$.
  • For $X - Y$: the absolute error of the difference is at most $\Delta X + \Delta Y$.
  • For $X \times Y$: the relative error of the product is approximately $\eta_X + \eta_Y$.
  • For $X / Y$: the relative error of the quotient is approximately $\eta_X + \eta_Y$.
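A minimal numerical check of these rules in Python (the measured values and errors below are made up for illustration and are not from the article):

```python
# Hypothetical measured quantities: true values and absolute errors.
X_true, Y_true = 2.0, 3.0
dX, dY = 0.01, 0.02                      # absolute errors
X, Y = X_true + dX, Y_true + dY          # measured (approximate) values

eta_X = dX / abs(X_true)                 # relative errors
eta_Y = dY / abs(Y_true)

# Sum and difference: the absolute error is bounded by dX + dY.
print(abs((X + Y) - (X_true + Y_true)), "<=", dX + dY)
print(abs((X - Y) - (X_true - Y_true)), "<=", dX + dY)

# Product and quotient: the relative error is roughly eta_X + eta_Y.
# This is a first-order, worst-case estimate: the actual error can be smaller
# when the individual errors partially cancel (as in the quotient here), or
# exceed the estimate by a second-order term dX*dY/|X*Y| (as in the product).
print(abs(X * Y - X_true * Y_true) / abs(X_true * Y_true), "vs", eta_X + eta_Y)
print(abs(X / Y - X_true / Y_true) / abs(X_true / Y_true), "vs", eta_X + eta_Y)
```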