Numerical error
In [[software engineering]] and [[mathematics]], '''numerical error''' is the combined effect of two kinds of error in a calculation. The first is caused by the finite precision of computations involving [[floating-point]] or integer values. The second (sometimes called the ''theoretical truncation error'') is the difference between the exact mathematical solution and the approximate solution obtained when the mathematical equations are simplified to make them more amenable to calculation. The term ''truncation'' comes from the fact that these simplifications usually involve truncating an infinite series expansion so as to make the computation possible and practical.
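The two kinds of error can be illustrated with a short Python sketch (an illustrative example, not from the article): rounding error arises from finite-precision arithmetic, while truncation error arises from cutting off an infinite series, here a truncated Taylor series for ''e''<sup>''x''</sup>.

```python
import math

# Rounding error: in IEEE 754 binary64 arithmetic, 0.1 + 0.2 is not
# exactly equal to 0.3, because none of these values is exactly
# representable in binary floating point.
rounding_err = abs((0.1 + 0.2) - 0.3)

# Truncation error: approximate e^x by truncating its infinite
# Taylor series sum(x^k / k!) after n terms.
def exp_taylor(x, n):
    return sum(x**k / math.factorial(k) for k in range(n))

truncation_err = abs(math.exp(1.0) - exp_taylor(1.0, 5))

print(rounding_err)    # tiny: on the order of 1e-17
print(truncation_err)  # much larger: about 0.0099
```

Note that the two errors have very different magnitudes here: the truncation error is governed by how many series terms are kept, while the rounding error is bounded by the precision of the floating-point format.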
 
Floating-point numerical error is often measured in ULP ([[unit in the last place]]).
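As a sketch of measuring error in ULPs, Python's `math.ulp` (available in Python 3.9+) returns the value of one unit in the last place at a given float; dividing an absolute error by it expresses that error in ULPs:

```python
import math

# Express the rounding error of 0.1 + 0.2 in ULPs of 0.3.
x = 0.3
err = abs((0.1 + 0.2) - x)       # absolute rounding error of the sum
err_in_ulps = err / math.ulp(x)  # the same error measured in ULPs

print(err_in_ulps)  # the computed sum is exactly 1 ULP away from 0.3
```

Because 0.1 + 0.2 rounds to the binary64 float immediately above 0.3, the measured error is exactly one ULP.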