===Numerical stability and well-posed problems===
[[Numerical stability]] is a notion in numerical analysis. An algorithm is called ''[[numerically stable]]'' if an error, whatever its cause, does not grow to be much larger during the calculation.<ref name="stab">{{harvnb|Higham|2002}}</ref> This is possible if the problem is ''[[condition number|well-conditioned]]'', meaning that the solution changes by only a small amount if the problem data are changed by a small amount.<ref name="stab"/> Conversely, if a problem is ''ill-conditioned'', then any small error in the data will grow to be a large error.<ref name="stab"/>
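The effect of conditioning can be seen numerically. As an illustrative sketch (the function and perturbation size here are chosen for illustration and are not from the cited source), evaluating tan(''x'') is well-conditioned near ''x'' = 0.1 but ill-conditioned near ''x'' = π/2, where the same tiny change in the input produces an enormous change in the output:

```python
import math

# Illustrative sketch: the same tiny input perturbation has wildly
# different effects depending on how well-conditioned the problem is.
delta = 1e-9                    # small change in the problem data
x_good = 0.1                    # evaluating tan here is well-conditioned
x_bad = math.pi / 2 - 1e-6      # evaluating tan here is ill-conditioned

for x in (x_good, x_bad):
    change = abs(math.tan(x + delta) - math.tan(x))
    print(f"x = {x:.6f}: output change = {change:.3e}")
```

Near ''x'' = 0.1 the output moves by roughly the same 10<sup>−9</sup> as the input, while near π/2 it moves by many orders of magnitude more.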
Both the original problem and the algorithm used to solve that problem can be 'well-conditioned' or 'ill-conditioned', and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation ''x''<sub>0</sub> to <math>\sqrt{2}</math>, for instance ''x''<sub>0</sub> = 1.4, and then computing improved guesses ''x''<sub>1</sub>, ''x''<sub>2</sub>, etc. One such method is the famous [[Babylonian method]], which is given by ''x''<sub>''k''+1</sub> = ''x<sub>k</sub>''/2 + 1/''x<sub>k</sub>''. Another method, called 'method X', is given by ''x''<sub>''k''+1</sub> = (''x''<sub>''k''</sub><sup>2</sup> − 2)<sup>2</sup> + ''x''<sub>''k''</sub>.{{NoteTag|This is a [[fixed point iteration]] for the equation <math>x=(x^2-2)^2+x=f(x)</math>, whose solutions include <math>\sqrt{2}</math>. The iterates always move to the right since <math>f(x)\geq x</math>. Hence the iteration converges for <math>x_0=1.4<\sqrt{2}</math> and diverges for <math>x_0=1.42>\sqrt{2}</math>.}} A few iterations of each scheme are calculated in table form below, with initial guesses ''x''<sub>0</sub> = 1.4 and ''x''<sub>0</sub> = 1.42.
{| class="wikitable"
|-
! Babylonian
! Babylonian
! Method X
! Method X
|-
| ''x''<sub>0</sub> = 1.4
| ''x''<sub>0</sub> = 1.42
| ''x''<sub>0</sub> = 1.4
| ''x''<sub>0</sub> = 1.42
|-
| ''x''<sub>1</sub> = 1.4142857...
| ''x''<sub>1</sub> = 1.41422535...
| ''x''<sub>1</sub> = 1.4016
| ''x''<sub>1</sub> = 1.42026896
|-
| ''x''<sub>2</sub> = 1.414213564...
| ''x''<sub>2</sub> = 1.41421356242...
| ''x''<sub>2</sub> = 1.4028614...
| ''x''<sub>2</sub> = 1.42056...
|-
|
|
| ...
| ...
|-
|
|
| ''x''<sub>1000000</sub> = 1.41421...
| ''x''<sub>27</sub> = 7280.2284...
|}
Observe that the Babylonian method converges quickly regardless of the initial guess, whereas Method X converges extremely slowly with initial guess ''x''<sub>0</sub> = 1.4 and diverges for initial guess ''x''<sub>0</sub> = 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
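The behaviour in the table can be reproduced with a short program. The following is a minimal sketch in Python using ordinary double-precision floats (the function names are ours, chosen for illustration):

```python
def babylonian(x, steps):
    # Babylonian method: x_{k+1} = x_k/2 + 1/x_k
    for _ in range(steps):
        x = x / 2 + 1 / x
    return x

def method_x(x, steps):
    # "Method X": x_{k+1} = (x_k^2 - 2)^2 + x_k
    for _ in range(steps):
        x = (x * x - 2) ** 2 + x
    return x

print(babylonian(1.4, 3))    # 1.41421356... after just a few steps
print(babylonian(1.42, 3))   # likewise converged
print(method_x(1.4, 2))      # 1.4028614..., creeping toward sqrt(2)
print(method_x(1.42, 27))    # in the thousands and diverging
```

Running a few more steps of the divergent case sends the iterate toward infinity, while extra Babylonian steps leave the result fixed at machine precision, matching the table above.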
:'''Numerical stability''' is affected by the number of significant digits the machine keeps. If a machine is used that keeps only the four most significant decimal digits, a good example of loss of significance is given by the two equivalent functions
:<math>
f(x)=x\left(\sqrt{x+1}-\sqrt{x}\right)
</math> and <math>
g(x)=\frac{x}{\sqrt{x+1}+\sqrt{x}}.
</math>
:Comparing the results of
:: <math> f(500)=500 \left(\sqrt{501}-\sqrt{500} \right)=500 \left(22.38-22.36 \right)=500(0.02)=10</math>
:and
:<math>
\begin{alignat}{3}g(500)&=\frac{500}{\sqrt{501}+\sqrt{500}}\\
&=\frac{500}{22.38+22.36}\\
&=\frac{500}{44.74}=11.17
\end{alignat}
</math>
: it is clear that [[loss of significance]] (caused here by [[catastrophic cancellation]] from subtracting approximations to the nearby numbers <math>\sqrt{501}</math> and <math>\sqrt{500}</math>, even though the subtraction itself is computed exactly) has a huge effect on the results, despite the two functions being equivalent, as shown below
:: <math> \begin{alignat}{4}
f(x)&=x \left(\sqrt{x+1}-\sqrt{x} \right)\\
&=x \left(\sqrt{x+1}-\sqrt{x} \right)\frac{\sqrt{x+1}+\sqrt{x}}{\sqrt{x+1}+\sqrt{x}}\\
&=x\frac{(\sqrt{x+1})^2-(\sqrt{x})^2}{\sqrt{x+1}+\sqrt{x}}\\
&=x\frac{x+1-x}{\sqrt{x+1}+\sqrt{x}} \\
&=x\frac{1}{\sqrt{x+1}+\sqrt{x}} \\
&=\frac {x}{\sqrt{x+1}+\sqrt{x}} \\
&=g(x)
\end{alignat}</math>
: The desired value, computed using infinite precision, is 11.174755...{{NoteTag|The example is a modification of one taken from {{harvtxt|Mathews|Fink|1999}}.<ref>{{cite book|page=28|contribution=Example 1.17|title=Numerical Methods Using MATLAB|edition=3rd|first1=John H.|last1=Mathews|first2=Kurtis D.|last2=Fink|publisher=Prentice Hall|year=1999}}</ref>}}
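A machine keeping four significant decimal digits can be simulated with Python's <code>decimal</code> module by setting the context precision to 4. This is a sketch, not the source's method; chopping (<code>ROUND_DOWN</code>) is assumed here so that the quotient matches the 11.17 shown above:

```python
from decimal import Decimal, getcontext, ROUND_DOWN

# Simulate a machine that keeps four significant decimal digits.
# ROUND_DOWN (chopping) is an assumption made to match the worked example.
getcontext().prec = 4
getcontext().rounding = ROUND_DOWN

x = Decimal(500)
r1 = (x + 1).sqrt()     # 22.38
r2 = x.sqrt()           # 22.36
f = x * (r1 - r2)       # 500 * 0.02  -> 10.00 (catastrophic cancellation)
g = x / (r1 + r2)       # 500 / 44.74 -> 11.17 (close to 11.174755...)
print(f, g)
```

The two square roots agree in their first three digits, so their four-digit difference retains only one significant digit, which is exactly the loss of significance described above; the reformulated ''g'' avoids the subtraction entirely.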
==Areas of study==