Numerical analysis

{{Use dmy dates|date=October 2020}}
[[Image:Ybc7289-bw.jpg|thumb|250px|right|Babylonian clay tablet [[YBC 7289]] (c. 1800–1600 BC) with annotations. The approximation of the [[square root of 2]] is four [[sexagesimal]] figures, which is about six [[decimal]] figures. 1 + 24/60 + 51/60<sup>2</sup> + 10/60<sup>3</sup> = 1.41421296...<ref>{{Cite web |url=http://it.stlawu.edu/%7Edmelvill/mesomath/tablets/YBC7289.html |title=Photograph, illustration, and description of the ''root(2)'' tablet from the Yale Babylonian Collection |access-date=2 October 2006 |archive-date=13 August 2012 |archive-url=https://web.archive.org/web/20120813054036/http://it.stlawu.edu/%7Edmelvill/mesomath/tablets/YBC7289.html |url-status=dead }}</ref>]]
'''Numerical analysis''' is the study of [[algorithm]]s that use numerical [[approximation]] (as opposed to [[symbolic computation|symbolic manipulations]]) for the problems of [[mathematical analysis]] (as distinguished from [[discrete mathematics]]). It is the study of numerical methods that find approximate solutions to problems, to any desired level of accuracy, rather than exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also in the life and social sciences, medicine, business and even the arts. The growth in computing power (including [[GPU]]s) has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: [[ordinary differential equation]]s as found in [[celestial mechanics]] (predicting the motions of planets, stars and galaxies), [[numerical linear algebra]] in data analysis,<ref>{{cite book |first=J.W. |last=Demmel |title=Applied numerical linear algebra |publisher=[[Society for Industrial and Applied Mathematics|SIAM]] |date=1997 |isbn=978-1-61197-144-6 |doi=10.1137/1.9781611971446 |url=https://epubs.siam.org/doi/epdf/10.1137/1.9781611971446.fm}}</ref><ref>{{cite book |last1=Ciarlet |first1=P.G. |last2=Miara |first2=B. |last3=Thomas |first3=J.M. |title=Introduction to numerical linear algebra and optimization |publisher=Cambridge University Press |date=1989 |isbn=9780521327886 |oclc=877155729 }}</ref><ref>{{cite book |last1=Trefethen |first1=Lloyd |last2=Bau III |first2=David |title=Numerical Linear Algebra |publisher=SIAM |date=1997 |isbn=978-0-89871-361-9 |url={{GBurl|4Mou5YpRD_kC|pg=PR7}}}}</ref> and [[stochastic differential equation]]s and [[Markov chain]]s for simulating living cells in medicine and biology.
 
Before modern computers, [[numerical method]]s often relied on hand [[interpolation]] formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required [[Function (mathematics)|functions]] instead, but many of the same formulas continue to be used in software algorithms.<ref name="20c">{{cite book |last1=Brezinski |first1=C. |last2=Wuytack |first2=L. |title=Numerical analysis: Historical developments in the 20th century |publisher=Elsevier |date=2012 |isbn=978-0-444-59858-5 |url={{GBurl|dt3Z1yu2VxwC|pg=PP6}}}}</ref>
 
==History==
 
 
The numerical point of view goes back to the earliest mathematical writings. A tablet from the [[Yale Babylonian Collection]] ([[YBC 7289]]) gives a [[sexagesimal]] numerical approximation of the [[square root of 2]], the length of the [[diagonal]] in a [[unit square]].
 
Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which must be translated into digits before they can be applied to real-world measurements, it produces approximate solutions within specified error bounds, to any required accuracy.
 
==General introduction==
Also, any [[spreadsheet]] [[software]] can be used to solve simple problems relating to numerical analysis.
[[Microsoft Excel|Excel]], for example, has hundreds of [[Microsoft Excel#Functions|available functions]], including for matrices, which may be used in conjunction with its [[Microsoft Excel#Add-ins|built-in "solver"]].
 
==Summary of major numerical analysis applications and results==
 
;[[Root-finding algorithms|Root finding]]
* [[Bisection method]]: A simple method for finding roots of a continuous function by repeatedly halving an interval in which the function changes sign.
* [[Newton's method|Newton–Raphson method]]: An iterative technique for finding roots based on derivatives.
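Both root-finding methods can be sketched in a few lines of Python (an illustrative example; the function names and tolerances are our own choices). Fittingly, either one can reproduce the Babylonian approximation of the square root of 2:

```python
def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fa * fm <= 0:      # sign change in [a, m]: keep the left half
            b = m
        else:                 # otherwise the root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of x^2 - 2 = 0.
root_bisect = bisect(lambda x: x * x - 2, 1.0, 2.0)
root_newton = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Bisection halves the error each step (linear convergence); Newton's method roughly doubles the number of correct digits per step near the root, but needs a derivative and a good starting point.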
 
;[[Linear algebra]]
* [[Gaussian elimination]]: A method for solving systems of linear equations.
* [[LU decomposition]]: Decomposing a matrix into lower and upper triangular factors for efficient solving.
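A minimal sketch of Gaussian elimination with partial pivoting (illustrative only; the `solve` helper is our own, operating on plain nested lists rather than a library matrix type):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are not modified.
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        # Partial pivoting: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4.
sol = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

The row operations recorded by the elimination are exactly the entries of the L factor in an LU decomposition; saving them lets the same factorization be reused for many right-hand sides.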
;[[Interpolation]] and approximation
 
* [[Lagrange interpolation]]: Constructing a polynomial that passes through given data points.
* [[Least squares approximation|Least-squares approximation]]: Minimizing the sum of the squares of the residuals for a set of data points.
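Lagrange interpolation can be sketched directly from its defining formula (an illustrative example; the function name and sample points are our own):

```python
def lagrange(points, x):
    """Evaluate at x the unique polynomial through the given (xi, yi) points."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        # Basis polynomial L_i: 1 at xi, 0 at every other node.
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Three samples of y = x^2 recover the parabola exactly: 1.5^2 = 2.25.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]
val = lagrange(pts, 1.5)
```

Because the interpolating polynomial of degree at most ''n'' through ''n''+1 points is unique, any polynomial of that degree is reproduced exactly from its samples, as the parabola above illustrates.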
 
;[[Numerical integration]]
 
* [[Trapezoidal rule]] and [[Simpson's rule]]: Approximating definite integrals using simple geometric shapes.
* [[Gaussian quadrature]]: Using a weighted sum of function values at specially chosen points for highly accurate integration.
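The composite trapezoidal and Simpson rules can be sketched as follows (illustrative only; function names and step counts are our own choices):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    # Endpoints get weight 1/2, interior points weight 1.
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    # Interior points alternate weights 4, 2, 4, 2, ...
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The exact value of the integral of sin over [0, pi] is 2.
approx_t = trapezoid(math.sin, 0.0, math.pi, 1000)
approx_s = simpson(math.sin, 0.0, math.pi, 100)
```

The comparison illustrates the orders of accuracy: the trapezoidal error shrinks like ''h''², Simpson's like ''h''⁴, so Simpson's rule with ten times fewer points is still far more accurate here.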
;[[Differential equation|Differential equations]]
 
* [[Euler method|Euler's method]]: A simple numerical method for solving ordinary differential equations.
* [[Runge–Kutta methods]]: Higher-order methods for solving differential equations with improved accuracy.
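Euler's method and the classical fourth-order Runge–Kutta method (RK4) can be sketched side by side (an illustrative example; step sizes and names are our own):

```python
def euler(f, t0, y0, h, steps):
    """Euler's method for y' = f(t, y): follow the tangent line each step."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta: average four slope samples per step."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# y' = y with y(0) = 1 has the exact solution y(1) = e = 2.71828...
y_euler = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
y_rk4 = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

Even with a step size 100 times larger, RK4 is markedly more accurate than Euler's method here, reflecting its fourth-order versus first-order global error.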
;[[Mathematical optimization|Optimization]]
 
* [[Gradient descent]]: An iterative [[Mathematical optimization|optimization]] algorithm that finds a minimum of a function by repeatedly stepping against its gradient; a fundamental method with many applications.
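In one dimension the gradient-descent update is just a single line (an illustrative sketch; the learning rate and step count are our own choices):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function of one variable by following the negative gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)   # step against the slope
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), 0.0)
```

For this quadratic the iteration contracts the error by a constant factor each step, so it converges geometrically to the minimizer; choosing the learning rate too large would instead make it diverge.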
 
* [[Conjugate gradient method]]: An iterative method for solving symmetric positive-definite systems of linear equations and related optimization problems.
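A minimal conjugate-gradient sketch for a small symmetric positive-definite system (illustrative only; uses plain lists and our own helper lambdas rather than a linear-algebra library):

```python
def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]            # residual b - A x (x starts at zero)
    p = r[:]            # initial search direction
    rs = dot(r, r)
    for _ in range(10 * n):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)                       # optimal step along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        # New direction: residual made conjugate to previous directions.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 4x + y = 1 and x + 3y = 2 have the solution x = 1/11, y = 7/11.
x_cg = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic conjugate gradients terminates in at most ''n'' steps for an ''n''×''n'' system; in practice it is used as an iterative method on large sparse systems, stopped once the residual is small enough.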
 
;[[Eigenvalues and eigenvectors|Eigenvalue problems]]
 
* [[Power iteration]]: An iterative method for finding the dominant eigenvalue and its corresponding eigenvector.
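Power iteration is short enough to sketch directly (illustrative only; the normalization and Rayleigh-quotient details are our own choices):

```python
def power_iteration(A, steps=100):
    """Estimate the dominant eigenvalue/eigenvector of A by repeated multiplication."""
    n = len(A)
    v = [1.0] * n
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(c) for c in w)   # rescale to avoid overflow/underflow
        v = [c / norm for c in w]
    # The Rayleigh quotient v.Av / v.v gives the eigenvalue estimate.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(a * b for a, b in zip(Av, v)) / sum(b * b for b in v)
    return lam, v

# diag(2, 1) has dominant eigenvalue 2 with eigenvector (1, 0).
lam, v = power_iteration([[2.0, 0.0], [0.0, 1.0]])
```

Convergence is geometric at the rate |λ₂/λ₁|, the ratio of the two largest eigenvalue magnitudes, so the method is fast when the dominant eigenvalue is well separated.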
 
* [[QR decomposition|QR factorization]]: Decomposing a matrix into the product of an orthogonal matrix and an upper triangular matrix; repeated QR steps (the [[QR algorithm]]) are used to find eigenvalues.
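A sketch of the idea (illustrative only, not production-quality): factor via classical Gram–Schmidt, then run the unshifted QR iteration ''A'' ← ''RQ'', a simplified form of the practical shifted QR algorithm.

```python
def qr_gram_schmidt(A):
    """Factor A = Q R via classical Gram-Schmidt (fine for small, well-behaved A)."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]   # columns of A
    Q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        v = a[:]
        # Subtract projections onto the already-orthonormalized columns.
        for i, q in enumerate(Q_cols):
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = sum(vk * vk for vk in v) ** 0.5
        Q_cols.append([vk / R[j][j] for vk in v])
    Q = [[Q_cols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def eigenvalues_qr(A, iters=200):
    """Unshifted QR iteration: A <- R Q drives A toward triangular form,
    whose diagonal then holds the eigenvalues."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_gram_schmidt(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted(A[i][i] for i in range(n))

# The symmetric matrix [[2, 1], [1, 2]] has eigenvalues 1 and 3.
eigs = eigenvalues_qr([[2.0, 1.0], [1.0, 2.0]])
```

Each step ''RQ'' = ''Q''⁻¹''AQ'' is a similarity transform, so the eigenvalues are preserved while the off-diagonal entries decay.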
 
;[[Numerical stability]]
 
* Conditioning: Assessing how sensitive a problem's solution is to small changes in the input data. The [[condition number]] quantifies this sensitivity; a matrix with a large condition number is called ill-conditioned.
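Ill-conditioning is easy to demonstrate with a nearly singular 2×2 system (an illustrative example; the matrix and the Cramer's-rule helper are our own):

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    y = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return x, y

# A nearly singular, hence ill-conditioned, matrix.
A = [[1.0, 1.0], [1.0, 1.0001]]
x1 = solve2(A, [2.0, 2.0])
x2 = solve2(A, [2.0, 2.0002])   # perturb b by one part in ten thousand
# The solution jumps from about (2, 0) to about (0, 2).
shift = max(abs(a - b) for a, b in zip(x1, x2))
```

A relative input change of 10⁻⁴ produces an order-one change in the solution: the amplification factor is roughly the condition number of ''A''. This is a property of the problem itself, independent of which algorithm solves it.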
 
* Stability of algorithms: Evaluating how rounding and truncation errors propagate through a numerical method.

;[[Finite difference method|Finite difference methods]]
 
* [[Explicit and implicit methods]]: Techniques for solving partial differential equations numerically.
* [[Crank-Nicolson method|Crank–Nicolson method]]: A compromise between explicit and implicit methods offering better stability.

;[[Monte Carlo method|Monte Carlo methods]]
 
* [[Random sampling]]: Using repeated random sampling to estimate numerical results.
* [[Markov chain Monte Carlo]] (MCMC): A class of algorithms for sampling from probability distributions.
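The classic Monte Carlo example estimates π by random sampling (illustrative only; the function name and fixed seed are our own choices, the seed making the run reproducible):

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the unit
    square that land inside the quarter circle, times 4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

pi_est = estimate_pi(100_000)
```

The statistical error shrinks only like 1/√''n'', slow compared with deterministic quadrature in low dimensions, but independent of dimension, which is why Monte Carlo methods dominate in high-dimensional integration.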
;[[Fast Fourier transform]] (FFT)
 
* An efficient algorithm for computing the [[discrete Fourier transform]], essential in signal processing and numerical simulations.
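The divide-and-conquer idea behind the Cooley–Tukey FFT fits in a short recursive sketch (illustrative only; a real implementation would be iterative and handle arbitrary lengths):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Split into even- and odd-indexed subsequences and transform each.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Combine with the twiddle factors exp(-2*pi*i*k/n).
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddle[k] for k in range(n // 2)] +
            [even[k] - twiddle[k] for k in range(n // 2)])

# The DFT of a unit impulse [1, 0, 0, 0] is flat: [1, 1, 1, 1].
spectrum = fft([1.0, 0.0, 0.0, 0.0])
```

Halving the problem at each level reduces the cost from O(''n''²) for the naive discrete Fourier transform to O(''n'' log ''n'').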
 
;[[Sparse matrix]] techniques
 
* [[Iterative solver|Iterative solvers]]: Methods tailored for efficient solution of large sparse linear systems.
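As a sketch of exploiting sparsity, here is Jacobi iteration on a matrix stored as, per row, its off-diagonal (column, value) pairs plus a diagonal array (an illustrative toy; real codes use formats such as compressed sparse row and stronger methods such as conjugate gradients):

```python
def jacobi_sparse(rows, diag, b, iters=200):
    """Jacobi iteration x_i <- (b_i - sum_offdiag a_ij x_j) / a_ii on a sparse
    matrix; converges for diagonally dominant systems."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(v * x[j] for j, v in rows[i])) / diag[i]
             for i in range(n)]
    return x

# Diagonally dominant tridiagonal system: 4 on the diagonal, -1 off it.
n = 5
rows = [[(j, -1.0) for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
x_jacobi = jacobi_sparse(rows, [4.0] * n, [1.0] * n)
```

Each sweep touches only the stored nonzeros, so the cost per iteration is proportional to the number of nonzero entries rather than to ''n''².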
 
 
==See also==