Numerical analysis: Difference between revisions

Added a section at the end on major applications of numerical analysis (NA) and added hyperlinks to wiki pages. This is a very important field today with the advent of the web, supercomputers and GPUs. This section has several possible extensions, such as the theorems of NA, which are key since some tell us how to create new or custom NA formulas for a given situation.
Tag: Reverted
Added more links to related wiki pages and a note on condition numbers, a key concept of NA.
Tag: Reverted
==Summary of Major Numerical Analysis Applications and Results==
 
[[Root-finding algorithms|Root Finding]]:
 
[[Bisection method|Bisection Method]]: A simple method for finding roots of a continuous function.
[[Newton's method|Newton-Raphson Method]]: An iterative technique for finding roots based on derivatives.
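Both root finders above can be sketched in a few lines of Python. The function names, tolerances, and the test function f(x) = x² − 2 (whose positive root is √2) are illustrative choices, not from the article:

```python
# Illustrative sketches of bisection and Newton-Raphson root finding.

def bisection(f, a, b, tol=1e-10):
    """Halve a sign-change bracket [a, b] until it is shorter than tol."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2.0
        if fa * f(m) <= 0:
            b = m              # root lies in [a, m]
        else:
            a, fa = m, f(m)    # root lies in [m, b]
    return (a + b) / 2.0

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
root_b = bisection(f, 1.0, 2.0)   # bracket contains sqrt(2)
root_n = newton(f, df, 1.5)
```

Bisection is slow but guaranteed once a sign change is bracketed; Newton-Raphson converges quadratically near a simple root but needs the derivative and a good starting point.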
 
[[Linear algebra|Linear Algebra]]:
 
[[Gaussian elimination|Gaussian Elimination]]: A method for solving systems of linear equations.
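A minimal sketch of Gaussian elimination with partial pivoting (a common refinement for numerical robustness); the helper name and the 2×2 example are our own:

```python
# Toy dense Gaussian elimination with partial pivoting, solving A x = b.

def gauss_solve(A, b):
    n = len(A)
    # Work on an augmented copy so the caller's data is untouched.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: move the largest entry in this column up.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4.
x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```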
 
[[Lagrange interpolation|Lagrange Interpolation]]: Constructing a polynomial that passes through given data points.
[[Least squares approximation|Least Squares Approximation]]: Minimizing the sum of the squares of the residuals for a set of data points.
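Both fitting ideas can be illustrated with short hand-rolled helpers; the names and sample data below are illustrative, not from the article:

```python
# Sketch: Lagrange interpolation and a linear least-squares fit.

def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def least_squares_line(xs, ys):
    """Fit y = a + b*x by minimising the sum of squared residuals."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# The samples (0,0), (1,1), (2,4) lie on y = x^2, so p(1.5) = 2.25.
val = lagrange([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
# These points lie exactly on y = 1 + 2x, so the fit recovers a=1, b=2.
a, b = least_squares_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```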
 
[[Numerical integration|Numerical Integration]]:
 
[[Trapezoidal rule|Trapezoidal Rule]], [[Simpson's rule|Simpson's Rule]]: Approximating definite integrals using simple geometric shapes.
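A sketch of both composite rules, checked against the exact value of the integral of sin on [0, π], which is 2 (the panel counts are arbitrary choices):

```python
# Composite trapezoidal and Simpson's rules on [a, b] with n panels.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3.0

t = trapezoid(math.sin, 0.0, math.pi, 1000)  # O(h^2) accurate
s = simpson(math.sin, 0.0, math.pi, 1000)    # O(h^4) accurate
```

Simpson's rule fits parabolas instead of straight lines over each pair of panels, which is why its error shrinks two orders faster as the panels are refined.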
[[Differential equation|Differential Equations]]:
 
[[Euler method|Euler's Method]]: A simple numerical method for solving ordinary differential equations.
[[Runge–Kutta methods|Runge-Kutta Methods]]: Higher-order methods for solving differential equations with improved accuracy.
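Euler's method and the classical fourth-order Runge-Kutta scheme (RK4) can be compared on y′ = y, y(0) = 1, whose exact value at t = 1 is e; step counts below are illustrative:

```python
# Forward Euler vs classical RK4 on the test problem y' = y.
import math

def euler(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)        # one first-order step
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)  # weighted slope average
        t += h
    return y

f = lambda t, y: y
e_euler = euler(f, 1.0, 0.0, 1.0, 1000)  # needs many steps for few digits
e_rk4 = rk4(f, 1.0, 0.0, 1.0, 100)       # far more accurate with fewer steps
```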
Optimization:
 
[[Gradient descent|Gradient Descent]]: An iterative [[Mathematical optimization|optimization]] algorithm for finding the minimum of a function. This is a fundamental result of optimization with many applications.
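A minimal sketch of gradient descent on the quadratic f(x, y) = x² + 10y²; the learning rate and iteration count are illustrative choices:

```python
# Gradient descent: repeatedly step against the gradient.

def gradient_descent(grad, x0, lr=0.05, steps=500):
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

grad = lambda p: [2 * p[0], 20 * p[1]]      # gradient of x^2 + 10 y^2
xmin = gradient_descent(grad, [3.0, 1.0])   # approaches the minimum (0, 0)
```

The mismatched curvatures (2 vs 20) are what make the step size delicate: a rate safe for the steep direction is slow for the shallow one, which is one motivation for the conjugate gradient and quasi-Newton methods.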
 
[[Conjugate gradient method|Conjugate Gradient Method]]: An iterative method for solving systems of linear equations and optimization problems.
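For the linear-system use of conjugate gradients, a hedged sketch on a small symmetric positive-definite system (the matrix and tolerance are our own; in exact arithmetic CG on an n×n system finishes in at most n steps):

```python
# Conjugate gradient for A x = b, A symmetric positive-definite.

def conjugate_gradient(A, b, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x with x = 0
    p = r[:]                 # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(10 * n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # New direction is conjugate (A-orthogonal) to the previous ones.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 4x + y = 1 and x + 3y = 2 have the solution x = 1/11, y = 7/11.
xcg = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```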
 
[[Eigenvalues and eigenvectors|Eigenvalue]] Problems:
 
[[Power iteration|Power Iteration]]: An iterative method for finding the dominant eigenvalue and corresponding eigenvector.
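Power iteration can be sketched on a 2×2 symmetric matrix whose eigenvalues are known; the matrix, starting vector, and iteration count are illustrative:

```python
# Power iteration: repeated multiplication by A amplifies the component
# along the dominant eigenvector.

def power_iteration(A, v0, iters=100):
    n = len(A)
    v = list(v0)
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(wi) for wi in w)   # rescale to avoid overflow
        v = [wi / norm for wi in w]
        lam = norm   # estimates |dominant eigenvalue| (positive here)
    return lam, v

# [[2,1],[1,2]] has eigenvalues 3 (eigenvector [1,1]) and 1 ([1,-1]).
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

Convergence is geometric with ratio |λ₂/λ₁| (here 1/3), so a starting vector with any component along the dominant eigenvector suffices.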
 
[[QR decomposition|QR Factorization]]: Decomposing a matrix into an orthogonal factor and an upper-triangular factor; repeated QR steps form the [[QR algorithm]] for computing eigenvalues.
[[Numerical stability|Numerical Stability]]:
 
Conditioning: Assessing how sensitive a problem is to changes in input data. See [[Condition number|Condition Number]], which measures how much small perturbations in the input can be amplified in the output; a matrix with a large condition number is called ill-conditioned.
 
Stability of Algorithms: Evaluating how errors propagate in numerical methods.
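For a 2×2 matrix the 2-norm condition number σ_max/σ_min can be computed directly from the eigenvalues of AᵀA; a sketch with an illustrative well-conditioned and a nearly singular example:

```python
# 2-norm condition number of a 2x2 matrix via its singular values.
import math

def cond_2x2(A):
    (a, b), (c, d) = A
    # Singular values are square roots of the eigenvalues of A^T A,
    # which has trace p and determinant q below.
    p = a * a + b * b + c * c + d * d
    q = (a * d - b * c) ** 2
    disc = math.sqrt(max(p * p - 4 * q, 0.0))
    smax = math.sqrt((p + disc) / 2)
    smin = math.sqrt((p - disc) / 2)
    return smax / smin

well = cond_2x2([[1.0, 0.0], [0.0, 1.0]])    # identity: condition number 1
ill = cond_2x2([[1.0, 1.0], [1.0, 1.0001]])  # nearly singular: very large
```

Roughly speaking, solving a linear system loses about log10(cond) decimal digits of accuracy, which is why the nearly singular matrix is dangerous even though it is invertible.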
[[Finite difference method|Finite Difference Methods]]:
 
[[Explicit and implicit methods|Explicit and Implicit Methods]]: Techniques for solving partial differential equations numerically.
[[Crank-Nicolson method|Crank-Nicolson Method]]: A compromise between explicit and implicit methods for better stability.
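The stability contrast shows up already on the scalar test problem y′ = λy with λ < 0: the explicit update multiplies by 1 + hλ, which can amplify, while Crank-Nicolson in time reduces to the trapezoidal update (1 + hλ/2)/(1 − hλ/2), which stays bounded for any step size. A sketch with illustrative λ and h:

```python
# Explicit vs Crank-Nicolson time stepping on the stiff problem
# y' = -50 y, y(0) = 1; the step size deliberately breaks the explicit scheme.

lam, h, steps = -50.0, 0.05, 20   # integrate to t = 1

# Explicit: amplification factor 1 + h*lam = -1.5, so |y| grows each step.
y_exp = 1.0
for _ in range(steps):
    y_exp *= 1 + h * lam

# Crank-Nicolson: factor (1 + h*lam/2)/(1 - h*lam/2) = -1/9, so y decays.
y_cn = 1.0
for _ in range(steps):
    y_cn *= (1 + h * lam / 2) / (1 - h * lam / 2)

# The exact solution e^(-50 t) decays rapidly; only Crank-Nicolson tracks
# that qualitative behaviour at this step size.
```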
[[Monte Carlo method|Monte Carlo Methods]]:
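The classic illustration of the Monte Carlo idea is estimating π by random sampling of the unit square; the sample count and seed below are arbitrary choices:

```python
# Monte Carlo estimate of pi: the fraction of uniform points in the unit
# square that land inside the quarter circle approaches pi/4.
import random

random.seed(0)   # fixed seed so the sketch is reproducible
n = 100_000
inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_est = 4.0 * inside / n
```

The error of such estimates shrinks like 1/√n regardless of dimension, which is why Monte Carlo methods dominate in high-dimensional integration where grid-based rules become infeasible.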