A root-finding algorithm is a numerical method or algorithm for finding a value x such that f(x) = 0, for a given function f. Here, x is a single real number, called a root of f.
When x is a vector, algorithms for finding x such that f(x) = 0 are generally called "equation-solving algorithms". These algorithms are a generalization of root-finding and can be applied to both linear and non-linear equations. Some root-finding algorithms, such as Newton's method, generalize directly to systems of non-linear equations.
Root-finding algorithms are studied in numerical analysis.
Specific algorithms
The simplest root-finding algorithm is the bisection method: we start with two points a and b which bracket a root, and at every iteration we pick either the subinterval [a, c] or [c, b], where c = (a + b) / 2 is the midpoint of a and b. The algorithm always selects the subinterval that still contains a root. The bisection method is guaranteed to converge to a root; however, its progress is rather slow (the rate of convergence is only linear).
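To make this concrete, here is a minimal C sketch of the bisection method; the test function f(x) = x*x - 2, the bracketing interval and the tolerance are assumptions chosen only for this illustration.

#include <stdio.h>

/* Example function whose root we seek: f(x) = x*x - 2, root at sqrt(2). */
static double f (double x)
{
  return x * x - 2;
}

/* Bisection: a and b must bracket a root, i.e. f(a) and f(b) have
   opposite signs.  Halve the interval until it is shorter than tol. */
static double bisect (double a, double b, double tol)
{
  while (b - a > tol) {
    double c = (a + b) / 2;
    if (f (a) * f (c) <= 0)
      b = c;                    /* a root lies in [a, c] */
    else
      a = c;                    /* a root lies in [c, b] */
  }
  return (a + b) / 2;
}

int main ()
{
  printf ("%.12f\n", bisect (0, 2, 1e-12));   /* approximately 1.414213562373 */
  return 0;
}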
Newton's method, also called the Newton-Raphson method, linearizes the function f at the current approximation to the root. This yields the recurrence relation

x_{n+1} = x_n - f(x_n) / f'(x_n).
Newton's method may not converge if the starting point is too far from a root. However, when it does converge, it is faster than the bisection method: convergence is quadratic. Newton's method is also important because it readily generalizes to higher-dimensional problems.
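The recurrence can be turned into code directly. The following small C sketch applies Newton's method to the same test function as above; the derivative, starting point, tolerance and iteration cap are all choices made for the example.

#include <math.h>
#include <stdio.h>

/* Example: f(x) = x*x - 2 with derivative f'(x) = 2x; the positive root is sqrt(2). */
static double f  (double x) { return x * x - 2; }
static double fp (double x) { return 2 * x; }

/* One Newton step is x - f(x)/f'(x); iterate until the step is tiny. */
static double newton (double x0, double tol, int maxit)
{
  double x = x0;
  for (int i = 0; i < maxit; i++) {
    double step = f (x) / fp (x);
    x = x - step;
    if (fabs (step) < tol)
      break;
  }
  return x;
}

int main ()
{
  printf ("%.15f\n", newton (1.0, 1e-15, 50));
  return 0;
}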
If we replace the derivative in Newton's method with a finite difference through the last two iterates, we get the secant method. It is defined by the recurrence relation

x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
So, the secant method does not require the computation of a derivative, but the price is slower convergence (the order is approximately 1.6).
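A minimal C sketch of the secant method might look as follows; the test function, the two starting guesses, the tolerance and the iteration cap are assumptions for the example.

#include <math.h>
#include <stdio.h>

/* Example function: f(x) = x*x - 2. */
static double f (double x) { return x * x - 2; }

/* Secant method: replace f'(x_n) in Newton's method by the slope of the
   line through the last two iterates.  x0 and x1 are two starting guesses. */
static double secant (double x0, double x1, double tol, int maxit)
{
  for (int i = 0; i < maxit; i++) {
    double f0 = f (x0), f1 = f (x1);
    if (f1 == f0)
      break;                    /* flat secant line; cannot continue */
    double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
    x0 = x1;
    x1 = x2;
    if (fabs (x1 - x0) < tol)
      break;
  }
  return x1;
}

int main ()
{
  printf ("%.15f\n", secant (1.0, 2.0, 1e-15, 50));
  return 0;
}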
The false position method, also called the regula falsi method, is similar to the bisection method. However, instead of cutting the interval into two equal parts at every iteration, it cuts it at the point given by the formula for the secant method. The false position method inherits the bracketing robustness of the bisection method, but not, in general, the superlinear convergence of the secant method: one endpoint of the interval may stay fixed, in which case convergence is only linear, and modified variants of the method are used to address this.
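A minimal C sketch of the basic false position method, with the same assumed test function and an assumed tolerance and iteration limit, might look like this; note that for this example the right endpoint stays fixed, illustrating the stagnation mentioned above.

#include <math.h>
#include <stdio.h>

/* Example function: f(x) = x*x - 2. */
static double f (double x) { return x * x - 2; }

/* False position: like bisection, keep an interval [a, b] with f(a) and
   f(b) of opposite signs, but cut it at the secant point rather than the
   midpoint.  Stop when |f(c)| is small or after maxit iterations. */
static double false_position (double a, double b, double tol, int maxit)
{
  double c = a;
  for (int i = 0; i < maxit; i++) {
    c = b - f (b) * (b - a) / (f (b) - f (a));   /* secant point */
    if (fabs (f (c)) < tol)
      break;
    if (f (a) * f (c) <= 0)
      b = c;                                     /* root in [a, c] */
    else
      a = c;                                     /* root in [c, b] */
  }
  return c;
}

int main ()
{
  printf ("%.15f\n", false_position (0, 2, 1e-12, 100));
  return 0;
}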
The secant method also arises if one approximates the unknown function f by linear interpolation. When quadratic interpolation is used instead, one arrives at Muller's method. It converges faster than the secant method. A particular feature of this method is that the iterates x_n may become complex. This can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
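One step of inverse quadratic interpolation can be written down directly from the three most recent iterates: interpolate x as a quadratic function of y through those points and evaluate it at y = 0. The following C sketch iterates such steps on an assumed test function with assumed starting guesses; a production solver (such as Brent's method, below) would add further safeguards.

#include <math.h>
#include <stdio.h>

/* Example function: f(x) = x*x - 2. */
static double f (double x) { return x * x - 2; }

/* Inverse quadratic interpolation: fit x as a quadratic in y through the
   three points (x0, f(x0)), (x1, f(x1)), (x2, f(x2)) and evaluate at y = 0. */
static double iqi_step (double x0, double x1, double x2)
{
  double f0 = f (x0), f1 = f (x1), f2 = f (x2);
  return x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
       + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
       + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1));
}

int main ()
{
  double x0 = 1.0, x1 = 2.0, x2 = 1.5;   /* three starting guesses */
  for (int i = 0; i < 20; i++) {
    /* Stop if two function values coincide; the interpolation breaks down. */
    if (f (x0) == f (x1) || f (x0) == f (x2) || f (x1) == f (x2))
      break;
    double x3 = iqi_step (x0, x1, x2);
    x0 = x1;
    x1 = x2;
    x2 = x3;
    if (fabs (x2 - x1) < 1e-15)
      break;
  }
  printf ("%.15f\n", x2);   /* approaches sqrt(2) */
  return 0;
}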
Finally, Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
Other root-finding algorithms exist as well; one contributed example, a specialized square-root routine based on bit shifting, is described below.
A fast implementation of a square root algorithm using integer shift
By Ricardo Pereira
I have observed that the square root of a number is very close to the same number shifted right by half the number of bits in it.
Looking in the books, I saw that a square root always has about half the number of digits; a few tests confirmed the observation:
The square root of a number is very close to the same number with half of its bits removed.
With this information I constructed this algorithm: the approximation does not start at an arbitrary value, but exactly at the estimate given by this half-the-bits observation.
The second part of the algorithm is the usual Babylonian square root method. What makes it different is that it starts already very close to the final square root value, giving a good improvement in speed, and the iterations in the approximation loop will never number more than 7, a fixed bound no matter what the value of the number is.
Without this initial approximation the number of iterations is not fixed and varies depending on the number.
This new method gives at least a 50 percent speed improvement over the original method.
Comparing this method with the fsqrt instruction of the x86 processor, this implementation is only 5 times slower than the hardware implementation, and I think there is still much room for improvement in this square root algorithm.
//////////////////////////////////////////////////
#include <stdio.h>
/* sqrt2: square root by a bit-shift initial estimate followed by
   Babylonian (Heron) iteration. */
double
sqrt2 (double val)
{
  double temp2;               /* val divided by the current estimate */
  double temp3;               /* current estimate of the square root */
  double lastval;             /* previous estimate, to detect convergence */

  if (1 == val) {
    return 1;
  }

  /* Start at 2; below it will be replaced by a much closer estimate. */
  temp3 = 2;

  /* Make sure lastval differs from the first estimate. */
  lastval = -val;

  /* There is no reason to run the initial approximation for values less than 4. */
  if (4 < val) {
    unsigned int p, p2;
    int counter = 0;

    p = (unsigned int) val;
    p2 = p;

    /* Count the number of bits in the integer portion of the number. */
    while (p != 0) {
      p = p >> 1;
      counter++;
    }

    /* Remove half of the bits: this gives the initial estimate, already
       very close to the final square root. */
    counter = counter / 2;
    p2 = p2 >> counter;
    temp3 = p2;
  }

  /* Babylonian iteration: t = (t + val / t) / 2. */
  while (1) {
    temp2 = val / temp3;
    temp3 = (temp2 + temp3) * 0.5;

    /* If the estimate no longer changes, we have the best approximation
       possible for the double type precision. */
    if (lastval == temp3) {
      return temp2;
    }
    lastval = temp3;
  }
}
int
main ()
{
  double val = 2;
  val = sqrt2 (val);
  printf ("%.16f \n", val);
  val = 100000000;
  val = sqrt2 (val);
  printf ("%.16f \n", val);
  val = 555555555;
  val = sqrt2 (val);
  printf ("%.16f \n", val);
  val = 999999999;
  val = sqrt2 (val);
  printf ("%.16f \n", val);
  return 0;
}
//////////////////////////////////////////////////
For more information about this algorithm, visit: http://rspsoftware.esparta.8x.com.br/squareroot.htm or contact maquisistem@wln.com.br
Finding roots of polynomials
Much attention has been given to the special case in which the function f is a polynomial. Of course, the methods described in the previous section can be used; in particular, the derivative of a polynomial is easy to compute, so Newton's method is a viable candidate. But one can also choose a method that exploits the fact that f is a polynomial.
One possibility is to form the companion matrix of the polynomial. Since the eigenvalues of this matrix coincide with the roots of the polynomial, one can now use any eigenvalue algorithm to find the roots of the original polynomial.
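As a sketch of the first half of this approach, the following C fragment forms the companion matrix of a monic example polynomial; computing its eigenvalues would then require an eigenvalue routine from a linear algebra library, which is not included here. The polynomial and its coefficients are assumptions chosen for the example.

#include <stdio.h>

#define N 3   /* degree of the example polynomial */

/* Build the companion matrix of the monic polynomial
   x^N + c[N-1] x^(N-1) + ... + c[1] x + c[0].
   Its eigenvalues are exactly the roots of the polynomial. */
static void companion (const double c[N], double m[N][N])
{
  for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
      m[i][j] = 0;
  for (int i = 1; i < N; i++)
    m[i][i - 1] = 1;              /* ones on the subdiagonal */
  for (int i = 0; i < N; i++)
    m[i][N - 1] = -c[i];          /* last column holds the negated coefficients */
}

int main ()
{
  /* Example: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3), so c = {-6, 11, -6}. */
  double c[N] = { -6, 11, -6 };
  double m[N][N];
  companion (c, m);
  for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++)
      printf ("%8.2f", m[i][j]);
    printf ("\n");
  }
  return 0;
}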
Another possibility is Laguerre's method, which is a rather complicated method that converges very fast. In fact, it exhibits cubic convergence for simple roots, beating the quadratic convergence displayed by Newton's method.
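A compact C sketch of Laguerre's method for a polynomial with real coefficients might look as follows. Complex arithmetic is used because the square root in the update can be complex; the example polynomial, starting point, tolerances and iteration cap are assumptions made for the illustration.

#include <complex.h>
#include <stdio.h>

/* Laguerre's method for the polynomial a[0] x^n + a[1] x^(n-1) + ... + a[n]. */
static double complex laguerre (const double *a, int n, double complex x)
{
  for (int it = 0; it < 100; it++) {
    /* Evaluate p, p' and p'' at x with Horner's scheme. */
    double complex p = a[0], dp = 0, ddp = 0;
    for (int i = 1; i <= n; i++) {
      ddp = ddp * x + dp;
      dp  = dp  * x + p;
      p   = p   * x + a[i];
    }
    ddp *= 2;

    if (cabs (p) < 1e-14)        /* already at a root */
      break;

    /* Laguerre update: step = n / (G +/- sqrt((n-1)(n H - G^2))),
       choosing the sign that gives the larger denominator. */
    double complex G = dp / p;
    double complex H = G * G - ddp / p;
    double complex s = csqrt ((double) (n - 1) * ((double) n * H - G * G));
    double complex d1 = G + s, d2 = G - s;
    double complex denom = (cabs (d1) >= cabs (d2)) ? d1 : d2;
    double complex step = (double) n / denom;

    x -= step;
    if (cabs (step) < 1e-14)
      break;
  }
  return x;
}

int main ()
{
  /* Example: x^3 - 6x^2 + 11x - 6, with roots 1, 2 and 3. */
  double a[] = { 1, -6, 11, -6 };
  double complex r = laguerre (a, 3, 0.0);
  printf ("%.12f + %.12fi\n", creal (r), cimag (r));
  return 0;
}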
If the polynomial has rational coefficients and only rational roots are of interest, then Ruffini's method can also be used.
In any case, it should be borne in mind that the problem of finding roots can be ill-conditioned, as the example of Wilkinson's polynomial shows.