Jenkins–Traub algorithm

Given a polynomial <math>P(z)</math> with complex coefficients, the algorithm computes approximations to the <math>n</math> zeros <math>\alpha_1,\alpha_2,\dots,\alpha_n</math> of <math>P(z)</math>.
There is a variation of the Jenkins-Traub algorithm which is faster if the coefficients are real. The Jenkins-Traub algorithm has stimulated considerable research on theory and software for methods of this type.
 
==Overview==
The Jenkins-Traub algorithm is a three-stage process for calculating the zeros of a polynomial with complex coefficients. See Jenkins and Traub [http://www.springerlink.com/content/q6w17w30035r2152/?p=ae17d723839045be82d270b45363625f&pi=1 A Three-Stage Variable-Shift Iteration for Polynomial Zeros and Its Relation to Generalized Rayleigh Iteration].<ref>Jenkins, M. A. and Traub, J. F. (1970), [http://www.springerlink.com/content/q6w17w30035r2152/?p=ae17d723839045be82d270b45363625f&pi=1 A Three-Stage Variable-Shift Iteration for Polynomial Zeros and Its Relation to Generalized Rayleigh Iteration], Numer. Math. 14, 252-263.</ref>
A description can also be found in Ralston and Rabinowitz<ref>Ralston, A. and Rabinowitz, P. (1978), A First Course in Numerical Analysis, 2nd ed., McGraw-Hill, New York.</ref> p. 383.
The algorithm is similar in spirit to the two-stage algorithm studied by Traub; see [http://links.jstor.org/sici?sici=0025-5718(196601)20%3A93%3C113%3AACOGCI%3E2.0.CO%3B2-3 A Class of Globally Convergent Iteration Functions for the Solution of Polynomial Equations].<ref>Traub, J. F. (1966), [http://links.jstor.org/sici?sici=0025-5718(196601)20%3A93%3C113%3AACOGCI%3E2.0.CO%3B2-3 A Class of Globally Convergent Iteration Functions for the Solution of Polynomial Equations], Math. Comp., 20(93), 113-138.</ref>
 
 
A sequence of polynomials <math>H^{(\lambda+1)}(z)</math> is generated, <math>\lambda=0,1,\dots,L-1</math>.
*Stage Three: Variable-Shift Process.
The <math>H^{(\lambda)}(z)</math> are now generated using the variable shifts <math>s_{\lambda}</math>, which are defined by
 
::::<math>s_{\lambda+1}=s_\lambda- \frac{P(s_\lambda)}{\bar H^{(\lambda+1)}(s_\lambda)}, \quad \lambda=L,L+1,\dots,</math>
where <math>\bar H^{(\lambda+1)}(z)</math> is <math>H^{(\lambda+1)}(z)</math> divided by its leading coefficient.
 
It can be shown that, provided <math>L</math> is chosen sufficiently large, <math>s_{\lambda}</math> always converges to a zero of <math>P</math>. After an approximate zero has been found, the degree of <math>P</math> is reduced by one by deflation, and the algorithm is repeated on the new polynomial until all the zeros have been computed.
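
A minimal NumPy sketch of the Stage Three update may make the iteration concrete. It assumes that Stages One and Two have already supplied a starting shift <math>s_L</math> and polynomial <math>H^{(L)}</math>, uses the H-polynomial recurrence <math>H^{(\lambda+1)}(z)=\bigl(H^{(\lambda)}(z)-\tfrac{H^{(\lambda)}(s_\lambda)}{P(s_\lambda)}P(z)\bigr)/(z-s_\lambda)</math>, and omits the scaling and convergence safeguards of the published algorithm; the function names and the tolerance below are illustrative only.

<syntaxhighlight lang="python">
import numpy as np

def h_step(p, h, s):
    """One H-polynomial update with shift s.

    p and h are coefficient arrays, highest degree first.  The numerator
    H(z) - (H(s)/P(s)) P(z) vanishes at z = s, so the division by (z - s)
    is exact up to rounding."""
    c = np.polyval(h, s) / np.polyval(p, s)
    numerator = np.polysub(h, c * p)
    h_next, _ = np.polydiv(numerator, np.array([1.0, -s]))
    return h_next

def stage_three(p, h, s, tol=1e-14, max_iter=100):
    """Variable-shift iteration s <- s - P(s)/Hbar(s) with Hbar monic."""
    for _ in range(max_iter):
        h = h_step(p, h, s)                    # H^(lambda+1) built with shift s_lambda
        h_bar = h / h[0]                       # divide by the leading coefficient
        s_next = s - np.polyval(p, s) / np.polyval(h_bar, s)
        if abs(s_next - s) <= tol * max(abs(s_next), 1.0):
            return s_next, h
        s = s_next
    return s, h

# After accepting a zero, deflate and repeat on the quotient polynomial:
# p, _ = np.polydiv(p, np.array([1.0, -zero]))
</syntaxhighlight>
A natural starting value for the recurrence is <math>H^{(0)}(z)=P^{\prime}(z)</math> (for example <code>np.polyder(p)</code>), run through the earlier stages before Stage Three begins.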
 
The algorithm converges for any distribution of zeros. Furthermore, the convergence is faster than the quadratic convergence of Newton-Raphson iteration.
 
==What Gives the Algorithm its Power?==
What gives the Jenkins-Traub algorithm its power? Let us compare it with Newton-Raphson iteration
 
::::<math>z_{i+1}=z_i - \frac{P(z_i)}{P^{\prime}(z_i)}.</math>

In contrast, the Stage Three iteration

::::<math>s_{\lambda+1}=s_\lambda- \frac{P(s_\lambda)}{\bar H^{(\lambda+1)}(s_\lambda)}</math>
 
is precisely a Newton-Raphson iteration performed on certain rational functions. More precisely, Newton-Raphson is being performed on a sequence of rational functions <math>P(z)/H^{(\lambda)}(z)</math>. For <math>\lambda</math> sufficiently large, <math>P(z)/H^{(\lambda)}(z)</math> is as close as desired to a first-degree polynomial <math>z-\alpha_1</math>, where <math>\alpha_1</math> is one of the zeros of <math>P</math>. Even though Stage 3 is precisely a Newton-Raphson iteration, no differentiation is performed.
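
For contrast, here is a bare Newton-Raphson step in the same NumPy style as the sketch above; unlike the Stage Three step it evaluates the derivative <math>P^{\prime}</math> at every iteration (the example polynomial and starting point are arbitrary illustrations):

<syntaxhighlight lang="python">
import numpy as np

def newton_step(p, z):
    """One Newton-Raphson step z - P(z)/P'(z); requires the derivative of P."""
    return z - np.polyval(p, z) / np.polyval(np.polyder(p), z)

# p(z) = z^2 - 2, starting at z = 1.5: two steps already give about 1.41421
p = np.array([1.0, 0.0, -2.0])
z = 1.5
for _ in range(2):
    z = newton_step(p, z)
</syntaxhighlight>
In the Stage Three step the place of <math>P^{\prime}(s_\lambda)</math> is taken by <math>\bar H^{(\lambda+1)}(s_\lambda)</math>, which is why the derivative of <math>P</math> never appears in that iteration.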
 
==Real Coefficients==
The Jenkins-Traub algorithm described earlier works for polynomials with complex coefficients. The same authors also created a three-stage algorithm for polynomials with real coefficients. See Jenkins and Traub [http://links.jstor.org/sici?sici=0036-1429%28197012%297%3A4%3C545%3AATAFRP%3E2.0.CO%3B2-J A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration].<ref>Jenkins, M. A. and Traub, J. F. (1970), [http://links.jstor.org/sici?sici=0036-1429%28197012%297%3A4%3C545%3AATAFRP%3E2.0.CO%3B2-J A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration], SIAM J. Numer. Anal., 7(4), 545-566.</ref> The algorithm finds either a linear or a quadratic factor while working entirely in real arithmetic. If the complex and the real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast. The real algorithm always converges, and the rate of convergence is greater than second order.
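
The following sketch only illustrates what a real quadratic factor is; it is not the Jenkins-Traub real algorithm itself, and the helper names are hypothetical. A pair of complex-conjugate zeros <math>\alpha,\bar\alpha</math> corresponds to the real factor <math>z^2-2\,\operatorname{Re}(\alpha)\,z+|\alpha|^2</math>, and deflating by that factor keeps every coefficient real.

<syntaxhighlight lang="python">
import numpy as np

def real_quadratic_factor(alpha):
    """Real factor z^2 - 2 Re(alpha) z + |alpha|^2 shared by alpha and conj(alpha)."""
    return np.array([1.0, -2.0 * alpha.real, abs(alpha) ** 2])

def deflate(p, factor):
    """Divide out a real linear or quadratic factor (coefficients, highest degree first)."""
    q, r = np.polydiv(p, factor)
    return q              # the remainder r should be negligible if the factor is accurate

# Example: zeros 2 and 1 +/- 1j give p(z) = z^3 - 4 z^2 + 6 z - 4
p = np.poly([2.0, 1.0 + 1.0j, 1.0 - 1.0j]).real
q = deflate(p, real_quadratic_factor(1.0 + 1.0j))   # q is approximately [1, -2], i.e. z - 2
</syntaxhighlight>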
 
==A Connection with the Shifted QR Algorithm==
There is a surprising connection with the shifted [[QR algorithm]] for computing matrix eigenvalues. See Dekker and Traub [http://www.sciencedirect.com/science?_ob=ArticleListURL&_method=list&_ArticleListID=596538089&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=6062eb1bdb37e355f732a580c6faee1a The shifted QR algorithm for Hermitian matrices].<ref>Dekker, T. J. and Traub, J. F. (1971), [http://www.sciencedirect.com/science?_ob=ArticleListURL&_method=list&_ArticleListID=596538089&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=6062eb1bdb37e355f732a580c6faee1a The shifted QR algorithm for Hermitian matrices], Lin. Algebra Appl., 4(2), 137-154.</ref> Again the shifts may be viewed as Newton-Raphson iteration on a sequence of rational functions converging to a first degree polynomial.
 
==Software and Testing==
The methods have been extensively tested by many people. As predicted, they enjoy faster than quadratic convergence for all distributions of zeros. They have been described as ''practically a standard in black-box polynomial root finders''; see Press et al., Numerical Recipes,<ref>Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (2002), Numerical Recipes in C++: The Art of Scientific Computing, 2nd ed., Cambridge University Press, New York.</ref> p. 380.
 
However, there are polynomials which can cause loss of precision, as illustrated by the following example.
 
The polynomial has all its zeros lying on two half-circles of different radii. Wilkinson recommends that, for stable deflation, the smaller zeros be computed first. The second-stage shifts are therefore chosen so that the zeros on the smaller half-circle are found first. After deflation, the polynomial with the zeros on the remaining half-circle is known to be ill-conditioned if the degree is large; see Wilkinson,<ref>Wilkinson, J. H. (1963), Rounding Errors in Algebraic Processes, Prentice Hall, Englewood Cliffs, N.J.</ref> p. 64. The original polynomial was of degree 60 and suffered severe deflation instability.
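
A short sketch of how a test polynomial of this kind can be constructed follows; the radii (1 and 10), the even split of the 60 zeros, and the particular angles are illustrative assumptions rather than the values used in the original experiment:

<syntaxhighlight lang="python">
import numpy as np

# 30 zeros on each of two upper half-circles of different radii (values assumed).
angles = np.pi * (np.arange(30) + 0.5) / 30
zeros = np.concatenate([1.0 * np.exp(1j * angles),    # smaller half-circle
                        10.0 * np.exp(1j * angles)])  # larger half-circle
p = np.poly(zeros)                                     # degree-60 test polynomial

# Deflation by one computed zero; per Wilkinson, taking the smaller zeros first
# keeps the deflated polynomials better conditioned.
zero = zeros[0]
p_deflated, _ = np.polydiv(p, np.array([1.0, -zero]))
</syntaxhighlight>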
 
==References==