{{Short description|Root-finding algorithm for polynomials}}
The '''Jenkins–Traub algorithm for polynomial zeros''' is a fast globally convergent iterative [[Root-finding algorithms#Roots of polynomials|polynomial root-finding]] method published in 1970 by [[Michael A. Jenkins]] and [[Joseph F. Traub]]. They gave two variants, one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm. The latter is "practically a standard in black-box polynomial root-finders".<ref>Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (2007), Numerical Recipes: The Art of Scientific Computing, 3rd ed., Cambridge University Press, page 470.</ref>
To that end, a sequence of so-called ''H'' polynomials is constructed. These polynomials are all of degree ''n'' − 1 and are supposed to converge to the factor <math>\bar H(X)</math> of ''P''(''X'') containing (the linear factors of) all the remaining roots. The sequence of ''H'' polynomials occurs in two variants, an unnormalized variant that allows easy theoretical insights and a normalized variant of <math>\bar H</math> polynomials that keeps the coefficients in a numerically sensible range.
The construction of the ''H'' polynomials <math>\left(H^{(\lambda)}(z)\right)_{\lambda=0,1,2,\dots}</math> is guided by a sequence of [[complex number]]s <math>(s_\lambda)_{\lambda=0,1,2,\dots}</math> called shifts. Starting from the derivative of ''P'', each step divides out the shifted linear factor exactly:
<math display="block">
H^{(0)}(z)=P^\prime(z),\qquad
H^{(\lambda+1)}(z)=\frac{1}{z-s_\lambda}\left(H^{(\lambda)}(z)-\frac{H^{(\lambda)}(s_\lambda)}{P(s_\lambda)}\,P(z)\right).
</math>
Since the numerator in the second formula vanishes at <math>z=s_\lambda</math>, the division is exact and every <math>H^{(\lambda)}</math> is again a polynomial of degree ''n'' − 1.
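To make the recursion concrete, here is a minimal numpy sketch of one possible implementation (the helper name <code>h_step</code> is made up for this illustration; this is not the published CPOLY/RPOLY code). With the stage-one choice <math>s_\lambda=0</math> on a cubic with roots 1, 2, 3, the normalized ''H'' approaches the factor <math>P(z)/(z-1)</math> belonging to the smallest root, and the Newton-like quotient <math>-P(0)/\bar H(0)</math> recovers that root:

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import polynomial as npoly

def h_step(P, H, s):
    # One update H -> (H(z) - (H(s)/P(s)) * P(z)) / (z - s).
    # The numerator vanishes at z = s, so the division is exact and
    # the degree stays at most deg(P) - 1 (coefficients ascending).
    num = npoly.polysub(H, (npoly.polyval(s, H) / npoly.polyval(s, P)) * np.asarray(P))
    quotient, _ = npoly.polydiv(num, [-s, 1.0])
    return quotient

P = np.array([-6.0, 11.0, -6.0, 1.0])   # P(z) = (z-1)(z-2)(z-3)
H = npoly.polyder(P)                    # H^(0) = P'

for _ in range(25):                     # stage one: every shift is s = 0
    H = h_step(P, H, 0.0)

H = H / H[-1]                           # normalized (monic) H-bar
print(H)                                # ~ [ 6. -5.  1.], i.e. (z-2)(z-3)
print(-npoly.polyval(0.0, P) / npoly.polyval(0.0, H))   # ~ 1.0, the smallest root
</syntaxhighlight>

The unnormalized iterates shrink or grow geometrically with <math>\lambda</math>, which is why the algorithm works with the normalized <math>\bar H</math> polynomials in practice.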
<math display="block">
\left(
\frac{|\alpha_1-s_\lambda|^2}{|\alpha_2-s_\lambda|}
\right)
</math> giving rise to a higher than quadratic convergence order of <math>\phi^2=1+\phi\approx 2.618</math>.
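The numerical value follows from the defining identity of the [[golden ratio]] <math>\phi</math>:
<math display="block">\phi=\tfrac{1}{2}\left(1+\sqrt{5}\right)\approx 1.618\,,\qquad \phi^2=\phi+1\approx 2.618\,.</math>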
=== Interpretation as inverse power iteration ===
This maps polynomials of degree at most ''n'' − 1 to polynomials of degree at most ''n'' − 1. The eigenvalues of this map are the roots of ''P''(''X''), since the eigenvector equation reads
<math display="block">0 = (M_X-\alpha\cdot id)(H)=((X-\alpha)\cdot H) \bmod P\,,</math>
which implies that <math>(X-\alpha)\cdot H(X)=c\cdot P(X)</math> for some constant <math>c\neq 0</math>. Evaluating at <math>X=\alpha</math> shows that <math>\alpha</math> is a root of ''P''(''X''), and the corresponding eigenvector is <math>H(X)=c\cdot P(X)/(X-\alpha)</math>. Writing <math>P(X)=X^n+\sum_{m=0}^{n-1}a_mX^m</math> and <math>H(X)=\sum_{m=0}^{n-1}H_mX^m</math>, the map evaluates to
<math display="block"> M_X(H) = \sum_{m=0}^{n-1}H_mX^{m+1}-H_{n-1}\left(X^n+\sum_{m=0}^{n-1}a_mX^m\right) = \sum_{m=1}^{n-1}(H_{m-1}-a_{m}H_{n-1})X^m-a_0H_{n-1}\,,</math>
the resulting transformation matrix is
<math display="block">
A=\begin{pmatrix}
0 & 0 & \cdots & 0 & -a_0\\
1 & 0 & \cdots & 0 & -a_1\\
0 & 1 & \cdots & 0 & -a_2\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 1 & -a_{n-1}
\end{pmatrix},
</math>
the [[companion matrix]] of ''P''(''X''). The three stages of the algorithm amount to [[inverse iteration|inverse power iteration]] with this matrix under three different shift strategies: no shift, a fixed shift, and a generalized Rayleigh shift.
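As an illustration of this correspondence, here is a minimal numpy sketch (the helper names <code>companion</code> and <code>inverse_power</code> are made up for this example; this is not Jenkins and Traub's code). It builds the matrix above from the coefficients <math>a_i</math>, confirms that its eigenvalues are the roots of ''P'', and lets a shifted inverse power iteration converge to the root nearest the shift:

<syntaxhighlight lang="python">
import numpy as np

def companion(a):
    # Matrix of H -> (X*H) mod P in the monomial basis, for monic
    # P(X) = X^n + a[n-1] X^(n-1) + ... + a[1] X + a[0].
    n = len(a)
    A = np.zeros((n, n), dtype=complex)
    A[1:, :-1] = np.eye(n - 1)     # X^m -> X^(m+1) for m < n - 1
    A[:, -1] = -np.asarray(a)      # X^n reduces to -a_0 - ... - a_{n-1} X^(n-1)
    return A

def inverse_power(A, s, iters=50):
    # Shifted inverse power iteration: converges to the eigenvector of the
    # eigenvalue closest to s; the Rayleigh quotient estimates that eigenvalue.
    v = np.ones(A.shape[0], dtype=complex)
    for _ in range(iters):
        v = np.linalg.solve(A - s * np.eye(A.shape[0]), v)
        v = v / np.linalg.norm(v)
    return v.conj() @ A @ v

a = [-6.0, 11.0, -6.0]             # P(X) = X^3 - 6 X^2 + 11 X - 6
A = companion(a)
print(np.sort(np.linalg.eigvals(A).real))   # ~ [1. 2. 3.], the roots of P
print(inverse_power(A, s=1.9).real)         # ~ 2.0, the root nearest the shift
</syntaxhighlight>

Keeping the shift fixed, as in this sketch, gives only linear convergence; updating it in every step from the current iterate, as stage three does, is what accelerates the convergence.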
==Software and testing==
The methods have been extensively tested by many people.{{Who|date=December 2021}} As predicted, they enjoy faster than quadratic convergence for all distributions of zeros.
However, there are polynomials which can cause loss of precision.<ref>{{Cite web |date=8 August 2005 |title=William Kahan Oral history interview by Thomas Haigh |url=http://history.siam.org/oralhistories/kahan.htm}}</ref>
==References==
{{Reflist}}
{{DEFAULTSORT:Jenkins-Traub Algorithm}}
[[Category:Numerical analysis]]
[[Category:Root-finding algorithms]]