:giving rise to a higher than quadratic convergence order of <math>\phi^2=1+\phi\approx 2.61</math>, where <math>\phi=\tfrac12(1+\sqrt5)</math> is the [[golden ratio]].
==== As inverse power iteration ====
All stages of the Jenkins–Traub complex algorithm may be represented as the linear algebra problem of determining the eigenvalues of a special matrix. This matrix is the coordinate representation of a linear map in the ''n''-dimensional space of polynomials of degree ''n-1'' or less. The principal idea of this map is to interpret the factorization
:<math>P(X)=(X-\alpha_1)\cdot P_1(X)</math>
with a root <math>\alpha_1</math> and a remaining factor <math>P_1(X)</math> of degree ''n-1'' as the eigenvector equation of the map that multiplies by the variable ''X'' and then reduces modulo ''P(X)'',
:<math>M_X(H)=(X\cdot H)\bmod P\,.</math>
This maps polynomials of degree at most ''n-1'' to polynomials of degree at most ''n-1'', and its eigenvalues are precisely the roots of ''P(X)''. In the monomial basis, <math>M_X</math> is represented by the [[companion matrix]] of ''P(X)'',
:<math>M_X=\begin{pmatrix}
0 & 0 & \dots & 0 & -P_0 \\
1 & 0 & \dots & 0 & -P_1 \\
0 & 1 & \dots & 0 & -P_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \dots & 1 & -P_{n-1} \\
\end{pmatrix}\,,</math>
where <math>P(X)=P_0+P_1X+\dots+P_{n-1}X^{n-1}+X^n</math> is assumed to be monic.
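The claim that the eigenvalues of this map are the roots of ''P(X)'' can be checked directly on coefficient lists. The following is an illustrative pure-Python sketch (the function name <code>mulX_mod</code> and the sample cubic are not from any reference implementation): applying multiplication-by-''X'' modulo ''P'' to <math>H_j(X)=P(X)/(X-\alpha_j)</math> scales it by the root <math>\alpha_j</math>.

```python
def mulX_mod(p, h):
    """M_X(H) = (X*H) mod P, on ascending coefficient lists.
    p: monic P of degree n (n+1 coefficients); h: H of degree <= n-1."""
    xh = [0.0] + list(h)                      # multiply by X
    lc = xh[-1]                               # coefficient of X^n
    return [xh[i] - lc * p[i] for i in range(len(p) - 1)]  # reduce mod P

p = [-6.0, 11.0, -6.0, 1.0]                   # P(X) = (X-1)(X-2)(X-3)
h = [3.0, -4.0, 1.0]                          # H(X) = P(X)/(X-2) = (X-1)(X-3)
print(mulX_mod(p, h))                         # -> [6.0, -8.0, 2.0] = 2 * H
```

The output is exactly twice the input, confirming that <math>H</math> is an eigenvector of <math>M_X</math> for the eigenvalue 2.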
To this matrix the [[inverse power iteration]] is applied in the three stages of the algorithm, in three variants: no shift, constant shift, and generalized Rayleigh shift. It is more efficient to perform the linear algebra operations in polynomial arithmetic rather than by matrix operations; however, the properties of the inverse power iteration remain the same.
''Stage 1:'' Here the inverse power iteration without shift
:<math>H^{k+1}=M_X{}^{-1}(H^k)</math> resp. <math>\bar H^{k+1}=q_k\cdot M_X{}^{-1}(\bar H^k)</math>,
is used. In the second variant, <math>\bar H^k</math> is a normalized multiple of <math>H^k</math>, with the normalization fixing the norm of the vector, or some coordinate of it, to be 1. This variant approximates the smallest eigenvalue, that is, the smallest root of ''P(X)''. The convergence is of linear order with a factor equal to the quotient of the smallest (in absolute value) and the next-to-smallest eigenvalue. This only works if there is a unique eigenvalue of minimal distance to zero.
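In polynomial arithmetic, one no-shift step amounts to subtracting the multiple of ''P'' that cancels the constant term, then dividing exactly by ''X''. A minimal sketch (the helper name <code>stage1_step</code> and the sample cubic are illustrative, not from the reference implementation):

```python
# One no-shift step  H <- (H - (H(0)/P(0)) * P) / X, realizing
# H^{k+1} = M_X^{-1}(H^k): subtracting (H(0)/P(0)) * P makes the
# constant term vanish, so the division by X is exact.

def stage1_step(p, h):
    """p: monic P of degree n, ascending coefficients, P(0) = p[0] != 0;
    h: H of degree <= n-1 (n ascending coefficients)."""
    c = h[0] / p[0]
    g = [hi - c * pi for hi, pi in zip(list(h) + [0.0], p)]
    return g[1:]                          # g[0] == 0, so this divides by X

p = [-6.0, 11.0, -6.0, 1.0]               # P(X) = (X-1)(X-2)(X-3)
h = [11.0, -12.0, 3.0]                    # H^0 = P'
for _ in range(60):
    h = stage1_step(p, h)
hbar = [hi / h[-1] for hi in h]           # normalize by the leading coefficient
# hbar is now close to P(X)/(X-1) = 6 - 5X + X^2, exposing the smallest root 1
```

Each step damps the component toward the root <math>\alpha_j</math> by the factor <math>1/\alpha_j</math>, so the normalized iterate drifts toward the eigenvector belonging to the smallest root.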
''Stage 2:'' This step uses the inverse power iteration in the variant with a constant shift ''s''. In consequence, it computes the smallest eigenvalue of <math>M_X-s\cdot id</math> by
:<math>H^{k+1}=(M_X-s\cdot id)^{-1}(H^k)\,,</math> resp. <math>\bar H^{k+1}=q_k\cdot (M_X-s\cdot id)^{-1}(\bar H^k)\,.</math>
If the shift ''s'' is close to the smallest eigenvalue <math>\alpha_1</math>, then the smallest eigenvalue of the shifted matrix is <math>\alpha_1-s</math> which is close to zero. The convergence has an order of
:<math>\alpha_1-s-q_k=O\left(\left(\tfrac{|\alpha_1-s|}{|\alpha_2-s|}\right)^k\right)</math>
which is the faster, the smaller the quotient <math>\tfrac{|\alpha_1-s|}{|\alpha_2-s|}</math> is, where <math>|\alpha_2-s|=\min_{j=2,\dots,n}|\alpha_j-s|</math>, so the method works best if <math>|\alpha_1-s|\ll|\alpha_2-s|</math>.
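The fixed-shift step can be sketched in polynomial arithmetic (using the explicit division formula derived later in this section; names and the sample cubic are illustrative): each step divides out the factor <math>(X-s)</math> by synthetic division, and a root estimate is read off from the normalized iterate.

```python
# Fixed-shift iteration near s = 0.9: each step applies
# H <- (H - (H(s)/P(s)) * P) / (X - s), i.e. (M_X - s*id)^{-1}
# in polynomial form.

def horner(c, x):
    """Evaluate a polynomial given by descending coefficients."""
    v = 0.0
    for ci in c:
        v = v * x + ci
    return v

def fixed_shift_step(p, h, s):
    """p: monic P of degree n, h: H of degree <= n-1, both descending."""
    c = horner(h, s) / horner(p, s)
    g = [-c * p[0]] + [hi - c * pi for hi, pi in zip(h, p[1:])]   # H - c*P
    q, acc = [], 0.0
    for gi in g[:-1]:                 # synthetic division by (X - s);
        acc = gi + s * acc            # the remainder g(s) vanishes
        q.append(acc)
    return q

p = [1.0, -6.0, 11.0, -6.0]           # P(X) = (X-1)(X-2)(X-3), descending
h = [3.0, -12.0, 11.0]                # H^0 = P'
s = 0.9                               # shift near the smallest root 1
for _ in range(25):
    h = fixed_shift_step(p, h, s)
    h = [hi / h[0] for hi in h]       # normalized \bar H, leading coefficient 1
root_estimate = s - horner(p, s) / horner(h, s)
# the error of root_estimate shrinks roughly by |0.9-1|/|0.9-2| per step
```

With the shift at 0.9, the contraction factor is about <math>0.1/1.1\approx 0.09</math> per step, so 25 steps drive the estimate of the root 1 to machine precision.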
''Stage 3:'' Here the shift is updated every step using
:<math>H^{k+1}=(M_X-s_k\cdot id)^{-1}(H^k)\,,</math> and <math>H_1^{k+1}=(M_X-s_k\cdot id)^{-1}(H^{k+1})\,,</math>
to obtain an update to <math>s_k</math> from the growth factor between <math>H^{k+1}</math> and <math>H_1^{k+1}</math>,
:<math>s_{k+1}=s_k+\tfrac{L(H^{k+1})}{L(H_1^{k+1})}</math>
with some linear functional ''L''. If <math>\delta_k</math> denotes the distance of the normalized iteration vector <math>\bar H^k</math> from the eigenspace of <math>\alpha_1</math>, then one gets the following recursions for the convergence speed,
:<math>\alpha_1-s_{k+1}=O(\delta_k\,|\alpha_1-s_k|^2)</math> and <math>\delta_{k+1}=O(\delta_k\cdot |\alpha_1-s_k|)</math>
resulting in an asymptotic convergence order of <math>\phi^2=1+\phi\approx 2.61</math>, where <math>\phi=\tfrac{1+\sqrt{5}}2</math> is the [[golden ratio]].
To solve the polynomial identity representing the inverse power iteration, one has to find a factor <math>Q(X)</math> such that
:<math>H^k(X)=(X-s_k)\cdot H^{k+1}(X) +Q(X)\cdot P(X)\,.</math>
Evaluating at <math>X=s_k</math> one arrives at <math>H^k(s_k)=Q(s_k)P(s_k)</math>, and by comparing degrees one concludes that <math>Q(X)=Q(s_k)</math> is a constant polynomial. The equation is then solved by exact polynomial division, giving
:<math>H^{k+1}(X)
=\frac1{X-s_k}\,(H^k(X)-Q(s_k)\,P(X))
=\frac1{X-s_k}\,\left(H^k(X)-\tfrac{H^k(s_k)}{P(s_k)}\,P(X)\right)\,,
</math>
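This identity is easy to verify numerically. The following sketch (sample cubic, shift, and variable names chosen for illustration) computes <math>Q(s_k)</math> and <math>H^{k+1}</math> as above and checks <math>H^k(x)=(x-s_k)H^{k+1}(x)+Q(s_k)P(x)</math> at an arbitrary point:

```python
# Check H^k(X) = (X - s_k) * H^{k+1}(X) + Q(s_k) * P(X)
# for a sample cubic and H of degree 2 (descending coefficients).

def horner(c, x):
    v = 0.0
    for ci in c:
        v = v * x + ci
    return v

p = [1.0, -6.0, 11.0, -6.0]            # P(X) = (X-1)(X-2)(X-3)
h = [3.0, -12.0, 11.0]                 # some H^k of degree 2
s = 0.9
Q = horner(h, s) / horner(p, s)        # constant quotient Q(s_k) = H^k(s_k)/P(s_k)
g = [-Q * p[0]] + [hi - Q * pi for hi, pi in zip(h, p[1:])]   # H^k - Q*P
h_next, acc = [], 0.0
for gi in g[:-1]:                      # exact synthetic division by (X - s)
    acc = gi + s * acc
    h_next.append(acc)
x = 2.7                                # arbitrary test point
lhs = horner(h, x)
rhs = (x - s) * horner(h_next, x) + Q * horner(p, x)
print(abs(lhs - rhs))                  # tiny rounding error only
```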
which is the unnormalized iteration for the polynomial sequence <math>(H^k(X))_{k\in\N}</math>. Comparing the leading coefficients (LC) of this sequence one finds that
:<math>LC(H^{k+1})=-Q(s_k)
=-\frac{H^k(s_k)}{P(s_k)}
=-LC(H^k)\,\tfrac{\bar H^k(s_k)}{P(s_k)}\,,
</math>
such that the iteration for the normalized ''H'' polynomials reads as
:<math>\bar H^{k+1}(X)
=\frac1{X-s_k}\,(P(X)-H^k(X)/Q(s_k))
=\frac1{X-s_k}\,\left(P(X)-\tfrac{P(s_k)}{\bar H^k(s_k)}\,\bar H^k(X)\right)\,.
</math>
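If the normalization is chosen as leading coefficient 1, this update has a pleasant consequence: since ''P'' is monic of degree ''n'' and <math>\bar H</math> has degree ''n-1'', the numerator is monic of degree ''n'' and the quotient is again monic, so the normalization maintains itself. A sketch under that assumption (names and sample cubic illustrative):

```python
# Normalized update  \bar H <- (P - (P(s)/\bar H(s)) * \bar H) / (X - s);
# the result is again monic of degree n-1.

def horner(c, x):
    v = 0.0
    for ci in c:
        v = v * x + ci
    return v

def normalized_step(p, hbar, s):
    """p: monic P, descending; hbar: monic \\bar H of degree n-1, descending."""
    c = horner(p, s) / horner(hbar, s)
    g = [p[0]] + [pi - c * hi for pi, hi in zip(p[1:], hbar)]   # P - c*\bar H
    q, acc = [], 0.0
    for gi in g[:-1]:                  # exact synthetic division by (X - s)
        acc = gi + s * acc
        q.append(acc)
    return q

p = [1.0, -6.0, 11.0, -6.0]            # P(X) = (X-1)(X-2)(X-3)
hbar = [1.0, -4.0, 11.0 / 3.0]         # monic multiple of P'
s = 0.9
for _ in range(30):
    hbar = normalized_step(p, hbar, s)
# hbar approaches the monic factor P(X)/(X-1) = X^2 - 5X + 6
```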
In the additional evaluation of the third stage, only the ratio of the leading coefficients is needed,
:<math>LC(H_1^{k+1}(X))
=-\tfrac{H^{k+1}(s_k)}{P(s_k)}
=-LC(H^{k+1}(X))\,\tfrac{\bar H^{k+1}(s_k)}{P(s_k)}\,,</math>
from which the update formula
:<math>s_{k+1}
=s_k+\tfrac{LC(H^{k+1}(X))}{LC(H_1^{k+1}(X))}
=s_k-\tfrac{P(s_k)}{\bar H^{k+1}(s_k)}
</math> results.
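Combining the normalized H-polynomial update with this shift update gives a compact sketch of the third stage (again with monic normalization; the loop structure and names are illustrative, not the reference implementation):

```python
# Stage 3 in polynomial form: alternate the normalized H-update with
# the shift update  s <- s - P(s)/\bar H(s).

def horner(c, x):
    v = 0.0
    for ci in c:
        v = v * x + ci
    return v

def stage3(p, hbar, s, iters=8):
    """p: monic P, descending; hbar: monic \\bar H of degree n-1, descending."""
    for _ in range(iters):
        c = horner(p, s) / horner(hbar, s)
        g = [p[0]] + [pi - c * hi for pi, hi in zip(p[1:], hbar)]
        q, acc = [], 0.0
        for gi in g[:-1]:                        # division by (X - s)
            acc = gi + s * acc
            q.append(acc)
        hbar = q                                 # \bar H^{k+1}
        s = s - horner(p, s) / horner(hbar, s)   # s_{k+1}
    return s

p = [1.0, -6.0, 11.0, -6.0]                      # P(X) = (X-1)(X-2)(X-3)
root = stage3(p, [1.0, -4.0, 11.0 / 3.0], 0.9)   # start near the smallest root
# root converges to 1 with the superlinear order ~2.61
```

Starting from <math>s_0=0.9</math>, a handful of iterations suffice to reach machine precision, consistent with the convergence order of about 2.61 stated above.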
==Real coefficients==