Jenkins–Traub algorithm: Difference between revisions

\left[
\prod_{\kappa=0}^{\lambda-1}\frac{\alpha_1-s_\kappa}{\alpha_m-s_\kappa}
\right]^{-1}
}\ .
</math>
 
=== Convergence orders ===
If the condition <math>|\alpha_1-s_\kappa|<\min{}_{m=2,3,\dots,n}|\alpha_m-s_\kappa|</math> holds for almost all iterates, the normalized H polynomials will converge at least geometrically towards <math>P_1(X)</math>.
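This at-least-geometric convergence is easy to observe numerically. The sketch below is an illustration only (the cubic, the fixed shift, and the iteration count are arbitrary choices, not part of the article): it iterates the fixed-shift recursion <math>H^{(\lambda+1)}(X)=\frac{1}{X-s}\left(H^{(\lambda)}(X)-\frac{H^{(\lambda)}(s)}{P(s)}P(X)\right)</math> for <math>P(X)=(X-1)(X-2)(X-4)</math> with <math>s=0.9</math>, so that <math>|\alpha_1-s|/|\alpha_2-s|\approx 0.09</math>, and measures how far the normalized H polynomial is from <math>P_1(X)=(X-2)(X-4)</math>:

```python
import numpy as np

# Illustration only: polynomial, shift, and iteration count are arbitrary choices.
P = np.poly([1.0, 2.0, 4.0])   # coefficients of P(X) = (X-1)(X-2)(X-4)
P1 = np.poly([2.0, 4.0])       # target factor P_1(X) = P(X)/(X - alpha_1)
H = np.polyder(P)              # customary start H^(0) = P'
s = 0.9                        # fixed shift close to alpha_1 = 1

for lam in range(8):
    # H^(lam+1)(X) = ( H^(lam)(X) - H^(lam)(s)/P(s) * P(X) ) / (X - s);
    # the numerator vanishes at X = s, so the division is exact.
    numerator = np.polysub(H, np.polyval(H, s) / np.polyval(P, s) * P)
    H, _ = np.polydiv(numerator, np.array([1.0, -s]))
    H_normalized = H / H[0]    # rescale to leading coefficient 1
    error = np.max(np.abs(H_normalized - P1))
    print(lam + 1, error)      # shrinks by about |alpha_1 - s|/|alpha_2 - s| per step
```

Choosing a shift closer to a different root makes the normalized H polynomials converge to the factor belonging to that root instead, which is why the condition on <math>|\alpha_1-s_\kappa|</math> singles out <math>\alpha_1</math>.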
 
Under the condition that
:<math>|\alpha_1|<|\alpha_2|=\min{}_{m=2,3,\dots,n}|\alpha_m|</math>
one gets the asymptotic estimates for
*stage 1:
*:<math>
H^{(\lambda)}(X)
=P_1(X)+O\left(\left|\frac{\alpha_1}{\alpha_2}\right|^\lambda\right)
</math>
*for stage 2, if ''s'' is close enough to <math>\alpha_1</math>:
*:<math>
H^{(\lambda)}(X)
=P_1(X)
+O\left(
\left|\frac{\alpha_1}{\alpha_2}\right|^M
\cdot
\left|\frac{\alpha_1-s}{\alpha_2-s}\right|^{\lambda-M}\right)
</math>
*:and
*:<math>
s-\frac{P(s)}{\bar H^{(\lambda)}(s)}
=\alpha_1+O\left(
\left|\frac{\alpha_1}{\alpha_2}\right|^M
\cdot
\left|\frac{\alpha_1-s}{\alpha_2-s}\right|^{\lambda-M}
\cdot|\alpha_1-s|\right)</math>
*and for stage 3:
*:<math>
H^{(\lambda)}(X)
=P_1(X)
+O\left(\prod_{\kappa=0}^{\lambda-1}
\left|\frac{\alpha_1-s_\kappa}{\alpha_2-s_\kappa}\right|
\right)
</math>
*:and
*:<math>
s_{\lambda+1}=
s_\lambda-\frac{P(s_\lambda)}{\bar H^{(\lambda+1)}(s_\lambda)}
=\alpha_1+O\left(\prod_{\kappa=0}^{\lambda-1}
\left|\frac{\alpha_1-s_\kappa}{\alpha_2-s_\kappa}\right|
\cdot
\frac{|\alpha_1-s_\lambda|^2}{|\alpha_2-s_\lambda|}
\right)
</math>
:giving rise to a higher than quadratic convergence order of <math>\phi^2=1+\phi\approx 2.61</math>, where <math>\phi=\tfrac12(1+\sqrt5)</math> is the [[golden ratio]].
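The exponent <math>\phi^2</math> can be recovered from the last estimate by a standard order computation (the following is a heuristic sketch, not a proof: constants and the bounded denominators <math>|\alpha_2-s_\kappa|</math> are absorbed into <math>C</math>). Writing <math>\varepsilon_\lambda=|\alpha_1-s_\lambda|</math>, the estimate reads
:<math>\varepsilon_{\lambda+1}\le C\,\varepsilon_\lambda^2\prod_{\kappa=0}^{\lambda-1}\varepsilon_\kappa\ .</math>
With <math>l_\lambda=\log\varepsilon_\lambda</math> this gives <math>l_{\lambda+1}=2l_\lambda+\sum_{\kappa<\lambda}l_\kappa+\log C</math>, and subtracting two consecutive instances of this relation yields the linear recursion
:<math>l_{\lambda+2}=3l_{\lambda+1}-l_\lambda\ ,</math>
whose characteristic equation <math>t^2-3t+1=0</math> has the dominant root <math>t=\tfrac12(3+\sqrt5)=1+\phi=\phi^2</math>, the stated convergence order.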
 
=== As inverse power iteration ===