{{Short description|Algorithm to approximate functions}}
The '''Remez algorithm''' or '''Remez exchange algorithm''', published by [[Evgeny Yakovlevich Remez]] in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a [[Chebyshev space]] that are the best in the [[uniform norm]] ''L''<sub>∞</sub> sense.<ref>{{cite journal |author-link=Evgeny Yakovlevich Remez |first=E. Ya. |last=Remez |title=Sur la détermination des polynômes d'approximation de degré donnée |journal=Comm. Soc. Math. Kharkov |volume=10 |pages=41 |date=1934 }}<br/>{{cite journal |author-mask=1 |first=E. |last=Remes (Remez) |title=Sur un procédé convergent d'approximations successives pour déterminer les polynômes d'approximation |journal=Compt. Rend. Acad. Sci. |volume=198 |pages=2063–5 |language=fr |date=1934 |url=https://gallica.bnf.fr/ark:/12148/bpt6k31506/f2063.item}}<br/>{{cite journal |author-mask=1 |first=E. |last=Remes (Remez) |title=Sur le calcul effectif des polynomes d'approximation de Tschebyschef |journal=Compt. Rend. Acad. Sci. |volume=199 |issue= |pages=337–340 |language=fr |date=1934 |url=https://gallica.bnf.fr/ark:/12148/bpt6k3151h/f337.item}}</ref> It is sometimes referred to as '''Remes algorithm''' or '''Reme algorithm'''.<ref>{{Cite journal |last=Chiang |first=Yi-Ling F. |date=November 1988 |title=A Modified Remes Algorithm |url=https://epubs.siam.org/doi/10.1137/0909072 |journal=SIAM Journal on Scientific and Statistical Computing |volume=9 |issue=6 |pages=1058–1072 |doi=10.1137/0909072 |issn=0196-5204|url-access=subscription }}</ref>
A typical example of a Chebyshev space is the subspace of [[Chebyshev polynomials]] of order ''n'' in the [[Vector space|space]] of real [[continuous function]]s on an [[interval (mathematics)|interval]], ''C''[''a'', ''b'']. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum [[absolute difference]] between the polynomial and the function. In this case, the form of the solution is characterized by the [[equioscillation theorem]].
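In symbols, writing <math>P_n</math> for the polynomials of degree at most ''n'' (a notation used here only for concreteness), the polynomial of best approximation attains the minimax error
:<math>E_n(f) \;=\; \min_{p \in P_n}\, \max_{a \le x \le b} \left| f(x) - p(x) \right|,</math>
and by the equioscillation theorem it is the unique <math>p \in P_n</math> for which there exist ''n''+2 points <math>a \le x_0 < x_1 < \cdots < x_{n+1} \le b</math> where the error reaches its maximum magnitude with alternating sign:
:<math>f(x_i) - p(x_i) \;=\; \sigma\,(-1)^i E_n(f), \qquad \sigma \in \{-1, 1\},\quad i = 0, \ldots, n+1.</math>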
On the nodes <math>x_0, ..., x_n</math>, fit a polynomial <math>p_1(x)</math> of degree ''n'' to the ordinates <math>f(x_i)</math> and a polynomial <math>p_2(x)</math> to the ordinates <math>(-1)^i</math>:
:<math>p_1(x_i) = f(x_i), p_2(x_i) = (-1)^i, i = 0, ..., n.</math>
For each of the two interpolants this can be done with [[Newton polynomial|Newton's interpolation formula]], using the [[divided differences]] of order <math>0, ...,n</math> and <math>O(n^2)</math> arithmetic operations.
The polynomial <math>p_2(x)</math> has its ''i''-th zero between <math>x_{i-1}</math> and <math>x_i,\ i=1, ...,n</math>, and thus no further zeroes between <math>x_n</math> and <math>x_{n+1}</math>: <math>p_2(x_n)</math> and <math>p_2(x_{n+1})</math> have the same sign <math>(-1)^n</math>.
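A minimal sketch of this interpolation step is given below; it is not part of the original exposition, and the node placement and the example function <code>exp</code> are hypothetical. It builds the divided-difference table in place and evaluates the Newton form of <math>p_1</math> and <math>p_2</math> in <math>O(n^2)</math> operations.

<syntaxhighlight lang="python">
import numpy as np

def newton_coefficients(x, y):
    """Divided-difference coefficients of the Newton interpolating polynomial."""
    x = np.asarray(x, dtype=float)
    c = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # c[i] becomes the divided difference f[x_{i-j}, ..., x_i]
        c[j:n] = (c[j:n] - c[j - 1:n - 1]) / (x[j:n] - x[0:n - j])
    return c

def newton_evaluate(c, x, t):
    """Evaluate the Newton-form polynomial with coefficients c at the point t."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - x[k]) + c[k]
    return result

# Hypothetical nodes x_0 < ... < x_n and the two interpolation problems of this step
x = np.cos(np.pi * np.arange(6) / 5)[::-1]            # n = 5; Chebyshev-extrema nodes on [-1, 1]
c1 = newton_coefficients(x, np.exp(x))                # p_1 interpolates f(x_i), here f = exp
c2 = newton_coefficients(x, (-1.0) ** np.arange(6))   # p_2 interpolates (-1)^i
print(newton_evaluate(c1, x, 0.3), newton_evaluate(c2, x, 0.3))
</syntaxhighlight>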
:<math>p(x_i) - f(x_i) \ = \ -(-1)^i E,\ \ i = 0, ... , n\!+\!1. </math>
If a polynomial <math>\tilde p(x)</math> of degree ''n'' had a maximum error smaller than <math>|E|</math>, then the difference <math>p(x)-\tilde p(x) = (p(x) - f(x)) - (\tilde p(x) - f(x))</math> would still alternate in sign at the ''n''+2 nodes <math>x_i</math> and would therefore have at least ''n''+1 zeros, which is impossible for a polynomial of degree ''n''.
Thus, this ''E'' is a lower bound for the minimum error which can be achieved with polynomials of degree ''n''.
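For illustration only (this is not the computation prescribed above, and in practice the Newton or barycentric form is usually preferred for conditioning), the ''n''+2 conditions can also be read as a square linear system in the monomial coefficients of ''p'' and in ''E'', which a few lines of code can solve directly:

<syntaxhighlight lang="python">
import numpy as np

def levelled_polynomial(x_nodes, f_vals):
    """Solve p(x_i) - f(x_i) = -(-1)^i E for the monomial coefficients of p and for E.

    x_nodes are the n+2 current reference points and f_vals the values f(x_i).
    Returns (coeffs, E) with p(x) = coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n.
    """
    m = len(x_nodes)                                    # m = n + 2
    A = np.vander(x_nodes, m - 1, increasing=True)      # columns 1, x, ..., x^n
    A = np.column_stack([A, (-1.0) ** np.arange(m)])    # last unknown is E, entering as (-1)^i E
    solution = np.linalg.solve(A, f_vals)
    return solution[:-1], solution[-1]

# Hypothetical usage: degree-4 approximation of cos on [-1, 1] from a Chebyshev reference
x_ref = np.cos(np.pi * np.arange(6) / 5)[::-1]
coeffs, E = levelled_polynomial(x_ref, np.cos(x_ref))
</syntaxhighlight>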
==Variants==
Some modifications of the algorithm have been proposed in the literature.<ref>{{Citation |last1=Egidi |first1=Nadaniela |title=A New Remez-Type Algorithm for Best Polynomial Approximation |date=2020 |url=http://link.springer.com/10.1007/978-3-030-39081-5_7 |work=Numerical Computations: Theory and Algorithms |volume=11973 |pages=56–69 |editor-last=Sergeyev |editor-first=Yaroslav D. |place=Cham |publisher=Springer |doi=10.1007/978-3-030-39081-5_7 |isbn=978-3-030-39080-8 |last2=Fatone |first2=Lorella |last3=Misici |first3=Luciano |s2cid=211159177 |editor2-last=Kvasov |editor2-first=Dmitri E.|url-access=subscription }}</ref> These include:
* Replacing more than one sample point with the locations of nearby maximum absolute differences.{{Citation needed|date=March 2022}}
* Using the relative error to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer that uses [[floating point]] arithmetic (a minimal sketch follows this list);
* Including zero-error point constraints.<ref name="toobs" />
* The Fraser-Hart variant, used to determine the best rational Chebyshev approximation.<ref>{{Cite journal |last=Dunham |first=Charles B. |date=1975 |title=Convergence of the Fraser-Hart algorithm for rational Chebyshev approximation |url=https://www.ams.org/mcom/1975-29-132/S0025-5718-1975-0388732-9/ |journal=Mathematics of Computation |language=en |volume=29 |issue=132 |pages=1078–1082 |doi=10.1090/S0025-5718-1975-0388732-9 |issn=0025-5718|doi-access=free |url-access=subscription }}</ref>
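As a rough illustration of the relative-error variant (an assumption of this sketch, not a procedure taken from the sources cited above), the deviation examined when searching for new reference points can simply be rescaled by <math>|f(x)|</math> wherever ''f'' does not vanish:

<syntaxhighlight lang="python">
import numpy as np

def deviation(p_vals, f_vals, relative=False, floor=1e-300):
    """Signed deviation of the approximation from f at a set of sample points.

    With relative=True the deviation is scaled by |f|, targeting a small relative
    (rather than absolute) error -- the quantity that matters when the resulting
    approximation is evaluated in floating-point arithmetic.
    """
    err = p_vals - f_vals
    if relative:
        err = err / np.maximum(np.abs(f_vals), floor)   # guard against f(x) = 0
    return err
</syntaxhighlight>

The exchange step then selects new reference points at local extrema of this rescaled deviation, exactly as in the absolute-error case.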
== See also ==