==Polynomial function models==
A [[polynomial function]] is one that has the form
:<math>
y = a_{n}x^{n} + a_{n-1}x^{n-1} + \ldots + a_{2}x^{2} + a_{1}x + a_{0} ,
</math>
with ''n'' denoting a non-negative integer that defines the degree of the polynomial. Historically, polynomial models are among the most frequently used empirical models for fitting functions. However, polynomial models have the following limitations:
#Polynomial models have poor [[extrapolation|extrapolatory]] properties. Polynomials may provide good fits within the range of data, but they will frequently deteriorate rapidly outside the range of the data.
#Polynomial models have poor [[asymptote|asymptotic]] properties. By their nature, polynomials have a finite response for finite ''x'' values and have an infinite response if and only if the ''x'' value is infinite. Thus polynomials may not model asymptotic phenomena very well.
#While no procedure is immune to the [[bias of an estimator|bias]]-[[variance]] tradeoff, polynomial models exhibit a particularly poor tradeoff between shape and degree. In order to model data with a complicated structure, the degree of the model must be high, indicating that the associated number of parameters to be estimated will also be high. This can result in highly unstable models.
When modeling via polynomial functions is inadequate due to any of the limitations above, the use of rational functions for modeling may give a better fit.
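For example, the following sketch (using hypothetical data and a degree-6 fit with NumPy's <code>polyfit</code>; the data set and degree are illustrative assumptions, not part of any standard procedure) shows how a polynomial that fits well inside the data range can deteriorate rapidly outside it:
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical data: noisy observations of a saturating process on [0, 5].
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 20)
y = x / (1.0 + x) + rng.normal(scale=0.01, size=x.size)

# A moderately high-degree polynomial fits well inside the data range ...
coeffs = np.polyfit(x, y, deg=6)
print(np.polyval(coeffs, 4.0))   # close to the true value 4/(1 + 4) = 0.8

# ... but typically deteriorates rapidly outside it.
print(np.polyval(coeffs, 10.0))  # usually far from the true value 10/(1 + 10)
</syntaxhighlight>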
==Rational function models==
A [[rational function]] is simply the ratio of two polynomial functions.
:<math>
y = \frac{a_{n}x^{n} + a_{n-1}x^{n-1} + \ldots + a_{2}x^{2} + a_{1}x + a_{0}} {b_{m}x^{m} + b_{m-1}x^{m-1} + \ldots + b_{2}x^{2} + b_{1}x + b_{0}}
</math>
with ''n'' denoting a non-negative integer that defines the degree of the numerator and ''m'' denoting a non-negative integer that defines the degree of the denominator. For fitting rational function models, the constant term in the denominator is usually set to 1. Rational functions are typically identified by the degrees of the numerator and denominator. For example, a quadratic for the numerator and a cubic for the denominator is identified as a quadratic/cubic rational function. The rational function model is a generalization of the polynomial model: rational function models contain polynomial models as a subset (i.e., the case when the denominator is a constant).
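As a minimal sketch of this convention (the coefficient values below are hypothetical and chosen only for illustration), a quadratic/cubic rational function can be evaluated as the ratio of its numerator and denominator polynomials, with the constant term of the denominator fixed at 1:
<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import polynomial as P

# Hypothetical coefficients, listed lowest degree first:
#   numerator   a_0 + a_1*x + a_2*x^2            (quadratic, n = 2)
#   denominator 1 + b_1*x + b_2*x^2 + b_3*x^3    (cubic, m = 3, constant fixed at 1)
num = [2.0, -1.0, 0.5]
den = [1.0, 0.3, 0.0, 0.1]

def rational(x):
    """Evaluate the quadratic/cubic rational function at x."""
    x = np.asarray(x, dtype=float)
    return P.polyval(x, num) / P.polyval(x, den)

print(rational([0.0, 1.0, 2.0]))  # rational(0.0) equals num[0] / den[0] = 2.0
</syntaxhighlight>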
===Advantages===
Rational function models have the following advantages:
#Rational function models have a moderately simple form.
#Rational function models are a closed family. As with polynomial models, this means that changes of ___location and scale in the raw data map a rational function model onto another rational function model, so rational function models are not dependent on the underlying metric.
#Rational function models can take on an extremely wide range of shapes, accommodating a much wider range of shapes than does the polynomial family.
#Rational function models have better interpolatory properties than polynomial models. Rational functions are typically smoother and less oscillatory than polynomial models.
#Rational functions have excellent extrapolatory powers. Rational functions can typically be tailored to model the function not only within the ___domain of the data, but also so as to be in agreement with theoretical/asymptotic behavior outside the ___domain of interest.
#Rational function models have excellent asymptotic properties. Rational functions can be either finite or infinite for finite ''x'' values, and either finite or infinite for infinite ''x'' values. Thus, asymptotic behavior can easily be incorporated into a rational function model.
#Rational function models can often be used to model complicated structure with a fairly low degree in both the numerator and denominator. This in turn means that fewer coefficients will be required compared to the polynomial model.
#Rational function models are moderately easy to handle computationally. Although they are [[nonlinear regression|nonlinear models]], rational function models are particularly easy nonlinear models to fit.
#One common difficulty in fitting nonlinear models is finding adequate starting values. A major advantage of rational function models is the ability to compute starting values using a [[Ordinary least squares|linear least squares]] fit. To do this, ''p'' points are chosen from the data set, with ''p'' denoting the number of parameters in the rational model. For example, given the linear/quadratic model
:::<math>y=\frac{A_0 + A_1x} {1 + B_1x + B_2x^{2}} ,</math>
::one would need to select four representative points, and perform a linear fit on the model
:::<math>
y = A_0 + A_1x - B_1xy - B_2x^2y ,
</math>
::which is derived from the previous equation by clearing the denominator. Here, the ''x'' and ''y'' contain the subset of points, not the full data set. The estimated coefficients from this linear fit are used as the starting values for fitting the nonlinear model to the full data set.
::This type of fit, with the response variable appearing on both sides of the function, should only be used to obtain starting values for the nonlinear fit. The statistical properties of fits like this are not well understood.
::The subset of points should be selected over the range of the data. It is not critical which points are selected, although obvious outliers should be avoided.
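The following sketch illustrates this starting-value procedure for the linear/quadratic model above, assuming hypothetical data generated from a known rational function, with NumPy's linear least squares for the cleared-denominator fit and SciPy's <code>curve_fit</code> for the final nonlinear fit:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: a linear/quadratic rational function plus a little noise.
rng = np.random.default_rng(1)
x = np.linspace(0.5, 10.0, 25)
y = (1.0 + 2.0 * x) / (1.0 + 0.5 * x + 0.1 * x ** 2) + rng.normal(scale=0.02, size=x.size)

def linear_quadratic(x, a0, a1, b1, b2):
    """Linear/quadratic rational function model."""
    return (a0 + a1 * x) / (1.0 + b1 * x + b2 * x ** 2)

# Step 1: choose p = 4 representative points spread over the range of the data.
idx = np.linspace(0, len(x) - 1, 4).astype(int)
xs, ys = x[idx], y[idx]

# Step 2: linear least-squares fit of y = A0 + A1*x - B1*x*y - B2*x^2*y,
# i.e. the model with the denominator cleared, using only the subset of points.
design = np.column_stack([np.ones_like(xs), xs, -xs * ys, -(xs ** 2) * ys])
start, *_ = np.linalg.lstsq(design, ys, rcond=None)

# Step 3: use those coefficients as starting values for the nonlinear fit
# to the full data set.
params, cov = curve_fit(linear_quadratic, x, y, p0=start)
</syntaxhighlight>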
===Disadvantages===
Rational function models have the following disadvantages:
#The properties of the rational function family are not as well known to engineers and scientists as are those of the polynomial family. The literature on the rational function family is also more limited. Because the properties of the family are often not well understood, it can be difficult to answer the following modeling question: ''Given that the data have a certain shape, what values should be chosen for the degree of the numerator and the degree of the denominator?''
#Unconstrained rational function fitting can, at times, result in undesired vertical [[asymptote]]s due to roots in the denominator polynomial. The range of ''x'' values affected by the function "blowing up" may be quite narrow, but such asymptotes, when they occur, are a nuisance for local interpolation in the neighborhood of the asymptote point. These asymptotes are easy to detect by a simple plot of the fitted function over the range of the data. These nuisance asymptotes occur occasionally and unpredictably, but practitioners argue that the gain in flexibility of shapes is well worth the chance that they may occur, and that such asymptotes should not discourage choosing rational function models for empirical modeling.
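In addition to plotting the fitted function, one simple programmatic check is to look for real roots of the fitted denominator inside the data range; the sketch below assumes hypothetical denominator coefficients from a linear/quadratic fit:
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical fitted denominator 1 + b1*x + b2*x^2; np.roots expects
# coefficients ordered highest degree first. The numerator is irrelevant
# for locating vertical asymptotes.
b1, b2 = 0.1, -0.04
den_coeffs = [b2, b1, 1.0]

x_min, x_max = 0.0, 20.0  # assumed range of the data
roots = np.roots(den_coeffs)
real_roots = roots[np.isreal(roots)].real
inside = real_roots[(real_roots >= x_min) & (real_roots <= x_max)]
if inside.size:
    print("Vertical asymptote(s) inside the data range at x =", inside)
</syntaxhighlight>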
==See also==
* [[Response surface methodology]]
* [[Padé approximant]]
==Bibliography==
* {{cite book |author=Atkinson, A. C. |author2=Donev, A. N. |author3=Tobias, R. D.|title=Optimum Experimental Designs, with SAS| url=https://books.google.com/books?id=oIHsrw6NBmoC| publisher=Oxford University Press|year=2007 |pages=511+xvi |isbn=978-0-19-929660-6 }}
* {{cite book |author=Box, G. E. P. |author2=Draper, Norman |title=Response Surfaces, Mixtures, and Ridge Analyses |edition=2nd [of ''Empirical Model-Building and Response Surfaces'', 1987] |publisher=Wiley |year=2007}}
* {{cite book |author-link=Jack Kiefer (statistician)| last=Kiefer| first=Jack Carl| title=Collected Papers III Design of Experiments |editor-link=Lawrence D. Brown| editor=L. D. Brown|publisher=Springer-Verlag|year=1985|isbn=978-0-387-96004-3|display-editors=etal}}
* R. H. Hardin and [[Neil Sloane|N. J. A. Sloane]], [http://neilsloane.com/doc/design.pdf "A New Approach to the Construction of Optimal Designs", ''Journal of Statistical Planning and Inference'', vol. 37, 1993, pp. 339–369]
* R. H. Hardin and [[Neil Sloane|N. J. A. Sloane]], [http://neilsloane.com/doc/doeh.pdf "Computer-Generated Minimal (and Larger) Response Surface Designs: (I) The Sphere"]
* R. H. Hardin and [[Neil Sloane|N. J. A. Sloane]], [http://neilsloane.com/doc/meatball.pdf "Computer-Generated Minimal (and Larger) Response Surface Designs: (II) The Cube"]
* {{Cite book| title=Design and Analysis of Experiments | series=Handbook of Statistics| volume=13|editor=Ghosh, S. |editor2=Rao, C. R. |editor2-link=Calyampudi Radhakrishna Rao | publisher=North-Holland| year=1996| isbn=978-0-444-82061-7}}
** {{Cite book|author1=Draper, Norman |author2=Lin, Dennis K. J. |name-list-style=amp | chapter=Response Surface Designs |pages=343–375}}
** {{Cite book|author1=Gaffke, N. |author2=Heiligers, B |name-list-style=amp | chapter=Approximate Designs for [[Linear regression|Polynomial Regression]]: [[Invariant estimator|Invariance]], [[Admissible decision rule|Admissibility]], and [[Optimal design|Optimality]] |pages=1149–1199}}
* {{cite book |author=Melas, Viatcheslav B.|title=Functional Approach to Optimal Experimental Design |series=Lecture Notes in Statistics| volume=184 | publisher=Springer-Verlag | year=2006 |isbn=978-0-387-98741-5}} (Modeling with rational functions)
===Historical===
*{{cite journal
|title=Application de la méthode des moindre quarrés a l'interpolation des suites
|trans-title=The application of the method of least squares to the interpolation of sequences
|language=fr
|author=Gergonne, J. D.
|journal=[[Annales de mathématiques pures et appliquées]]
|volume=6
|year=1815
|pages=242–252
|author-link=Joseph Diaz Gergonne
}}
*{{cite journal
|title=The application of the method of least squares to the interpolation of sequences
|author=Gergonne, J. D.
|journal=Historia Mathematica
|volume=1
|issue=4 <!-- |month=November -->
|year=1974 |orig-year=1815
|pages=439–447
|others=Translated by Ralph St. John and [[Stephen M. Stigler|S. M. Stigler]] from the 1815 French
|doi=10.1016/0315-0860(74)90034-2
|author-link=Joseph Diaz Gergonne
|doi-access=free
}}
*{{cite journal
|title=Gergonne's 1815 paper on the design and analysis of polynomial regression experiments
|author=Stigler, Stephen M.
|journal=[[Historia Mathematica]]
|volume=1
|issue=4 <!-- |month=November -->
|year=1974
|pages=431–439
|doi=10.1016/0315-0860(74)90033-0
|author-link=Stephen M. Stigler
|doi-access=free
}}
* {{cite journal
|author=Smith, Kirstine
|title=On the Standard Deviations of Adjusted and Interpolated Values of an Observed Polynomial Function and its Constants and the Guidance They Give Towards a Proper Choice of the Distribution of the Observations
|year=1918
|journal=[[Biometrika]]
|volume=12
|issue=1/2
|pages=1–85
|jstor=2331929
|doi=10.1093/biomet/12.1-2.1}}
==External links==
*[http://www.itl.nist.gov/div898/handbook/pmd/section6/pmd642.htm Rational Function Models]
{{Least squares and regression analysis}}
{{Statistics}}
{{NIST-PD}}
[[Category:Interpolation]]
[[Category:Statistical ratios]]