{{Short description|Regression analysis}}
{{Regression bar}}
[[Image:Michaelis-Menten saturation curve of an enzyme reaction.svg|thumb|300px|See [[Michaelis–Menten kinetics]] for details]]
In statistics, '''nonlinear regression''' is a form of [[regression analysis]] in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations).
==General==
In nonlinear regression, a [[statistical model]] of the form,
:<math>\mathbf{y} \sim f(\mathbf{x}, \boldsymbol\beta)</math>
relates a vector of [[independent variables]], <math>\mathbf{x}</math>, and its associated observed [[dependent variables]], <math>\mathbf{y}</math>. The function <math>f</math> is nonlinear in the components of the vector of parameters <math>\boldsymbol\beta</math>, but otherwise arbitrary. For example, the [[Michaelis–Menten kinetics|Michaelis–Menten model]] for enzyme kinetics has two parameters and one independent variable, related through <math>f</math> by:
:<math> f(x,\boldsymbol\beta)= \frac{\beta_1 x}{\beta_2 + x} </math>
This function, which is a rectangular hyperbola, is {{em|nonlinear}} because it cannot be expressed as a [[linear combination]] of the two <math>\beta</math>s.
[[Systematic error]] may be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an [[errors-in-variables model]], also outside this scope.
Other examples of nonlinear functions include [[exponential function]]s, [[Logarithmic growth|logarithmic functions]], [[trigonometric functions]], [[Exponentiation|power functions]], the [[Gaussian function]], and [[Cauchy distribution|Lorentz curves]].
In general, there is no closed-form expression for the best-fitting parameters, as there is in [[linear regression]]. Usually numerical [[Optimization (mathematics)|optimization]] algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many [[local maximum|local minima]] of the function to be optimized and even the global minimum may produce a [[Bias of an estimator|biased]] estimate. In practice, [[guess value|estimated values]] of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares.
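For illustration (a sketch, not drawn from the article's sources), the two-parameter Michaelis–Menten model above can be fitted by iterative least squares with SciPy's <code>curve_fit</code>; the data and starting values below are hypothetical:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

# Michaelis–Menten model: f(x; beta1, beta2) = beta1 * x / (beta2 + x)
def michaelis_menten(x, beta1, beta2):
    return beta1 * x / (beta2 + x)

# Hypothetical substrate concentrations and reaction rates
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.9, 1.4, 2.0, 2.4, 2.7, 2.9])

# Iterative least squares starting from the guess p0
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[3.0, 2.0])
print("estimates:", popt)                      # fitted beta1, beta2
print("standard errors:", np.sqrt(np.diag(pcov)))
</syntaxhighlight>
Different starting values <code>p0</code> can send the iteration to different local minima of the sum of squares, which is the caution noted above.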
==Regression statistics==
The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order [[Taylor series]]:
:<math>f(x_i,\boldsymbol\beta) \approx f(x_i,0) + \sum_j J_{ij}\beta_j</math>
where <math>J_{ij} = \frac{\partial f(x_i,\boldsymbol\beta)}{\partial \beta_j}</math> are [[Jacobian matrix and determinant|Jacobian matrix]] elements. It follows from this that the least squares estimators are given by
:<math>\hat{\boldsymbol{\beta}} \approx \mathbf { (J^TJ)^{-1}J^Ty},</math>
compare [[generalized least squares]] with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using '''J''' in place of '''X''' in the formulas. The linear approximation introduces [[bias]] into the statistics; therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.
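As a numerical sketch of this linearization, assuming the Michaelis–Menten model and hypothetical fitted values, '''J''' can be built by finite differences and used in place of '''X''' to approximate the covariance of the estimates:
<syntaxhighlight lang="python">
import numpy as np

def model(x, beta):
    return beta[0] * x / (beta[1] + x)

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
beta_hat = np.array([3.1, 1.9])   # hypothetical least squares estimates
residual_var = 0.01               # hypothetical residual variance s^2

# Jacobian J_ij = df(x_i, beta)/dbeta_j, by forward differences
eps = 1e-6
J = np.empty((x.size, beta_hat.size))
for j in range(beta_hat.size):
    step = np.zeros_like(beta_hat)
    step[j] = eps
    J[:, j] = (model(x, beta_hat + step) - model(x, beta_hat)) / eps

# Covariance of the estimates, using J in place of X: s^2 (J^T J)^{-1}
cov_beta = residual_var * np.linalg.inv(J.T @ J)
print(np.sqrt(np.diag(cov_beta)))  # approximate standard errors
</syntaxhighlight>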
When the function <math>f(x_i,\boldsymbol\beta)</math> itself is not known analytically, but needs to be [[Linear regression|linearly approximated]] from <math>n+1</math>, or more, known values (where <math>n</math> is the number of estimators), the best estimator is obtained directly from the [[Linear Template Fit]] as <ref>{{cite journal | title=The Linear Template Fit | last=Britzger | first=Daniel | journal=Eur. Phys. J. C | volume=82 | year=2022 | issue=8 |pages=731 | doi=10.1140/epjc/s10052-022-10581-w | arxiv=2112.01548| bibcode=2022EPJC...82..731B }}</ref><math display="block"> \hat{\boldsymbol\beta} = ((\mathbf{Y\tilde{M}})^\mathsf{T} \boldsymbol\Omega^{-1} \mathbf{Y\tilde{M}})^{-1}(\mathbf{Y\tilde{M}})^\mathsf{T}\boldsymbol\Omega^{-1}(\mathbf{d}-\mathbf{Y\bar{m})}</math> (see also [[Linear_least_squares#Alternative_formulations|linear least squares]]).
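Once the matrices are in hand, the displayed estimator is an ordinary weighted linear least-squares solve. A direct NumPy transcription of the formula, with small random matrices of hypothetical dimensions standing in for the templates <math>\mathbf{Y}</math>, the regression quantities <math>\mathbf{\tilde{M}}</math> and <math>\mathbf{\bar{m}}</math>, the covariance <math>\boldsymbol\Omega</math>, and the data <math>\mathbf{d}</math>, might look like:
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical dimensions: n = 4 measured bins, m = 3 templates,
# p = 1 fit parameter; all values below are placeholders.
rng = np.random.default_rng(1)
Y = rng.normal(size=(4, 3))       # template predictions
Mt = rng.normal(size=(3, 1))      # M-tilde from the template regression
mbar = rng.normal(size=3)         # m-bar from the template regression
Omega = np.eye(4)                 # covariance matrix of the data
d = rng.normal(size=4)            # measured data

A = Y @ Mt                        # the matrix Y M-tilde in the formula
Oinv = np.linalg.inv(Omega)
beta_hat = np.linalg.solve(A.T @ Oinv @ A,
                           A.T @ Oinv @ (d - Y @ mbar))
print(beta_hat)
</syntaxhighlight>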
==Ordinary and weighted least squares==
The best-fit curve is often assumed to be that which minimizes the sum of squared [[errors and residuals in statistics|residuals]]. This is the [[ordinary least squares]] (OLS) approach. However, when the observations do not have constant variance, a sum of ''weighted'' squared residuals may be minimized instead; see [[weighted least squares]]. Each weight should ideally be equal to the reciprocal of the variance of the observation.
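As a sketch of the weighted case, SciPy's <code>curve_fit</code> accepts per-observation standard deviations through its <code>sigma</code> argument; the data and uncertainties below are hypothetical:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, beta1, beta2):
    return beta1 * x / (beta2 + x)

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.9, 1.4, 2.0, 2.4, 2.7, 2.9])
sigma = np.array([0.05, 0.05, 0.1, 0.1, 0.2, 0.2])  # hypothetical std. devs.

# Minimizes sum(((y - f(x)) / sigma)**2): each weight is the
# reciprocal of the observation's variance.
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[3.0, 2.0],
                       sigma=sigma, absolute_sigma=True)
print(popt)
</syntaxhighlight>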
==Linearization==
===Transformation===
{{further|Data transformation (statistics)}}
Some nonlinear regression problems can be moved to a linear ___domain by a suitable transformation of the model formulation.
For example, consider the nonlinear regression problem
:<math> y = a e^{b x}U \,\!</math>
with parameters ''a'' and ''b'' and with multiplicative error term ''U''. If we take the logarithm of both sides, this becomes
:<math> \ln{(y)} = \ln{(a)} + b x + u, \,\!</math>
where ''u'' = ln(''U''), suggesting estimation of the unknown parameters by a linear regression of ln(''y'') on ''x'', a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations.
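A brief sketch of this transformation, with hypothetical data generated from the model above: regressing ln(''y'') on ''x'' recovers ''b'' as the slope and ln(''a'') as the intercept, with no iteration required.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical data generated from y = a * exp(b*x) * U
rng = np.random.default_rng(0)
a_true, b_true = 2.0, 0.5
x = np.linspace(0.0, 4.0, 30)
y = a_true * np.exp(b_true * x) * np.exp(rng.normal(0.0, 0.1, x.size))

# Linear regression of ln(y) on x: slope estimates b, intercept ln(a)
b_hat, ln_a_hat = np.polyfit(x, np.log(y), 1)
print("a ~", np.exp(ln_a_hat), " b ~", b_hat)
</syntaxhighlight>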
For [[Michaelis–Menten kinetics]], the linear [[Lineweaver–Burk plot]]
of 1/''v'' against 1/[''S''] has been much used. However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, [''S''], its use is strongly discouraged.
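The bias can be made concrete by comparing a double-reciprocal fit with a direct nonlinear fit on the same data; a sketch, with hypothetical kinetics measurements:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # substrate [S]
v = np.array([0.9, 1.4, 2.0, 2.4, 2.7, 2.9])    # measured rates

# Lineweaver–Burk: 1/v = (Km/Vmax)(1/[S]) + 1/Vmax, fit by ordinary
# linear regression; low-rate points dominate the double-reciprocal fit.
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
print("LB:     Vmax ~", 1.0 / intercept, " Km ~", slope / intercept)

# Direct nonlinear fit for comparison
popt, _ = curve_fit(michaelis_menten, s, v, p0=[3.0, 2.0])
print("direct: Vmax ~", popt[0], " Km ~", popt[1])
</syntaxhighlight>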
==See also==
{{Portal|Mathematics}}
* [[Non-linear least squares]]
* [[Curve fitting]]
* [[Generalized linear model]]
* [[Local regression]]
* [[Response modeling methodology]]
* [[Genetic programming]]
* [[Multi expression programming]]
* [[Linear_least_squares#Alternative_formulations|Linear or quadratic template fit]]
==Notes==
{{notelist}}
==References==
{{reflist}}
==Further reading==
*{{cite book |first=K. |last=Schittkowski |title=Data Fitting in Dynamical Systems |publisher=Kluwer |___location=Boston |year=2002 |isbn=1402010796 }}
{{Statistics}}