{{Short description|Regression analysis}}
{{Regression bar}}
[[Image:Michaelis-Menten saturation curve of an enzyme reaction.svg|thumb|300px|See [[Michaelis–Menten kinetics]] for details.]]
In statistics, '''nonlinear regression''' is a form of [[regression analysis]] in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations).
In nonlinear regression, a [[statistical model]] of the form,
<math display="block">\mathbf{y} \sim f(\mathbf{x}, \boldsymbol\beta)</math>
relates a vector of [[independent variables]], <math>\mathbf{x}</math>, and its associated observed [[dependent variables]], <math>\mathbf{y}</math>. The function <math>f</math> is nonlinear in the components of the vector of parameters <math>\beta</math>, but otherwise arbitrary. For example, the [[Michaelis–Menten]] model for enzyme kinetics has two parameters and one independent variable, related by <math>f</math>:{{efn|This model can also be expressed in the conventional biological notation: <math display="block"> v = \frac{V_\max\ [\mathrm{S}]}{K_m + [\mathrm{S}]} </math>}}
<math display="block"> f(x,\boldsymbol\beta) = \frac{\beta_1 x}{\beta_2 + x} </math>
This function, which is a rectangular hyperbola, is ''nonlinear'' because it cannot be expressed as a [[linear combination]] of the two ''<math>\beta</math>''s.
[[Systematic error]] may be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an [[errors-in-variables model]], also outside this scope.
Other examples of nonlinear functions include [[exponential function]]s, [[Logarithmic growth|logarithmic functions]], [[trigonometric functions]], [[Exponentiation|power functions]], the [[Gaussian function]], and [[Cauchy distribution|Lorentz distribution]]s. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution; see [[#Linearization|Linearization]] below.
In general, there is no closed-form expression for the best-fitting parameters, as there is in [[linear regression]]. Usually numerical [[Optimization (mathematics)|optimization]] algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many [[local maximum|local minima]] of the function to be optimized and even the global minimum may produce a [[Bias of an estimator|biased]] estimate. In practice, [[guess value|estimated values]] of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares.
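As a minimal sketch of such an iterative fit in Python (the data values here are hypothetical, and SciPy's <code>curve_fit</code> is just one of many available optimizers), the Michaelis–Menten model above can be fitted by iterative least squares from a user-supplied initial guess:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

# Michaelis–Menten model: rate as a function of substrate concentration,
# with parameters beta1 (= Vmax) and beta2 (= Km).
def michaelis_menten(x, beta1, beta2):
    return beta1 * x / (beta2 + x)

# Hypothetical measurements.
x = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])
y = np.array([0.026, 0.048, 0.090, 0.130, 0.164, 0.194, 0.205])

# Iterative least squares; the initial guess p0 matters because the
# objective may have several local minima.
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[0.2, 0.5])
print(popt)                     # estimated beta1, beta2
print(np.sqrt(np.diag(pcov)))   # approximate standard errors
</syntaxhighlight>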
The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order [[Taylor series]]:
<math display="block"> f(x_i,\boldsymbol\beta) \approx f(x_i,0) + \sum_j J_{ij} \beta_j </math>
where <math>J_{ij} = \frac{\partial f(x_i,\boldsymbol\beta)}{\partial \beta_j}</math> are elements of the [[Jacobian matrix]]. It follows from this that the least squares estimators are given by
<math display="block">\hat{\boldsymbol\beta} \approx \left(\mathbf{J}^\mathsf{T}\mathbf{J}\right)^{-1}\mathbf{J}^\mathsf{T}\mathbf{y};</math>
compare [[generalized least squares]] with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using '''J''' in place of '''X''' in the formulas.
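As an illustrative sketch (not taken from any particular library), the following Python code implements a plain Gauss–Newton loop for the Michaelis–Menten model, solving a linear least squares problem with '''J''' in place of '''X''' at each iteration; the data and starting values are hypothetical:

<syntaxhighlight lang="python">
import numpy as np

def gauss_newton(f, jac, x, y, beta0, n_iter=20):
    """Gauss-Newton: repeatedly linearize f via its Jacobian J and
    solve the resulting linear least squares problem for the shift."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x, beta)      # residuals at the current estimate
        J = jac(x, beta)        # J_ij = d f(x_i, beta) / d beta_j
        # Solve min ||J delta - r||, i.e. (J^T J) delta = J^T r.
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        beta += delta
    return beta

# Michaelis–Menten model and its analytic Jacobian; hypothetical data.
f = lambda x, b: b[0] * x / (b[1] + x)
jac = lambda x, b: np.column_stack([x / (b[1] + x),
                                    -b[0] * x / (b[1] + x) ** 2])
x = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])
y = np.array([0.026, 0.048, 0.090, 0.130, 0.164, 0.194, 0.205])
print(gauss_newton(f, jac, x, y, beta0=[0.2, 0.5]))
</syntaxhighlight>

In practice a damping or step-size control (as in the Levenberg–Marquardt algorithm) is usually added to this bare loop to improve convergence from poor starting values.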
==Ordinary and weighted least squares==
The best-fit curve is often assumed to be that which minimizes the sum of squared [[errors and residuals in statistics|residuals]]. This is the [[ordinary least squares]] (OLS) approach. However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; see [[weighted least squares]]. Each weight should ideally be equal to the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case.
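As a sketch, assuming SciPy's <code>curve_fit</code> and hypothetical per-observation standard deviations, weighting each observation by the reciprocal of its variance amounts to passing those standard deviations to the fitter:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, beta1, beta2):
    return beta1 * x / (beta2 + x)

x = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])
y = np.array([0.026, 0.048, 0.090, 0.130, 0.164, 0.194, 0.205])
sd = 0.05 * y   # hypothetical: measurement noise grows with the response

# Passing per-observation standard deviations makes curve_fit minimize
# sum(((y - f(x)) / sd)**2), i.e. weights equal to 1 / variance.
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[0.2, 0.5],
                       sigma=sd, absolute_sigma=True)
print(popt)
</syntaxhighlight>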
==Linearization==
For example, consider the nonlinear regression problem
<math display="block"> y = a e^{b x} U </math>
with parameters ''a'' and ''b'' and with multiplicative error term ''U''. If we take the logarithm of both sides, this becomes
<math display="block"> \ln(y) = \ln(a) + b x + u, </math>
where ''u'' = ln(''U''), suggesting estimation of the unknown parameters by a linear regression of ln(''y'') on ''x'', a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations.
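A minimal sketch of this transformation in Python, using synthetic data generated under the multiplicative-error assumption (the constants are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical data from y = a * exp(b x) * U with log-normal error U.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 30)
y = 1.5 * np.exp(0.8 * x) * np.exp(rng.normal(0.0, 0.1, x.size))

# A single linear least squares fit of ln(y) on x; no iteration needed.
slope, intercept = np.polyfit(x, np.log(y), 1)
a_hat, b_hat = np.exp(intercept), slope
print(a_hat, b_hat)   # estimates of a and b
</syntaxhighlight>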
For [[Michaelis–Menten kinetics]], the linear [[Lineweaver–Burk plot]]
of 1/''v'' against 1/[''S''] has been much used. However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, [''S''], its use is strongly discouraged.
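For illustration only, a short Python sketch (hypothetical data) shows the double-reciprocal fit; small errors in ''v'' at low [''S''] become large errors in 1/''v'', which is the source of the bias:

<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])             # [S]
y = np.array([0.026, 0.048, 0.090, 0.130, 0.164, 0.194, 0.205])  # v

# Lineweaver–Burk: 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax, fitted as a line.
slope, intercept = np.polyfit(1.0 / x, 1.0 / y, 1)
Vmax = 1.0 / intercept
Km = slope * Vmax
print(Vmax, Km)   # compare against a direct nonlinear fit of v vs [S]
</syntaxhighlight>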
* [[Local regression]]
* [[Response modeling methodology]]
* [[Genetic programming]]
* [[Multi expression programming]]
* [[Linear_least_squares#Alternative_formulations|Linear or quadratic template fit]]
==Notes==
{{notelist}}

==References==
{{Reflist}}
==Further reading==