{{Short description|Class of statistical models}}
{{Distinguish|General linear model|Generalized least squares}}
{{Regression bar}}
Ordinary linear regression predicts the [[expected value]] of a given unknown quantity (the ''response variable'', a [[random variable]]) as a [[linear combination]] of a set of observed values (''predictors''). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e. a ''linear-response model''). This is appropriate when the response variable can vary, to a good approximation, indefinitely in either direction, or more generally for any quantity that only varies by a relatively small amount compared to the variation in the predictive variables, e.g. human heights.
However, these assumptions are inappropriate for some types of response variables.
Similarly, a model that predicts a probability of making a yes/no choice (a [[Bernoulli distribution|Bernoulli variable]]) is even less suitable as a linear-response model, since probabilities are bounded on both ends (they must be between 0 and 1). Imagine, for example, a model that predicts the likelihood of a given person going to the beach as a function of temperature. A reasonable model might predict, for example, that a 10-degree change in temperature makes a person twice as likely, or half as likely, to go to the beach. But what does "twice as likely" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.). Rather, it is the ''[[odds ratio|odds]]'' that are doubling: from 2:1 odds, to 4:1 odds, to 8:1 odds, etc. Such a model is a ''log-odds'' or ''[[logistic regression|logistic]]'' model.
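The distinction between doubling a probability and doubling the odds can be made concrete with a short numerical sketch (illustrative only; the numbers are hypothetical, not from any fitted model):

```python
# Illustrative sketch: repeatedly "doubling the likelihood" in the log-odds
# sense. The resulting probabilities approach 1 but never exceed it, unlike
# naive doubling of the probability itself (0.5 -> 1.0 -> 2.0 ...).

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

p = 0.5                    # initial probability: 50%, i.e. 1:1 odds
for _ in range(3):
    p = prob(2 * odds(p))  # "twice as likely", interpreted on the odds scale
    print(round(p, 4))
# prints 0.6667, then 0.8, then 0.8889 -- bounded above by 1
```

Doubling the odds three times takes 1:1 odds to 8:1 odds, i.e. a probability of 8/9, which is exactly the bounded behaviour a logistic model encodes.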
: <math> f_Y(y \mid \theta, \tau) = h(y,\tau) \exp \left(\frac{b(\theta)T(y) - A(\theta)}{d(\tau)} \right). \,\!</math>
<math>\boldsymbol\theta</math> is related to the mean of the distribution. If <math>\mathbf{b}(\boldsymbol\theta)</math> is the identity function, then the distribution is said to be in [[canonical form]] (or ''natural form''). Note that any distribution can be converted to canonical form by rewriting <math>\boldsymbol\theta</math> as <math>\boldsymbol\theta'</math> and then applying the transformation <math>\boldsymbol\theta = \mathbf{b}(\boldsymbol\theta')</math>. It is always possible to express <math>A(\boldsymbol\theta)</math> in terms of the new parametrization, even if <math>\mathbf{b}(\boldsymbol\theta')</math> is not a [[one-to-one function]]; see comments in the page on [[exponential families]].
If, in addition, <math>\mathbf{T}(\mathbf{y})</math> is the identity function and <math>\tau</math> is known, then <math>\boldsymbol\theta</math> is called the ''canonical parameter'' and is related to the mean through
:<math> \boldsymbol\mu = \operatorname{E}(\mathbf{y}) = \nabla A(\boldsymbol\theta). \,\!</math>
For scalar <math>\mathbf{y}</math> and <math>\boldsymbol\theta</math>, this reduces to
:<math> \mu = \operatorname{E}(y) = A'(\theta). \,\!</math>
Under this scenario, the variance of the distribution can be shown to be<ref>{{harvnb|McCullagh|Nelder|1989}}, Chapter 2.</ref>
:<math>\operatorname{Var}(\mathbf{y}) = \nabla\nabla^{\rm T} A(\boldsymbol\theta) d(\tau). \,\!</math>
For scalar <math>\mathbf{y}</math> and <math>\boldsymbol\theta</math>, this reduces to
:<math> \operatorname{Var}(y) = A''(\theta) d(\tau). \,\!</math>
The link function provides the relationship between the linear predictor and the [[Expected value|mean]] of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined ''canonical'' link function which is derived from the exponential of the response's [[density function]]. However, in some cases it makes sense to try to match the [[Domain of a function|___domain]] of the link function to the [[range of a function|range]] of the distribution function's mean, or use a non-canonical link function for algorithmic purposes, for example [[Probit model#Gibbs sampling|Bayesian probit regression]].
When using a distribution function with a canonical parameter <math>\theta,</math> the canonical link function is the function that expresses <math>\theta</math> in terms of <math>\mu,</math> i.e. <math>\theta = b(\mu).</math>
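As a concrete sketch (a standard textbook example, not specific to this article): for the Bernoulli distribution the canonical parameter is <math>\theta = \log(\mu/(1-\mu))</math>, so the canonical link is the logit and its inverse is the logistic function:

```python
# Sketch: canonical link for the Bernoulli distribution. The canonical
# parameter is theta = b(mu) = log(mu / (1 - mu)), i.e. the logit, and the
# inverse link (mean function) is the logistic function.

import math

def logit(mu):
    """Canonical link: theta = b(mu)."""
    return math.log(mu / (1 - mu))

def logistic(theta):
    """Inverse link: recovers the mean mu from theta."""
    return 1 / (1 + math.exp(-theta))

mu = 0.75
theta = logit(mu)
print(theta)            # log(3) ~ 1.0986
print(logistic(theta))  # round-trips back to 0.75
```

The link and its inverse are bijections between the mean's range <math>(0,1)</math> and the whole real line, which is exactly the matching of domains discussed above.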
Following is a table of several exponential-family distributions in common use and the data they are typically used for, along with the canonical link functions and their inverses (sometimes referred to as the mean function, as done here).
{| class="wikitable"
|+ Common distributions with typical uses and canonical link functions
! Distribution !! Support of distribution !! Typical uses !! Link name !! Link function, <math>\mathbf{X}\boldsymbol{\beta}=g(\mu)\,\!</math> !! Mean function
|-
| [[normal distribution|Normal]]
| rowspan="2" |real: <math>(-\infty,+\infty)</math> || rowspan="2" |Linear-response data || rowspan="2" | Identity
| rowspan="2" |<math>\mathbf{X}\boldsymbol{\beta}=\mu\,\!</math> || rowspan="2" | <math>\mu=\mathbf{X}\boldsymbol{\beta}\,\!</math>
|-
| [[Laplace distribution|Laplace]]
|-
| [[exponential distribution|Exponential]]
| real: <math>(0,+\infty)</math> || Exponential-response data || Negative inverse
| <math>\mathbf{X}\boldsymbol{\beta}=-\mu^{-1}\,\!</math> || <math>\mu=-(\mathbf{X}\boldsymbol{\beta})^{-1}\,\!</math>
|}
where ''μ'' is a positive number denoting the expected number of events. If ''p'' represents the proportion of observations with at least one event, its complement
:<math>1 - p = \operatorname{P}(\text{no events}) = \exp(-\mu), \,</math>
and then
:<math>-\log(1 - p) = \mu. \,</math>
A linear model requires the response variable to take values over the entire real line. Since ''μ'' must be positive, we can enforce that by taking the logarithm, and letting log(''μ'') be a linear model. This produces the "cloglog" transformation
:<math>\log(-\log(1 - p)) = \log(\mu). \,</math>
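A short numerical sketch of this transformation (illustrative, following the Poisson-count motivation above), showing that the cloglog of the event proportion recovers log(''μ'') and that the transform is invertible:

```python
# Sketch: complementary log-log ("cloglog") transform. When the count of
# events is Poisson(mu), the proportion with at least one event is
# p = 1 - exp(-mu), and then log(-log(1 - p)) = log(mu), which maps
# probabilities in (0, 1) onto the whole real line.

import math

def cloglog(p):
    """cloglog transform of a probability p in (0, 1)."""
    return math.log(-math.log(1 - p))

def inv_cloglog(x):
    """Inverse transform: p = 1 - exp(-exp(x))."""
    return 1 - math.exp(-math.exp(x))

mu = 2.0
p = 1 - math.exp(-mu)             # proportion with at least one event
print(cloglog(p))                 # equals log(mu) = log(2)
print(inv_cloglog(math.log(mu)))  # recovers p
```

Because its image is the whole real line, the cloglog of ''p'' can be modelled directly as a linear predictor, which is how this link is used in practice.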
The standard GLM assumes that the observations are [[uncorrelated]]. Extensions have been developed to allow for [[correlation]] between observations, as occurs for example in [[longitudinal studies]] and clustered designs:
* '''[[Generalized estimating equation]]s''' (GEEs) allow for the correlation between observations without the use of an explicit probability model for the origin of the correlations, so there is no explicit [[likelihood]]. They are suitable when the [[random effects]] and their variances are not of inherent interest, as they allow for the correlation without explaining its origin. The focus is on estimating the average response over the population ("population-averaged" effects) rather than the regression parameters that would enable prediction of the effect of changing one or more components of '''X''' on a given individual. GEEs are usually used in conjunction with [[Huber–White standard errors]].<ref>{{cite journal
|title = Models for Longitudinal Data: A Generalized Estimating Equation Approach |first1 = Scott L. |last1 = Zeger |last2 = Liang |first2 = Kung-Yee |last3 = Albert |first3 = Paul S. |author-link1=Scott Zeger |author-link2=Kung-Yee Liang |journal = Biometrics |volume = 44 |year = 1988 |pages = 1049–1060 |issue = 4
|doi = 10.2307/2531734
|pmid = 3233245
}}</ref>