Generalized linear model

{{Short description|Class of statistical models}}
{{Distinguish|General linear model|Generalized least squares}}
{{Regression bar}}
 
Ordinary linear regression predicts the [[expected value]] of a given unknown quantity (the ''response variable'', a [[random variable]]) as a [[linear combination]] of a set of observed values (''predictors''). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e. a ''linear-response model''). This is appropriate when the response variable can vary, to a good approximation, indefinitely in either direction, or more generally for any quantity that only varies by a relatively small amount compared to the variation in the predictive variables, e.g. human heights.
 
However, these assumptions are inappropriate for some types of response variables. For example, in cases where the response variable is expected to be always positive and varying over a wide range, constant input changes lead to geometrically (i.e. exponentially) varying, rather than constantly varying, output changes. As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10-degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over differently sized beaches. More specifically, the problem is that if the model is used to predict the new attendance after a temperature drop of 10 degrees for a beach that regularly receives 50 beachgoers, it would predict an impossible attendance value of −950. Logically, a more realistic model would instead predict a constant ''rate'' of increased beach attendance (e.g. an increase of 10 degrees leads to a doubling in beach attendance, and a drop of 10 degrees leads to a halving in attendance). Such a model is termed an ''exponential-response model'' (or ''[[log-linear model]]'', since the [[logarithm]] of the response is predicted to vary linearly).
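The beach-attendance example can be sketched numerically. The function and numbers below are hypothetical illustrations (a doubling per 10 degrees, a 50-person baseline), not from any fitted model; they only show why an exponential-response model can never predict negative attendance:

```python
def expected_attendance(temp, baseline=50.0, base_temp=25.0, doubling_degrees=10.0):
    """Hypothetical log-linear model: log(attendance) is linear in temperature,
    so a fixed temperature change multiplies attendance by a fixed factor
    (here, doubling per +10 degrees) instead of adding a fixed amount."""
    return baseline * 2.0 ** ((temp - base_temp) / doubling_degrees)

print(expected_attendance(25.0))  # 50.0  (baseline)
print(expected_attendance(15.0))  # 25.0  (halved; never negative, unlike -950)
print(expected_attendance(35.0))  # 100.0 (doubled)
```

However small the beach, a 10-degree drop halves the prediction rather than subtracting a fixed 1,000 visitors.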
 
Similarly, a model that predicts a probability of making a yes/no choice (a [[Bernoulli distribution|Bernoulli variable]]) is even less suitable as a linear-response model, since probabilities are bounded on both ends (they must be between 0 and 1). Imagine, for example, a model that predicts the likelihood of a given person going to the beach as a function of temperature. A reasonable model might predict, for example, that a change in 10 degrees makes a person two times more or less likely to go to the beach. But what does "twice as likely" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.). Rather, it is the ''[[odds ratio|odds]]'' that are doubling: from 2:1 odds, to 4:1 odds, to 8:1 odds, etc. Such a model is a ''log-odds or [[Logistic regression|logistic]] model''.
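The distinction between doubling a probability and doubling the odds can be made concrete. This is a minimal sketch of the odds arithmetic described above (the helper name is ours):

```python
def double_odds(p):
    """Double the odds p/(1-p) and convert back to a probability.
    Unlike doubling p itself, the result always stays in (0, 1)."""
    odds = p / (1.0 - p)
    new_odds = 2.0 * odds       # 2:1 -> 4:1 -> 8:1, etc.
    return new_odds / (1.0 + new_odds)

print(double_odds(0.5))   # 1:1 odds -> 2:1 odds, i.e. p = 2/3
print(double_odds(0.75))  # 3:1 odds -> 6:1 odds, i.e. p = 6/7, still below 1
```

Doubling p = 0.75 directly would give the impossible value 1.5; doubling the odds gives 6/7 ≈ 0.857.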
: <math> f_Y(y \mid \theta, \tau) = h(y,\tau) \exp \left(\frac{b(\theta)T(y) - A(\theta)}{d(\tau)} \right). \,\!</math>
 
<math>\boldsymbol\theta</math> is related to the mean of the distribution. If <math>\mathbf{b}(\boldsymbol\theta)</math> is the identity function, then the distribution is said to be in [[canonical form]] (or ''natural form''). Note that any distribution can be converted to canonical form by rewriting <math>\boldsymbol\theta</math> as <math>\boldsymbol\theta'</math> and then applying the transformation <math>\boldsymbol\theta = \mathbf{b}(\boldsymbol\theta')</math>. It is always possible to express <math>A(\boldsymbol\theta)</math> in terms of the new parametrization, even if <math>\mathbf{b}(\boldsymbol\theta')</math> is not a [[one-to-one function]]; see comments in the page on [[exponential families]].

If, in addition, <math>\mathbf{T}(\mathbf{y})</math> and <math>\mathbf{b}(\boldsymbol\theta)</math> are the identity, then <math>\boldsymbol\theta</math> is called the ''canonical parameter'' (or ''natural parameter'') and is related to the mean through
:<math> \boldsymbol\mu = \operatorname{E}(\mathbf{y}) = \nabla_{\boldsymbol{\theta}} A(\boldsymbol\theta). \,\!</math>
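As a standard worked example (not derived in this section), the [[Bernoulli distribution]] with mean <math>\mu</math> fits this form with <math>h(y,\tau) = 1</math>, <math>T(y) = y</math>, <math>b(\theta) = \theta</math>, and <math>d(\tau) = 1</math>:
:<math> f_Y(y \mid \mu) = \mu^y (1-\mu)^{1-y} = \exp\left( y \log\frac{\mu}{1-\mu} + \log(1-\mu) \right), </math>
so the canonical parameter is <math>\theta = \log\frac{\mu}{1-\mu}</math>, <math>A(\theta) = \log(1 + e^\theta)</math>, and indeed
:<math> \frac{dA}{d\theta} = \frac{e^\theta}{1 + e^\theta} = \mu. </math>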
 
The link function provides the relationship between the linear predictor and the [[Expected value|mean]] of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined ''canonical'' link function which is derived from the exponential-family form of the response's [[density function]]. However, in some cases it makes sense to try to match the [[Domain of a function|___domain]] of the link function to the [[range of a function|range]] of the distribution function's mean, or use a non-canonical link function for algorithmic purposes, for example [[Probit model#Gibbs sampling|Bayesian probit regression]].
 
When using a distribution function with a canonical parameter <math>\theta,</math> the canonical link function is the function that expresses <math>\theta</math> in terms of <math>\mu,</math> i.e. <math>\theta = g(\mu).</math> For the most common distributions, the mean <math>\mu</math> is one of the parameters in the standard form of the distribution's [[density function]], and then <math>g(\mu)</math> is the function as defined above that maps the density function into its canonical form. When using the canonical link function, <math>g(\mu) = \theta = \mathbf{X}\boldsymbol{\beta},</math> which allows <math>\mathbf{X}^{\rm T} \mathbf{Y}</math> to be a [[sufficiency (statistics)|sufficient statistic]] for <math>\boldsymbol{\beta}</math>.
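For the Bernoulli family, the canonical link is the logit. The following sketch (our own helper names, standard formulas) shows the link and its inverse round-tripping between the mean space <math>(0,1)</math> and the unrestricted scale of the linear predictor:

```python
import math

def logit(mu):
    """Canonical link for the Bernoulli family: theta = g(mu) = log(mu/(1-mu))."""
    return math.log(mu / (1.0 - mu))

def inv_logit(theta):
    """Mean function (inverse link): mu = e^theta / (1 + e^theta)."""
    return 1.0 / (1.0 + math.exp(-theta))

# The linear predictor x^T beta can be any real number; the inverse link
# maps it back into the distribution's mean space (0, 1).
eta = 1.25  # a hypothetical value of the linear predictor
mu = inv_logit(eta)
assert 0.0 < mu < 1.0
assert abs(logit(mu) - eta) < 1e-12  # g and its inverse round-trip
```

The identity and log links in the table below round-trip in the same way for the normal and Poisson families.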
 
Following is a table of several exponential-family distributions in common use and the data they are typically used for, along with the canonical link functions and their inverses (sometimes referred to as the mean function, as done here).
 
{| class="wikitable" style="background:white;"
|+ Common distributions with typical uses and canonical link functions
! Distribution !! Support of distribution !! Typical uses !! Link name !! Link function, <math>\mathbf{X}\boldsymbol{\beta}=g(\mu)\,\!</math> !! Mean function
|-
| [[normal distribution|Normal]]
| rowspan="2" |real: <math>(-\infty,+\infty)</math> || rowspan="2" |Linear-response data || rowspan="2" | Identity
| rowspan="2" |<math>\mathbf{X}\boldsymbol{\beta}=\mu\,\!</math> || rowspan="2" | <math>\mu=\mathbf{X}\boldsymbol{\beta}\,\!</math>
|-
| [[Laplace distribution|Laplace]]
|-
| [[exponential distribution|Exponential]]