With a specified variance function and link function we can develop, as alternatives to the log-[[likelihood function]], the [[score function]], and the [[Fisher information]], a '''[[Quasi-likelihood]]''', a '''Quasi-score''', and the '''Quasi-Information'''. These allow for full inference about <math>\beta</math>.
:'''Quasi-Likelihood (QL)'''
Though called a [[Quasi-likelihood]], this is in fact a quasi-'''log'''-likelihood. The QL for one observation is
:<math>Q_i(\mu_i,y_i) = \int_{y_i}^{\mu_i} \frac{y_i-t}{\sigma^2 V(t)}\,dt</math>
And therefore the QL for all '''n''' observations is,
:<math>Q(\mu,y) = \sum_{i=1}^n Q_i(\mu_i,y_i) = \sum_{i=1}^n \int_{y_i}^{\mu_i} \frac{y_i-t}{\sigma^2 V(t)}\,dt</math>
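As a concrete check, take <math>V(t) = t</math> and <math>\sigma^2 = 1</math> (a Poisson-type variance function), for which the integral has the closed form <math>Q(\mu,y) = y\log(\mu/y) - (\mu - y)</math>. The sketch below (names and values are illustrative, standard library only) evaluates the integral numerically by the midpoint rule and compares it with the closed form:

```python
import math

def quasi_loglik(mu, y, V=lambda t: t, sigma2=1.0, steps=100_000):
    """Midpoint-rule evaluation of Q(mu, y) = integral from y to mu of
    (y - t) / (sigma2 * V(t)) dt.  V(t) = t mimics a Poisson-type variance."""
    h = (mu - y) / steps
    return sum((y - (y + (k + 0.5) * h)) / (sigma2 * V(y + (k + 0.5) * h))
               for k in range(steps)) * h

mu, y = 3.0, 5.0                                  # illustrative values
numeric = quasi_loglik(mu, y)
closed_form = y * math.log(mu / y) - (mu - y)     # closed form when V(t) = t
print(numeric, closed_form)
```

Any variance function can be supplied as <code>V</code>; only the closed-form comparison is specific to <math>V(t)=t</math>.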
From the '''QL''' we have the '''Quasi-Score'''
:'''Quasi-Score'''
Recall the [[score function]], '''U''', for data with log-likelihood <math>\operatorname{l}(\mu|y)</math> is
:<math>U = \frac{\partial l}{\partial \mu}.</math>
We obtain the Quasi-Score in an identical manner,
:<math>U = \frac{y-\mu}{\sigma^2V(\mu)}</math>
Noting that, for one observation, the score is
:<math>\frac{\partial Q}{\partial \mu} = \frac{y-\mu}{\sigma^2V(\mu)}</math>
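This relationship between the derivative of the QL and the quasi-score can be verified numerically. The sketch below (illustrative values; <math>V(\mu)=\mu</math> and <math>\sigma^2=1</math>) compares a central finite difference of the closed-form QL with <math>(y-\mu)/(\sigma^2 V(\mu))</math>:

```python
import math

def Q(mu, y):
    # Closed form of the QL when V(t) = t and sigma2 = 1 (illustrative model)
    return y * math.log(mu / y) - (mu - y)

mu, y, h = 3.0, 5.0, 1e-6
fd = (Q(mu + h, y) - Q(mu - h, y)) / (2 * h)   # central finite difference dQ/dmu
score = (y - mu) / mu                          # (y - mu) / (sigma2 * V(mu))
print(fd, score)
```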
The first two Bartlett equations are satisfied for the Quasi-Score, namely
:<math>\operatorname{E}[U] = 0</math> and
:<math>\operatorname{Var}[U] = -\operatorname{E}\left[\frac{\partial U}{\partial \mu}\right] = \frac{1}{\sigma^2 V(\mu)}.</math>
In addition, the quasi-score is linear in '''y'''.
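The Bartlett identities can be checked by simulation. The sketch below (all values illustrative) draws observations with mean <math>\mu</math> and variance <math>\sigma^2 V(\mu)</math> for the choice <math>V(\mu)=\mu</math>, and confirms that the sample mean of <math>U</math> is near zero while its sample variance is near <math>1/(\sigma^2 V(\mu))</math>:

```python
import random

random.seed(42)
mu, sigma2 = 4.0, 1.0
V = lambda m: m                        # Poisson-type variance function (illustrative)

# Only the first two moments matter for the Bartlett identities, so normal
# draws with E[y] = mu and Var[y] = sigma2 * V(mu) serve as a stand-in.
ys = [random.gauss(mu, (sigma2 * V(mu)) ** 0.5) for _ in range(200_000)]
us = [(y - mu) / (sigma2 * V(mu)) for y in ys]

mean_U = sum(us) / len(us)
var_U = sum((u - mean_U) ** 2 for u in us) / len(us)
minus_E_dU = 1.0 / (sigma2 * V(mu))    # -E[dU/dmu] for this model
print(mean_U, var_U, minus_E_dU)
```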
Ultimately the goal is to find information about the parameters of interest <math>\beta</math>. Both the Quasi-Score and the QL are actually functions of <math>\beta</math>. Recall that <math>\mu = g^{-1}(\eta)</math> and <math>\eta = X\beta</math>; therefore,
:<math>\mu = g^{-1}(X\beta).</math>
:'''Quasi-Information'''
The '''quasi-information''' is similar to the [[Fisher information]],
:<math>i_\beta = -\frac{\partial U}{\partial \beta}</math>
The QL, QS and QI all provide the building blocks for inference about the parameters of interest, so we treat each of them as a function of <math>\beta</math>:
:<math>U(\beta) = \begin{bmatrix} U_1(\beta)\\
U_2(\beta)\\
\vdots\\
\vdots\\
U_p(\beta)
\end{bmatrix} = D^TV^{-1}\frac{(y-\mu)}{\sigma^2}</math>
where
:<math>\underbrace{D}_{n \times p} = \begin{bmatrix} \frac{\partial \mu_1}{\partial \beta_1} &\cdots&\cdots& \frac{\partial \mu_1}{\partial \beta_p} \\
\frac{\partial \mu_2}{\partial \beta_1} &\cdots&\cdots& \frac{\partial \mu_2}{\partial \beta_p} \\
\vdots& & &\vdots\\
\frac{\partial \mu_n}{\partial \beta_1} &\cdots&\cdots& \frac{\partial \mu_n}{\partial \beta_p} \end{bmatrix}</math>
<math>\underbrace{V}_{n \times n} = \operatorname{diag}(V(\mu_1),V(\mu_2),\ldots,V(\mu_n))</math>
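Putting the pieces together, the quasi-score <math>U(\beta) = D^{\rm T}V^{-1}(y-\mu)/\sigma^2</math> and the expected quasi-information <math>D^{\rm T}V^{-1}D/\sigma^2</math> can be assembled explicitly for a small example. The sketch below uses hypothetical data with a log link, <math>\mu_i = \exp(x_i^{\rm T}\beta)</math>, and variance function <math>V(\mu)=\mu</math>, so that <math>D_{ij} = \mu_i x_{ij}</math>:

```python
import math

# Hypothetical data: n = 3 observations, p = 2 parameters.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
y = [1.0, 3.0, 9.0]
beta = [0.0, 1.0]
sigma2 = 1.0

n, p = len(X), len(beta)
mu = [math.exp(sum(X[i][j] * beta[j] for j in range(p))) for i in range(n)]
D = [[mu[i] * X[i][j] for j in range(p)] for i in range(n)]  # D_ij = dmu_i/dbeta_j
Vdiag = list(mu)                                            # V(mu_i) = mu_i

# Quasi-score U(beta) = D^T V^{-1} (y - mu) / sigma2
U = [sum(D[i][j] * (y[i] - mu[i]) / Vdiag[i] for i in range(n)) / sigma2
     for j in range(p)]

# Expected quasi-information i_beta = D^T V^{-1} D / sigma2
I = [[sum(D[i][j] * D[i][k] / Vdiag[i] for i in range(n)) / sigma2
      for k in range(p)] for j in range(p)]
print(U, I)
```

Setting the quasi-score to zero in <math>\beta</math> and iterating with the quasi-information is how estimates would be obtained in practice; the sketch only evaluates both quantities at a fixed <math>\beta</math>.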
===Non-Parametric Regression Analysis===