Information matrix test

In [[econometrics]], the '''information matrix test''' is used to determine whether a [[regression model]] is [[Statistical model specification|misspecified]]. The test was developed by [[Halbert White]],<ref>{{Cite journal |last1=White |first1=Halbert |title=Maximum Likelihood Estimation of Misspecified Models |journal=[[Econometrica]] |date=1982 |volume=50 |issue=1 |pages=1–25 |doi=10.2307/1912526 |jstor=1912526 }}</ref> who observed that in a correctly specified model and under standard regularity assumptions, the [[Fisher information matrix]] can be expressed in either of two ways: as the [[outer product]] of the [[gradient]] of the log-likelihood function, or as a function of its [[Hessian matrix]].
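Under the usual regularity conditions this is the information matrix equality: with <math>\ell(\mathbf{\theta})</math> the log-likelihood and the expectation taken under the true model,
: <math>\mathbf{I}(\mathbf{\theta}) = \operatorname{E}\left[ \frac{\partial \ell(\mathbf{\theta})}{\partial \mathbf{\theta}} \frac{\partial \ell(\mathbf{\theta})}{\partial \mathbf{\theta}^{\mathsf{T}}} \right] = -\operatorname{E}\left[ \frac{\partial^2 \ell(\mathbf{\theta})}{\partial \mathbf{\theta} \, \partial \mathbf{\theta}^{\mathsf{T}}} \right],</math>
so a marked discrepancy between sample estimates of the two right-hand expressions is evidence of misspecification.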
 
Consider a linear model <math>\mathbf{y} = \mathbf{X} \mathbf{\beta} + \mathbf{u}</math>, where the errors <math>\mathbf{u}</math> are assumed to be distributed <math>\mathrm{N}(0, \sigma^2 \mathbf{I})</math>. If the parameters <math>\beta</math> and <math>\sigma^2</math> are stacked in the vector <math>\mathbf{\theta}^{\mathsf{T}} = \begin{bmatrix} \beta^{\mathsf{T}} & \sigma^2 \end{bmatrix}</math>, the resulting [[Likelihood function|log-likelihood function]] is
: <math>\ell(\mathbf{\theta}) = -\frac{n}{2} \log 2\pi - \frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \left( \mathbf{y} - \mathbf{X}\mathbf{\beta} \right)^{\mathsf{T}} \left( \mathbf{y} - \mathbf{X}\mathbf{\beta} \right)</math>

Because the expected Hessian is the negative of the expected outer product of the score, their sample counterparts should approximately cancel when the model is correctly specified. Writing <math>\ell_i(\mathbf{\theta})</math> for the contribution of the <math>i</math>-th observation to the log-likelihood, the test is therefore based on
: <math>\mathbf{\Delta}(\mathbf{\theta}) = \sum_{i=1}^n \left[ \frac{\partial^2 \ell_i(\mathbf{\theta}) }{ \partial \mathbf{\theta} \, \partial \mathbf{\theta}^{\mathsf{T}} } + \frac{\partial \ell_i(\mathbf{\theta}) }{ \partial \mathbf{\theta} } \frac{\partial \ell_i (\mathbf{\theta}) }{ \partial \mathbf{\theta}^{\mathsf{T}} } \right]</math>
 
where <math>\mathbf{\Delta} (\mathbf{\theta})</math> is an <math>(r \times r)</math> [[random matrix]] and <math>r</math> is the number of parameters. White showed that the elements of <math>n^{-1/2} \mathbf{\Delta} ( \mathbf{\hat{\theta}} )</math>, where <math>\mathbf{\hat{\theta}}</math> is the [[Maximum likelihood estimation|maximum likelihood estimate]], are asymptotically [[Normal distribution|normally distributed]] with zero means when the model is correctly specified.<ref>{{cite book |first=L. G. |last=Godfrey |author-link=Leslie G. Godfrey |title=Misspecification Tests in Econometrics |publisher=[[Cambridge University Press]] |year=1988 |isbn=0-521-26616-5 |pages=35–37 |url=https://books.google.com/books?id=apXgcgoy7OgC&pg=PA35 }}</ref> In small samples, however, the test generally performs poorly.<ref>{{cite journal |first=Chris |last=Orme |title=The Small-Sample Performance of the Information-Matrix Test |journal=[[Journal of Econometrics]] |volume=46 |issue=3 |year=1990 |pages=309–331 |doi=10.1016/0304-4076(90)90012-I }}</ref>
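The computation can be sketched for the normal linear model above. The following is a minimal NumPy illustration; the simulated data, sample size, and variable names are illustrative assumptions rather than part of White's treatment, and the score and Hessian contributions are the analytic ones for the Gaussian log-likelihood.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3

# Simulate a correctly specified normal linear model y = X beta + u
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 0.5, -0.25])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# MLE for this model: OLS coefficients and sigma^2 = RSS / n
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta_hat
sigma2_hat = u @ u / n

r = k + 1                      # parameters: beta (k of them) plus sigma^2
Delta = np.zeros((r, r))
for xi, ui in zip(X, u):
    # Score of observation i, evaluated at the MLE
    s = np.empty(r)
    s[:k] = xi * ui / sigma2_hat
    s[k] = -0.5 / sigma2_hat + ui**2 / (2 * sigma2_hat**2)

    # Hessian of observation i, evaluated at the MLE
    H = np.empty((r, r))
    H[:k, :k] = -np.outer(xi, xi) / sigma2_hat
    H[:k, k] = -xi * ui / sigma2_hat**2
    H[k, :k] = H[:k, k]
    H[k, k] = 0.5 / sigma2_hat**2 - ui**2 / sigma2_hat**3

    Delta += H + np.outer(s, s)

# Under correct specification the scaled elements should fluctuate around zero
print(Delta / np.sqrt(n))
</syntaxhighlight>

In practice the operational test statistic is a quadratic form in the distinct elements of <math>n^{-1/2} \mathbf{\Delta}(\mathbf{\hat{\theta}})</math>, weighted by a consistent estimate of their asymptotic covariance and compared with a [[chi-squared distribution]].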
 
== References ==
{{Reflist}}
 
== Further reading ==
* {{cite book |first1=W. |last1=Krämer |first2=H. |last2=Sonnberger |title=The Linear Regression Model Under Test |___location=Heidelberg |publisher=Physica-Verlag |year=1986 |isbn=3-7908-0356-1 |pages=105–110 |url=https://books.google.com/books?id=NSvqCAAAQBAJ&pg=PA105 }}
* {{cite book |first=Halbert |last=White |chapter=Information Matrix Testing |title=Estimation, Inference and Specification Analysis |___location=New York |publisher=Cambridge University Press |year=1994 |isbn=0-521-25280-6 |pages=300–344 |chapter-url=https://books.google.com/books?id=hnNpQSf7ZlAC&pg=PA300 }}
 
[[Category:Statistical tests]]