Error correction model

Several methods are known in the literature for estimating a refined dynamic model as described above. Among these are the Engle and Granger 2-step approach, one-step estimation of the ECM, and the vector-based VECM estimated with [[Johansen test|Johansen's method]].
 
===Engle and Granger 2-step approach===
The first step of this method is to pretest the individual time series one uses in order to confirm that they are [[Stationary process|non-stationary]] in the first place. This can be done by standard [[unit root]] testing such as the [[Augmented Dickey–Fuller test]].
Take the case of two different series <math>x_t</math> and <math>y_t</math>. If both are I(0), standard regression analysis will be valid. If they are integrated of a different order, e.g. one being I(1) and the other being I(0), one has to transform the model.
 
If they are both integrated to the same order (commonly I(1)), we can estimate an ECM model of the form

: <math> A(L) \, \Delta y_t = \gamma + B(L) \, \Delta x_t + \alpha (y_t - \beta_0 - \beta_1 x_t) + \nu_t. </math>

''If'' both variables are integrated and this ECM exists, they are cointegrated by the Engle–Granger representation theorem.
 
The second step is then to estimate the model using [[ordinary least squares]]: <math> y_t = \beta_0 + \beta_1 x_t + \varepsilon_t </math>
If the regression is not spurious as determined by the test criteria described above, ordinary least squares will not only be valid, but in fact super-[[consistent estimator|consistent]] (Stock, 1987).
Then the predicted residuals <math>\hat{\varepsilon}_t = y_t - \beta_0 - \beta_1 x_t</math> from this regression are saved and used in a regression of differenced variables plus a lagged error term

: <math> A(L) \, \Delta y_t = \gamma + B(L) \, \Delta x_t + \alpha \hat{\varepsilon}_{t-1} + \nu_t. </math>
 
One can then test for cointegration using a standard [[t-statistic]] on <math>\alpha</math>.
While this approach is easy to apply, it nevertheless has numerous problems:
 
===VECM===
The Engle–Granger approach as described above suffers from a number of weaknesses. Namely, it is restricted to a single equation with one variable designated as the dependent variable, explained by another variable that is assumed to be weakly exogenous for the parameters of interest. It also relies on pretesting the time series to find out whether variables are I(0) or I(1). These weaknesses can be addressed through the use of Johansen's procedure. Its advantages include that pretesting is not necessary, there can be numerous cointegrating relationships, all variables are treated as endogenous, and tests relating to the long-run parameters are possible. The resulting model is known as a vector error correction model (VECM), as it adds error correction features to a multi-factor model known as [[vector autoregression]] (VAR). The procedure is done as follows:
 
* Step 1: estimate an unrestricted VAR involving potentially non-stationary variables
 
===An example of ECM===
The idea of cointegration may be demonstrated in a simple macroeconomic setting. Suppose consumption <math>C_t</math> and disposable income <math>Y_t</math> are macroeconomic time series that are related in the long run (see [[Permanent income hypothesis]]). Specifically, let the [[average propensity to consume]] be 90%, that is, in the long run <math>C_t = 0.9 Y_t</math>. From the econometrician's point of view, this long-run relationship (i.e. cointegration) exists if errors from the regression <math>C_t = \beta Y_t + \varepsilon_t</math> are a [[Stationary process|stationary]] series, although <math>Y_t</math> and <math>C_t</math> are non-stationary. Suppose also that if <math>Y_t</math> suddenly changes by <math>\Delta Y_t</math>, then <math>C_t</math> changes by <math>\Delta C_t = 0.5 \Delta Y_t</math>, that is, the [[marginal propensity to consume]] equals 50%. Our last assumption is that the gap between current and equilibrium consumption decreases each period by 20%.
 
In this setting a change <math>\Delta C_t = C_t - C_{t-1}</math> in consumption level can be modelled as <math>\Delta C_t = 0.5 \Delta Y_t - 0.2 (C_{t-1} - 0.9 Y_{t-1}) + \varepsilon_t</math>. The first term on the RHS describes the short-run impact of a change in <math>Y_t</math> on <math>C_t</math>, the second term explains long-run gravitation towards the equilibrium relationship between the variables, and the third term reflects random shocks that the system receives (e.g. shocks of consumer confidence that affect consumption). To see how the model works, consider two kinds of shocks: permanent and transitory (temporary). For simplicity, let <math>\varepsilon_t</math> be zero for all ''t''. Suppose in period ''t''&nbsp;−&nbsp;1 the system is in equilibrium, i.e. <math>C_{t-1} = 0.9 Y_{t-1}</math>. Suppose that in period ''t'', <math>Y_t</math> increases by 10 and then returns to its previous level. Then <math>C_t</math> first (in period ''t'') increases by 5 (half of 10), but after the second period <math>C_t</math> begins to decrease and converges to its initial level. In contrast, if the shock to <math>Y_t</math> is permanent, then <math>C_t</math> slowly converges to a value that exceeds the initial <math>C_{t-1}</math> by 9.
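The permanent-shock case can be checked numerically by iterating the equation above with the stated parameters (the function name and the particular income path are our own choices for illustration):

```python
def simulate_consumption(Y, C0):
    """Iterate dC_t = 0.5*dY_t - 0.2*(C_{t-1} - 0.9*Y_{t-1}) with no shocks."""
    C = [C0]
    for t in range(1, len(Y)):
        dC = 0.5 * (Y[t] - Y[t - 1]) - 0.2 * (C[t - 1] - 0.9 * Y[t - 1])
        C.append(C[t - 1] + dC)
    return C

# Permanent shock: income jumps from 100 to 110 in period 1 and stays there.
Y = [100] + [110] * 99
C = simulate_consumption(Y, C0=90.0)  # start in equilibrium: C = 0.9 * 100

print(round(C[1], 1))   # immediate response: 90 + 0.5*10 = 95.0
print(round(C[-1], 1))  # long-run level: 0.9 * 110 = 99.0
```

Consumption jumps by half the income shock on impact and then closes 20% of the remaining gap each period, converging to the new equilibrium 99, i.e. 9 above the initial level of 90.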
 
This structure is common to all ECM models. In practice, econometricians often first estimate the cointegration relationship (equation in levels), and then insert it into the main model (equation in differences).
*{{cite journal|last1=Phillips|first1=Peter C.B.|title=Understanding Spurious Regressions in Econometrics|journal=Cowles Foundation Discussion Papers 757|date=1985|url=http://cowles.yale.edu/sites/default/files/files/pub/d07/d0757.pdf|publisher=Cowles Foundation for Research in Economics, Yale University}}
* Sargan, J. D. (1964). "Wages and Prices in the United Kingdom: A Study in Econometric Methodology", 16, 25–54. in ''Econometric Analysis for National Economic Planning'', ed. by P. E. Hart, G. Mills, and J. N. Whittaker. London: Butterworths
*{{cite journal|last1=Yule|first1=George Udny|title=Why do we sometimes get nonsense correlations between time series? A study in sampling and the nature of time-series|journal=Journal of the Royal Statistical Society|date=1926|volume=89|issue=1|pages=1–63|jstor=2341482 }}
 
[[Category:Error detection and correction]]