{{Short description|Partial correlation of a time series with its lagged values}}
[[File:Partial autocorrelation function.png|thumb|Partial autocorrelation function of [[Lake Huron]]'s depth<ref>{{cite book |last1=Brockwell |first1=Peter J. |last2=Davis |first2=Richard A. |title=Introduction to Time Series and Forecasting |date=2016 |publisher=Springer International Publishing |isbn=978-3319298528 |page=132 |edition=Third |doi=10.1007/978-3-319-29854-2 |language=English |chapter=Modeling and Forecasting with ARMA Processes}}</ref>]]
In [[time series analysis]], the '''partial autocorrelation function''' ('''PACF''') gives the [[partial correlation]] of a stationary time series with its own lagged values, regressed on the values of the time series at all shorter lags. It contrasts with the [[autocorrelation function]], which does not control for other lags.
 
This function plays an important role in data analysis aimed at identifying the extent of the lag in an [[autoregressive model|autoregressive (AR) model]]. The use of this function was introduced as part of the [[Box–Jenkins]] approach to time series modelling, whereby plotting the partial autocorrelation function one can determine the appropriate lag '''p''' in an AR('''p''') [[autoregressive model|model]] or in an extended [[Autoregressive integrated moving average|ARIMA]]('''p''','''d''','''q''') model.
 
==Definition==
 
Given a time series <math>z_t</math>, the partial autocorrelation of lag <math>k</math>, denoted <math>\phi_{k,k}</math>, is the [[autocorrelation]] between <math>z_t</math> and <math>z_{t+k}</math> with the linear dependence of <math>z_t</math> on <math>z_{t+1}</math> through <math>z_{t+k-1}</math> removed. Equivalently, it is the autocorrelation between <math>z_t</math> and <math>z_{t+k}</math> that is not accounted for by lags <math>1</math> through <math>k-1</math>, inclusive.<ref name=":3">{{Cite web |title=6.4.4.6.3. Partial Autocorrelation Plot |url=https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4463.htm |access-date=2022-07-14 |website=www.itl.nist.gov}}</ref><math display="block">\phi_{1,1} = \operatorname{corr}(z_{t+1}, z_{t}),\text{ for }k= 1,</math><math display="block">\phi_{k,k} = \operatorname{corr}(z_{t+k} - \hat{z}_{t+k},\, z_{t} - \hat{z}_{t}),\text{ for }k\geq 2,</math>where <math>\hat{z}_{t+k}</math> and <math>\hat{z}_t</math> are [[linear combination]]s of <math>\{z_{t+1}, z_{t+2}, ..., z_{t+k-1}\}</math> that minimize the [[mean squared error]] of <math>z_{t+k}</math> and <math>z_t</math> respectively. For [[stationary process]]es, the coefficients in <math>\hat{z}_{t+k}</math> and <math>\hat{z}_t</math> are the same, but reversed:<ref name=":4">{{Cite book |last1=Shumway |first1=Robert H. |url=http://link.springer.com/10.1007/978-3-319-52452-8 |title=Time Series Analysis and Its Applications: With R Examples |last2=Stoffer |first2=David S. |date=2017 |publisher=Springer International Publishing |isbn=978-3-319-52451-1 |series=Springer Texts in Statistics |___location=Cham |pages=97–99 |language=en |doi=10.1007/978-3-319-52452-8}}</ref> <math display="block">\hat{z}_{t+k}=\beta_1z_{t+k-1}+\cdots+\beta_{k-1}z_{t+1}\qquad\text{and}\qquad\hat{z}_t=\beta_1z_{t+1}+\cdots+\beta_{k-1}z_{t+k-1}.</math><!-- Think of this as reversing the direction of time. Stationary processes are invariant under shifts and reversal of time. -->
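This definition can be applied directly to data: regress <math>z_{t+k}</math> and <math>z_t</math> on the intervening values and correlate the two residual series. The following Python sketch illustrates the idea (the helper name <code>pacf_by_regression</code> is an illustrative choice and NumPy is assumed; it demonstrates the definition rather than serving as an optimized estimator):
<syntaxhighlight lang="python">
import numpy as np

def pacf_by_regression(z, k):
    """Partial autocorrelation at lag k, straight from the definition:
    regress z_{t+k} and z_t on the intervening values z_{t+1}, ...,
    z_{t+k-1} and correlate the two residual series."""
    z = np.asarray(z, dtype=float)
    if k == 1:
        return np.corrcoef(z[:-1], z[1:])[0, 1]
    n = len(z) - k
    # One row per t: the intervening values, plus an intercept column.
    X = np.column_stack([np.ones(n),
                         np.array([z[t + 1:t + k] for t in range(n)])])
    resid = []
    for y in (z[k:], z[:n]):  # z_{t+k} and z_t, respectively
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid.append(y - X @ beta)
    return np.corrcoef(resid[0], resid[1])[0, 1]
</syntaxhighlight>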
 
== Calculation ==
<math display="block">\phi_{11} = \operatorname{corr}(z_{t+1}, z_{t}),\text{ for }k= 1,</math><math display="block">\phi_{kk} = \operatorname{corr}(z_{t+k} - \hat{z}_{t+k},\, z_{t} - \hat{z}_{t}),\text{ for }k\geq 2.</math>
 
The theoretical partial autocorrelation function of a stationary time series can be calculated by using the Durbin–Levinson Algorithm:<math display="block">\phi_{n,n} = \frac{\rho(n) - \sum_{k=1}^{n-1} \phi_{n-1,k} \rho(n-k)}{1 - \sum_{k=1}^{n-1} \phi_{n-1,k} \rho(k)}</math>where <math>\phi_{n,k} = \phi_{n-1,k} - \phi_{n,n} \phi_{n-1,n-k}</math> for <math>1 \leq k \leq n-1</math> and <math>\rho(n)</math> is the autocorrelation function.<ref>{{Cite journal |last=Durbin |first=J. |date=1960 |title=The Fitting of Time-Series Models |url=https://www.jstor.org/stable/1401322 |journal=Revue de l'Institut International de Statistique / Review of the International Statistical Institute |volume=28 |issue=3 |pages=233–244 |doi=10.2307/1401322 |jstor=1401322 |issn=0373-1138 |url-access=subscription}}</ref><ref>{{Cite book |last1=Shumway |first1=Robert H. |url=http://link.springer.com/10.1007/978-3-319-52452-8 |title=Time Series Analysis and Its Applications: With R Examples |last2=Stoffer |first2=David S. |date=2017 |publisher=Springer International Publishing |isbn=978-3-319-52451-1 |series=Springer Texts in Statistics |___location=Cham |pages=103–104 |language=en |doi=10.1007/978-3-319-52452-8}}</ref><ref name=":1">{{Cite book |last=Enders |first=Walter |title=Applied econometric time series |date=2004 |publisher=J. Wiley |isbn=0-471-23065-0 |edition=2nd |___location=Hoboken, NJ |pages=65–67 |language=en |oclc=52387978}}</ref>
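As a sketch of how the recursion proceeds, the following Python function (the name <code>durbin_levinson_pacf</code> is illustrative; NumPy is assumed) computes <math>\phi_{n,n}</math> for <math>n = 1, \ldots</math> from a given autocorrelation sequence:
<syntaxhighlight lang="python">
import numpy as np

def durbin_levinson_pacf(rho, max_lag):
    """PACF phi_{n,n} for n = 1..max_lag from autocorrelations
    rho[0..max_lag], where rho[0] == 1."""
    rho = np.asarray(rho, dtype=float)
    phi = np.zeros((max_lag + 1, max_lag + 1))
    phi[1, 1] = rho[1]
    for n in range(2, max_lag + 1):
        # phi_{n,n} = (rho(n) - sum phi_{n-1,k} rho(n-k)) /
        #             (1 - sum phi_{n-1,k} rho(k)),  k = 1..n-1
        num = rho[n] - phi[n - 1, 1:n] @ rho[n - 1:0:-1]
        den = 1.0 - phi[n - 1, 1:n] @ rho[1:n]
        phi[n, n] = num / den
        # phi_{n,k} = phi_{n-1,k} - phi_{n,n} * phi_{n-1,n-k}
        phi[n, 1:n] = phi[n - 1, 1:n] - phi[n, n] * phi[n - 1, n - 1:0:-1]
    return np.diag(phi)[1:]  # phi_{1,1}, ..., phi_{max_lag,max_lag}
</syntaxhighlight>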
 
The formula above can be used with sample autocorrelations to find the sample partial autocorrelation function of any given time series.<ref name=":0">{{Cite book |last1=Box |first1=George E. P. |last2=Reinsel |first2=Gregory C. |last3=Jenkins |first3=Gwilym M. |title=Time Series Analysis: Forecasting and Control |publisher=John Wiley |year=2008 |isbn=9780470272848 |edition=4th |___location=Hoboken, New Jersey |language=en}}</ref><ref>{{Cite book |last1=Brockwell |first1=Peter J. |last2=Davis |first2=Richard A. |title=Time Series: Theory and Methods |publisher=Springer |year=1991 |isbn=9781441903198 |edition=2nd |___location=New York, NY |pages=102, 243–245 |language=en}}</ref>
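As an illustration, the sample autocorrelations (here the standard biased estimator) can be computed and fed into the recursion sketched above; the helper name <code>sample_acf</code> is again an illustrative choice:
<syntaxhighlight lang="python">
import numpy as np

def sample_acf(z, max_lag):
    """Biased sample autocorrelations rho(0), ..., rho(max_lag)."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    c0 = z @ z
    return np.array([z[:len(z) - h] @ z[h:] / c0
                     for h in range(max_lag + 1)])

# Sample PACF of a series z up to lag 10, reusing the recursion above:
# sample_pacf = durbin_levinson_pacf(sample_acf(z, 10), 10)
</syntaxhighlight>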
 
== Examples ==
 
The following table summarizes the partial autocorrelation function of different models:<ref name=":1" /><ref name=":2">{{Cite book |last=Das |first=Panchanan |title=Econometrics in Theory and Practice: Analysis of Cross Section, Time Series and Panel Data with Stata 15.1 |publisher=Springer |year=2019 |isbn=978-981-329-019-8 |___location=Singapore |pages=294–299 |language=en |oclc=1119630068}}</ref>
{| class="wikitable"
!Model
!PACF
|-
|[[White noise]]
|The partial autocorrelation is 0 for all lags.
|-
|[[Autoregressive model]]
|The partial autocorrelation for an AR(''p'') model is nonzero for lags less than or equal to ''p'' and 0 for lags greater than ''p''.
|-
|rowspan=2|[[Moving-average model]]
|If <math>\phi_{1,1} > 0</math>, the partial autocorrelation [[Oscillation (mathematics)|oscillates]] to 0.
|-
|If <math>\phi_{1,1} < 0</math>, the partial autocorrelation [[Exponential decay|geometrically]] decays to 0.
|-
|[[Autoregressive–moving-average model]]
|An ARMA(''p'', ''q'') model's partial autocorrelation geometrically decays to 0 but only after lags greater than ''p''.
|}
 
The behavior of the partial autocorrelation function mirrors that of the autocorrelation function for autoregressive and moving-average models. For example, the partial autocorrelation function of an AR(''p'') series cuts off after lag ''p'', just as the autocorrelation function of an MA(''q'') series cuts off after lag ''q''. In addition, the autocorrelation function of an AR(''p'') process tails off just like the partial autocorrelation function of an MA(''q'') process.<ref name=":4" />
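This mirrored behavior can be checked by simulation. The following Python sketch assumes the <code>statsmodels</code> library is installed and uses its <code>ArmaProcess</code> and <code>pacf</code> utilities; the coefficients are arbitrary illustrative choices:
<syntaxhighlight lang="python">
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(0)

# ArmaProcess takes lag polynomials, so AR coefficients enter with
# opposite sign: here z_t = 0.4 z_{t-1} + 0.25 z_{t-2} + 0.15 z_{t-3} + e_t.
ar3 = ArmaProcess(ar=[1, -0.4, -0.25, -0.15], ma=[1])
ma2 = ArmaProcess(ar=[1], ma=[1, 0.7, 0.4])

kwargs = dict(nsample=2000, burnin=200, distrvs=rng.standard_normal)
z_ar = ar3.generate_sample(**kwargs)
z_ma = ma2.generate_sample(**kwargs)

# The AR(3) sample PACF should cut off after lag 3, while the MA(2)
# sample PACF tails off gradually. (Index 0 is lag 0.)
print(np.round(pacf(z_ar, nlags=6), 2))
print(np.round(pacf(z_ma, nlags=6), 2))
</syntaxhighlight>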
 
== Autoregressive model identification ==
 
[[File:Partial Autocorrelation Function Graph.png|alt=The partial autocorrelation graph has 3 spikes and the rest is close to 0.|thumb|Sample partial autocorrelation function with confidence interval of a simulated AR(3) time series]]
 
Partial autocorrelation is a commonly used tool for identifying the order of an autoregressive model.<ref name=":0" /> As previously mentioned, the partial autocorrelation of an AR(''p'') process is zero at lags greater than ''p''.<ref name=":1" /><ref name=":2" /> If an AR model is determined to be appropriate, then the sample partial autocorrelation plot is examined to help identify the order.
 
The partial autocorrelations at lags greater than ''p'' for an AR(''p'') time series are approximately independent and [[Normal distribution|normal]] with a [[mean]] of 0.<ref>{{Cite journal |last=Quenouille |first=M. H. |date=1949 |title=Approximate Tests of Correlation in Time-Series |url=https://onlinelibrary.wiley.com/doi/10.1111/j.2517-6161.1949.tb00023.x |journal=Journal of the Royal Statistical Society, Series B (Methodological) |language=en |volume=11 |issue=1 |pages=68–84 |doi=10.1111/j.2517-6161.1949.tb00023.x|url-access=subscription }}</ref> Therefore, a [[confidence interval]] can be constructed by dividing a selected [[Standard score|z-score]] (for example, 1.96 for a 95% confidence interval) by <math>\sqrt{n}</math>, where <math>n</math> is the length of the time series. Lags with partial autocorrelations outside of the confidence interval indicate that the AR model's order is likely greater than or equal to the lag. Plotting the partial autocorrelation function and drawing the lines of the confidence interval is a common way to analyze the order of an AR model. To evaluate the order, one examines the plot to find the lag after which the partial autocorrelations are all within the confidence interval. This lag is determined to likely be the AR model's order.<ref name=":3" />
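A minimal sketch of this rule (the helper name <code>select_ar_order</code> is an illustrative choice, not a standard function): take as the candidate order the last lag whose sample partial autocorrelation falls outside the <math>\pm z/\sqrt{n}</math> band.
<syntaxhighlight lang="python">
import numpy as np

def select_ar_order(sample_pacf, n, z_score=1.96):
    """Candidate AR order: the last lag whose sample partial
    autocorrelation lies outside the +/- z/sqrt(n) band.

    sample_pacf holds values at lags 1, 2, ...; n is the series length."""
    bound = z_score / np.sqrt(n)
    outside = np.nonzero(np.abs(sample_pacf) > bound)[0] + 1  # 1-based lags
    return int(outside.max()) if outside.size else 0
</syntaxhighlight>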
 
==References==
{{Reflist}}
{{NIST-PD|http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4463.htm}}
 
{{Statistics|analysis}}