Modeling Cycles: Quick Reference (FRM Part 1)
1. Context
Here is a quick reference sheet for the formulation and properties of the various models used for modeling cycles. The essence of the reading is best captured by a sheet that summarizes (and helps quickly compare) the properties of these models, and it may come in handy in the days before the exam. We deal only with the general higher-order versions of the models, since the analogous first-order results follow by substituting $p=1$ and/or $q=1$ and keeping only the requisite number of terms in the model formulation. The prescribed reading is detailed below:
| Area | Quantitative Analysis |
| --- | --- |
| Reading | Modeling Cycles: MA, AR and ARMA Models |
| Reference | Diebold, Francis X. "Modeling Cycles: MA, AR and ARMA Models." Ch. 8 in Elements of Forecasting, 4th ed. Mason, OH: Cengage Learning, 2006. |
2. Moving Average: $q$ Order (MA($q$))
| Property | Details |
| --- | --- |
| Formulation | $$y_t=\varepsilon_t+\theta_1\varepsilon_{t-1}+\cdots+\theta_q\varepsilon_{t-q}=\Theta(L)\varepsilon_t$$ where $\varepsilon_t\sim WN(0,\sigma^2)$ and the $q$-order lag polynomial $\Theta(L)$ is given by: $$\Theta(L)=1+\theta_1 L+\cdots+\theta_q L^q$$ |
| Mean | $0$ |
| Variance | $$\sigma^2\left(1+\theta_1^2+\cdots+\theta_q^2\right)=\sigma^2\sum\nolimits_{i=0}^{q}\theta_i^2,\quad\theta_0=1$$ |
| Conditional Mean | $$E(y_t|\Omega_{t-1})={{\theta }_{1}}{{\varepsilon }_{t-1}}+{{\theta }_{2}}{{\varepsilon }_{t-2}}+\cdots +{{\theta }_{q}}{{\varepsilon }_{t-q}}$$Time-varying and adapted to current information. |
| Conditional Variance | $\sigma^2$ (Constant) |
| Autocovariances | $$\gamma(\tau)=\begin{cases}\sigma^2\sum\limits_{i=0}^{q-\tau}\theta_i\theta_{\tau+i}, & \text{if }\tau\le q\ (\theta_0=1)\\ 0, & \text{if }\tau>q\end{cases}$$ There is a cut-off displacement ($\tau=q$) beyond which the autocovariances fall to zero. |
| Autocorrelation | $$\rho(\tau)=\begin{cases}\dfrac{\sum\limits_{i=0}^{q-\tau}\theta_i\theta_{\tau+i}}{\sum\limits_{i=0}^{q}\theta_i^2}, & \text{if }\tau\le q\ (\theta_0=1)\\ 0, & \text{if }\tau>q\end{cases}$$ Like the autocovariances, the autocorrelations fall to zero for $\tau>q$. |
| Partial Autocorrelations | $p(\tau)$ can be obtained directly from the coefficients of the autoregressive form, which exists if the model is invertible. $p(\tau)$ does not cut off to zero: it decays gradually with $\tau$, the decay being one-sided or oscillatory depending on the values of $q$ and $\theta_i$. |
| Covariance Stationarity | Always |
| Invertibility | If the inverses of all roots of $\Theta(L)$ lie inside the unit circle. |
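The autocorrelation cut-off above is easy to verify numerically. The sketch below (illustrative, not from the reading; the coefficients $\theta_1=0.6$, $\theta_2=0.3$ are arbitrary choices) simulates an MA(2) process and compares sample autocorrelations with the theoretical ones, which drop to zero for $\tau>q$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.6, 0.3])    # theta_0 = 1, theta_1, theta_2 (illustrative)
q = len(theta) - 1
n = 200_000
eps = rng.standard_normal(n + q)     # white noise with sigma^2 = 1

# y_t = eps_t + theta_1 eps_{t-1} + ... + theta_q eps_{t-q}
y = sum(theta[i] * eps[q - i : n + q - i] for i in range(q + 1))

def sample_acf(x, tau):
    """Sample autocorrelation at displacement tau."""
    x = x - x.mean()
    return np.dot(x[tau:], x[:len(x) - tau]) / np.dot(x, x)

def theo_acf(tau):
    """Theoretical MA(q) ACF: rho(tau) = sum_i theta_i theta_{tau+i} / sum_i theta_i^2."""
    if tau > q:
        return 0.0
    return np.dot(theta[: q + 1 - tau], theta[tau:]) / np.dot(theta, theta)

for tau in range(1, 5):
    print(tau, round(sample_acf(y, tau), 3), round(theo_acf(tau), 3))
```

For $\tau=3,4$ the theoretical values are exactly zero, and the sample values hover near zero (within sampling error).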
3. AutoRegressive: $p$ Order (AR($p$))
| Property | Details |
| --- | --- |
| Formulation | $$y_t=\phi_1 y_{t-1}+\phi_2 y_{t-2}+\cdots+\phi_p y_{t-p}+\varepsilon_t$$ where $\varepsilon_t\sim WN(0,\sigma^2)$, which can be expressed using a lag operator polynomial as $$\Phi(L)y_t=\left(1-\phi_1 L-\phi_2 L^2-\cdots-\phi_p L^p\right)y_t=\varepsilon_t$$ |
| Mean | $0$ |
| Variance | For AR(1): $$\gamma(0)=\frac{\sigma^2}{1-\phi^2}$$ which requires $|\phi|<1$ for the variance to be positive and finite. |
| Conditional Mean | Time-varying and adapted to current information. |
| Conditional Variance | $\sigma^2$ (Constant) |
| Autocovariances | For AR(1), the autocovariances follow the recursive Yule-Walker equation $\gamma(\tau)=\phi\,\gamma(\tau-1)$, which yields: $$\gamma(\tau)=\phi^\tau\frac{\sigma^2}{1-\phi^2}$$ i.e. the autocovariance decays with $\tau$. |
| Autocorrelation | $\rho(\tau)$ decays gradually with displacement $\tau$, either monotonically or with oscillation. For AR($p$), the $\rho(\tau)$ vs $\tau$ graph oscillates if the roots of $\Phi(L)$ are complex, and shows richer patterns depending on the choice of $p$ and $\phi_i$. |
| Partial Autocorrelations | $p(\tau)$ cuts off to zero for $\tau>p$: $$p(\tau)=\begin{cases}\phi_\tau, & \text{if }\tau\le p\\ 0, & \text{if }\tau>p\end{cases}$$ |
| Covariance Stationarity | If the inverses of all roots of $\Phi(L)$ lie inside the unit circle. A necessary condition for this is $\sum\limits_{i=1}^{p}\phi_i<1$. |
| Invertibility | By design, since already in autoregressive form. |
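The geometric decay of the AR(1) autocovariances can be checked by simulation. The sketch below (illustrative; $\phi=0.7$ and $\sigma^2=1$ are arbitrary choices, not from the reading) simulates a stationary AR(1) and compares sample autocovariances with $\gamma(\tau)=\phi^\tau\sigma^2/(1-\phi^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma2, n = 0.7, 1.0, 200_000   # illustrative parameters
eps = rng.standard_normal(n) * np.sqrt(sigma2)

# y_t = phi * y_{t-1} + eps_t, with y_0 drawn from the stationary distribution
y = np.empty(n)
y[0] = eps[0] / np.sqrt(1 - phi**2)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

def sample_autocov(x, tau):
    """Sample autocovariance at displacement tau."""
    x = x - x.mean()
    return np.dot(x[tau:], x[:len(x) - tau]) / len(x)

for tau in range(4):
    theo = phi**tau * sigma2 / (1 - phi**2)
    print(tau, round(sample_autocov(y, tau), 3), round(theo, 3))
```

Each successive sample autocovariance shrinks by roughly the factor $\phi$, as the Yule-Walker recursion implies.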
4. AutoRegressive Moving Average: (ARMA($p,q$))
| Property | Details |
| --- | --- |
| Formulation | $$y_t=\phi_1 y_{t-1}+\cdots+\phi_p y_{t-p}+\varepsilon_t+\theta_1\varepsilon_{t-1}+\cdots+\theta_q\varepsilon_{t-q}$$ In terms of lag polynomials, we have $$\Phi(L)y_t=\Theta(L)\varepsilon_t$$ where the polynomials are given by: $$\begin{align} & \Phi(L)=1-\phi_1 L-\phi_2 L^2-\cdots-\phi_p L^p \\ & \Theta(L)=1+\theta_1 L+\theta_2 L^2+\cdots+\theta_q L^q \end{align}$$ |
| Mean | $0$ |
| Conditional Mean | Time-varying and adapted to current information. |
| Conditional Variance | $\sigma^2$ (Constant) |
| Autocorrelation | $\rho(\tau)$ doesn't cut off to zero; it has many non-null coefficients that damp gradually, depending on the choice of $p$, $q$ and the coefficients. |
| Partial Autocorrelations | Likewise, $p(\tau)$ doesn't cut off to zero; its coefficients damp gradually, depending on the choice of $p$, $q$ and the coefficients. |
| Covariance Stationarity | Decided by the AR component: covariance stationary if the inverses of all roots of $\Phi(L)$ lie inside the unit circle. If stationary, the model can be expressed in pure (convergent) MA form as: $$y_t=\frac{\Theta(L)}{\Phi(L)}\varepsilon_t$$ |
| Invertibility | Decided by the MA component: invertible if the inverses of all roots of $\Theta(L)$ lie inside the unit circle. If invertible, the model can be expressed in pure AR form as: $$\varepsilon_t=\frac{\Phi(L)}{\Theta(L)}y_t$$ |
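The root conditions for stationarity and invertibility can be checked mechanically. The sketch below (illustrative; the coefficients of $\Phi(L)$ and $\Theta(L)$ are arbitrary choices, not from the reading) finds the roots of each lag polynomial and tests whether their inverses lie inside the unit circle:

```python
import numpy as np

def inverse_roots_inside_unit_circle(poly_coeffs):
    """poly_coeffs = [c_0, c_1, ..., c_k] for c_0 + c_1 L + ... + c_k L^k.
    Returns True if the inverse of every root has modulus < 1."""
    roots = np.roots(poly_coeffs[::-1])  # np.roots expects highest degree first
    return bool(np.all(np.abs(1.0 / roots) < 1.0))

# Illustrative ARMA(2, 1):
# Phi(L) = 1 - 0.5 L - 0.3 L^2 (AR part), Theta(L) = 1 + 0.4 L (MA part)
phi_poly = [1.0, -0.5, -0.3]
theta_poly = [1.0, 0.4]

print("stationary:", inverse_roots_inside_unit_circle(phi_poly))
print("invertible:", inverse_roots_inside_unit_circle(theta_poly))
```

Note that $\sum\phi_i=0.8<1$ here, consistent with the necessary condition for stationarity given in the AR($p$) table; the root check is the full (necessary and sufficient) condition.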