The auto-regressive moving average (ARMA) model describes a stationary stochastic process as the sum of an auto-regressive component and a moving average component. In a simple formulation, an ARMA(p, q) process is given by:$$x_t -\phi_0 - \phi_1 x_{t-1}-\phi_2 x_{t-2}-\cdots -\phi_p x_{t-p}=a_t + \theta_1 a_{t-1} + \theta_2 a_{t-2} + \cdots + \theta_q a_{t-q}$$
Where:
- $x_t$ is the observed output at time $t$.
- $a_t$ is the innovation, shock, or error term at time $t$.
- $p$ is the order of the auto-regressive (AR) component, i.e., the last lagged observation included.
- $q$ is the order of the moving average (MA) component, i.e., the last lagged innovation or shock included.
- The innovations $a_t$ are independent and identically distributed (i.e., $i.i.d.$) and follow a Gaussian distribution (i.e., $a_t \sim N(0,\sigma^2)$).
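As a concrete illustration, the defining recursion above can be simulated directly. The following is a minimal sketch for an ARMA(2, 1) process; all coefficient values are illustrative, not estimates from any data set:

```python
import numpy as np

rng = np.random.default_rng(0)

phi0 = 1.0               # intercept
phi = [0.5, -0.25]       # AR coefficients phi_1, phi_2 (illustrative)
theta = [0.4]            # MA coefficient theta_1 (illustrative)
sigma = 1.0              # std. dev. of the Gaussian shocks a_t

n = 500
a = rng.normal(0.0, sigma, size=n)   # i.i.d. N(0, sigma^2) innovations
x = np.zeros(n)

for t in range(n):
    # AR part: phi_1 x_{t-1} + phi_2 x_{t-2} (skip terms before the sample start)
    ar = sum(phi[i] * x[t - 1 - i] for i in range(len(phi)) if t - 1 - i >= 0)
    # MA part: theta_1 a_{t-1}
    ma = sum(theta[j] * a[t - 1 - j] for j in range(len(theta)) if t - 1 - j >= 0)
    x[t] = phi0 + ar + a[t] + ma

# The sample mean should be near the long-run mean mu = phi0 / (1 - phi_1 - phi_2)
mu = phi0 / (1 - sum(phi))
```

Because the AR coefficients satisfy the stationarity conditions, the simulated series fluctuates around its long-run mean $\mu$.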
Using back-shift notation (i.e., $L$), we can express the ARMA process as follows:$$(1-\phi_1 L - \phi_2 L^2 -\cdots - \phi_p L^p) x_t - \phi_0= (1+\theta_1 L+\theta_2 L^2 + \cdots + \theta_q L^q)a_t$$
Assuming $x_t$ is stationary with a long-run mean of $\mu$, then taking the expectation of both sides, we can express $\phi_0$ as follows:$$\phi_0 = (1-\phi_1 -\phi_2 - \cdots - \phi_p)\mu$$
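Spelled out, the expectation step uses $E[a_{t-j}]=0$ and, by stationarity, $E[x_{t-i}]=\mu$ for all $i$:$$E[x_t] - \phi_0 - \sum_{i=1}^{p}\phi_i E[x_{t-i}] = E[a_t] + \sum_{j=1}^{q}\theta_j E[a_{t-j}] = 0$$ $$\mu - \phi_0 - \Big(\sum_{i=1}^{p}\phi_i\Big)\mu = 0 \quad\Longrightarrow\quad \phi_0 = \Big(1-\sum_{i=1}^{p}\phi_i\Big)\mu$$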
Thus, the ARMA (p, q) process can now be expressed as:$$(1-\phi_1 L - \phi_2 L^2 -\cdots - \phi_p L^p) (x_t - \mu)= (1+\theta_1 L+\theta_2 L^2 + \cdots + \theta_q L^q)a_t$$ $$z_t = x_t - \mu$$ $$(1-\phi_1 L - \phi_2 L^2 -\cdots - \phi_p L^p) z_t = (1+\theta_1 L+\theta_2 L^2 + \cdots + \theta_q L^q)a_t$$
In sum, $z_t$ is the original signal after we subtract its long-run average.
Remarks
- The variance of the shocks is constant or time-invariant.
- The order of an AR component process is solely determined by the order of the last lagged auto-regressive variable with a non-zero coefficient (i.e., $x_{t-p}$).
- The order of an MA component process is solely determined by the order of the last lagged innovation with a non-zero coefficient (i.e., $a_{t-q}$).
- In principle, you can have fewer non-zero parameters than the orders of the model imply. Consider the following sparse ARMA(12, 2) process:$$(1-\phi_1 L -\phi_{12} L^{12} )(x_t - \mu) = (1+\theta_2 L^2)a_t$$
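The sparse ARMA(12, 2) above can be written out directly: the AR polynomial has order 12 and the MA polynomial order 2, yet only three coefficients ($\phi_1$, $\phi_{12}$, $\theta_2$) are non-zero. A minimal simulation sketch of the mean-adjusted series, with illustrative coefficient values:

```python
import numpy as np

phi = np.zeros(12)
phi[0] = 0.3       # phi_1  (illustrative)
phi[11] = 0.4      # phi_12 (illustrative)
theta = np.zeros(2)
theta[1] = 0.25    # theta_2 (illustrative); theta_1 is zero

rng = np.random.default_rng(2)
n = 300
a = rng.normal(size=n)
z = np.zeros(n)    # mean-adjusted series z_t = x_t - mu

for t in range(n):
    # Most terms vanish: only lags 1 and 12 of z, and lag 2 of a, contribute
    ar = sum(phi[i] * z[t-1-i] for i in range(12) if t-1-i >= 0)
    ma = sum(theta[j] * a[t-1-j] for j in range(2) if t-1-j >= 0)
    z[t] = ar + a[t] + ma
```

With $|\phi_1| + |\phi_{12}| < 1$ the AR polynomial has no roots on or inside the unit circle, so the simulated series remains stationary despite the high nominal order.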