In principle, an ARMAX model is a linear regression model that uses an ARMA-type process (i.e., $w_t$) to model residuals:

$$y_t = \alpha_o + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \cdots + \beta_b x_{b,t} + w_t$$

$$(1-\phi_1 L - \phi_2 L^2-\cdots-\phi_p L^p)(y_t-\alpha_o -\beta_1 x_{1,t} - \beta_2 x_{2,t} - \cdots - \beta_b x_{b,t})=(1+ \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q)a_t$$

$$(1-\phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p)w_t=(1+\theta_1 L+ \theta_2 L^2 + \cdots + \theta_q L^q ) a_t$$

$$a_t \sim \textrm{i.i.d.} \sim \Phi (0,\sigma^2)$$
Where:
- $L$ is the lag (aka back-shift) operator.
- $y_t$ is the observed output at time $t$.
- $x_{k,t}$ is the k-th exogenous input variable at time $t$.
- $\beta_k$ is the coefficient value for the k-th exogenous (explanatory) input variable.
- $b$ is the number of exogenous input variables.
- $w_t$ is the auto-correlated regression residual at time $t$.
- $p$ is the order of the AR component (i.e., the last lagged autoregressive variable).
- $q$ is the order of the MA component (i.e., the last lagged innovation or shock).
- $a_t$ is the innovation, shock, or error term at time $t$.
- $\{a_t\}$ time series observations are independent and identically distributed (i.e., $i.i.d$) and follow a Gaussian distribution (i.e., $\Phi(0,\sigma^2)$).
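As an illustration, the following is a minimal sketch (assuming Python with the numpy and statsmodels packages) of simulating a two-input ARMAX-type specification, i.e., a regression with ARMA(1,1) errors, and then re-estimating it. The orders, coefficients, and sample size are assumed for the example only.

```python
# A sketch of simulating and fitting a regression model with ARMA(1,1) errors
# (an ARMAX-type specification). Orders, coefficients, and sample size are
# illustrative assumptions, not values from the text above.
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(42)
n = 500

# Two exogenous (explanatory) inputs.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# ARMA(1,1) residual process w_t: (1 - 0.6 L) w_t = (1 + 0.3 L) a_t
w = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.3], nsample=n, scale=1.0)

# Observed output: y_t = alpha_o + beta_1 x_{1,t} + beta_2 x_{2,t} + w_t
alpha0, beta1, beta2 = 2.0, 1.5, -0.8
y = alpha0 + beta1 * x1 + beta2 * x2 + w

# Estimate the same specification: regression with ARMA(1,1) errors.
exog = np.column_stack([x1, x2])
model = SARIMAX(y, exog=exog, order=(1, 0, 1), trend="c")
result = model.fit(disp=False)
print(result.summary())
```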
Assuming $y_t$ and all exogenous input variables are stationary, taking the expectation of both sides lets us express $\alpha_o$ as follows:$$\alpha_o = \mu - \sum_{i=1}^b {\beta_i E[x_i] }= \mu - \sum_{i=1}^b {\beta_i \bar{x_i} }$$
Where:
- $\mu$ is the long-run (unconditional) mean of $y_t$ (i.e., $E[y_t]$).
- $\bar x_i$ is the long-run average of the $i$-th exogenous input variable.
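For example (with values assumed purely for illustration), take two inputs with $\mu = 10$, $\beta_1 = 1.5$, $\bar x_1 = 2$, $\beta_2 = -0.8$, and $\bar x_2 = 5$:

$$\alpha_o = 10 - (1.5)(2) - (-0.8)(5) = 10 - 3 + 4 = 11$$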
If $y_t$ is not stationary, then one must verify that: (a) one or more variables in $\{x_1,x_2,\cdots,x_b\}$ is also not stationary, and (b) the time series variables in $\{y, x_1,x_2,\cdots,x_b\}$ are cointegrated, i.e., there is at least one linear combination of those variables that yields a stationary (i.e., ARMA) process.
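The sketch below (again assuming Python with statsmodels; the series names and data are placeholders, not part of the model above) illustrates the two checks: unit-root (ADF) tests on the individual series, and an Engle-Granger cointegration test between $y$ and an input.

```python
# A sketch of the two checks described above:
# (a) unit-root tests on the individual series, and
# (b) an Engle-Granger cointegration test between y and an exogenous input.
# Series y and x1 are placeholders generated for illustration.
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
x1 = np.cumsum(rng.normal(size=500))          # a random-walk (non-stationary) input
y = 2.0 + 1.5 * x1 + rng.normal(size=500)     # y shares x1's stochastic trend

# (a) ADF test: a large p-value suggests a unit root (non-stationary series).
for name, series in [("y", y), ("x1", x1)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"ADF {name}: stat={stat:.2f}, p-value={pvalue:.3f}")

# (b) Engle-Granger test: a small p-value suggests y and x1 are cointegrated,
# i.e., some linear combination of them is stationary.
stat, pvalue, _ = coint(y, x1)
print(f"Engle-Granger: stat={stat:.2f}, p-value={pvalue:.3f}")
```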
Remarks
- The variance of the shocks is constant or time-invariant.
- The order of an AR component process is solely determined by the order of the last lagged auto-regressive variable with a non-zero coefficient (i.e., $w_{t-p}$).
- The order of an MA component process is solely determined by the order of the last moving average variable with a non-zero coefficient (i.e., $a_{t-q}$).
- In principle, the model can have fewer free parameters than its orders suggest. Consider the following ARMA(12, 2) process, which has only three coefficients ($\phi_1$, $\phi_{12}$, and $\theta_2$):$$(1-\phi_1 L -\phi_{12} L^{12} )(y_t - \mu) = (1+\theta_2 L^2)a_t$$
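The following sketch (assuming a recent statsmodels version that provides `SARIMAX.fit_constrained`; coefficient values are assumed for illustration) simulates such a sparse ARMA(12, 2) process and estimates it while constraining the intermediate lags to zero.

```python
# A sketch of the sparse ARMA(12, 2) example above: the model order is (12, 2),
# yet only phi_1, phi_12, and theta_2 are non-zero. Coefficients are assumed.
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.statespace.sarimax import SARIMAX

# (1 - 0.3 L - 0.4 L^12)(y_t - mu) = (1 + 0.25 L^2) a_t
# Lag-polynomial convention: ar = [1, -phi_1, ..., -phi_12], ma = [1, theta_1, theta_2]
ar = np.zeros(13); ar[0] = 1.0; ar[1] = -0.3; ar[12] = -0.4
ma = np.zeros(3);  ma[0] = 1.0; ma[2] = 0.25
y = arma_generate_sample(ar, ma, nsample=1000, burnin=200)

# Fit an ARMA(12, 2) while constraining the intermediate lags to zero,
# so only ar.L1, ar.L12, and ma.L2 (plus sigma2) are estimated.
model = SARIMAX(y, order=(12, 0, 2))
constraints = {f"ar.L{i}": 0 for i in range(2, 12)}
constraints["ma.L1"] = 0
result = model.fit_constrained(constraints)
print(result.summary())
```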