Returns the p-value of the regression stability (Chow) test (i.e., the test of whether the coefficients of two linear regressions fitted on different data sets are equal).
Syntax
ChowTest(Y1, X1, Y2, X2, Mask, Intercept, Return_type)
- Y1
- is the response or dependent variable data array for the first data set (a one-dimensional array of cells, e.g., a row or a column).
- X1
- is the independent variables data matrix of the first data set, such that each column represents one variable.
- Y2
- is the response or dependent variable data array for the second data set (a one-dimensional array of cells, e.g., a row or a column).
- X2
- is the independent variables data matrix of the second data set, such that each column represents one variable.
- Mask
- is the boolean array used to select a subset of the explanatory variables in the model. If missing, all variables in X1 and X2 are included (see the selection sketch below the argument list).
- Intercept
- is the regression constant or the intercept value (e.g., zero). If missing, an intercept is not fixed and will be computed from the data set.
- Return_type
- is a switch to select the return output (1 = P-value (default), 2 = test statistics, 3 = standardized residuals).
| Return_type | Description |
|-------------|-------------|
| 1 | P-value (default). |
| 2 | Test statistics. |
| 3 | Standardized residuals. |
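As a rough illustration of the Mask argument, the following Python sketch shows how a boolean array with one entry per explanatory variable can select a subset of columns; the names X and mask and the use of NumPy are assumptions for illustration only, not the add-in's implementation.

```python
# Hypothetical sketch of the Mask semantics: True keeps a variable
# (column) in the model, False drops it.
import numpy as np

X = np.random.default_rng(1).normal(size=(10, 3))  # 10 observations, 3 variables
mask = np.array([True, False, True])               # keep variables 1 and 3 only

X_masked = X[:, mask]                              # columns selected by the mask
print(X_masked.shape)                              # (10, 2)
```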
Remarks
- The data sets may include missing values.
- The model errors ($\varepsilon$) are assumed to be independent and identically distributed from a normal distribution with unknown variance.
- Each column in the explanatory (predictor) matrix corresponds to a separate variable.
- Each row in the explanatory matrix and corresponding dependent vector corresponds to one observation.
- Observations (i.e., rows) with missing values in either X or Y are removed.
- The number of observations of each data set must be larger than the number of explanatory variables.
- In principle, the Chow test constructs the following regression models:
- Model 1 (Data set 1): $$y_t = \alpha_1 + \beta_{1,1}\times X_1 + \beta_{2,1}\times X_2 + \cdots + \epsilon$$
- Model 2 (Data set 2): $$y_t = \alpha_2 + \beta_{1,2}\times X_1 + \beta_{2,2}\times X_2+ \cdots + \epsilon$$
- Model 3 (Data sets 1 + 2): $$y_t = \alpha + \beta_1\times X_1 + \beta_2 \times X_2 + \cdots + \epsilon$$
- The Chow test hypothesis: $$ H_{o}: \left\{\begin{matrix} \alpha_1 = \alpha_2 = \alpha \\ \beta_{1,1} = \beta_{1,2} = \beta_1 \\ \beta_{2,1} = \beta_{2,2} = \beta_2 \end{matrix}\right. $$ $$H_{1}: \exists\, j: \alpha_j \neq \alpha \textrm{ or } \exists\, i,j: \beta_{i,j} \neq \beta_i$$ Where:
- $H_{o}$ is the null hypothesis.
- $H_{1}$ is the alternate hypothesis.
- $\beta_{i,j}$ is the i-th coefficient in the j-th regression model ($j=1,2$), and $\beta_i$ is the i-th coefficient in the combined model (Model 3).
- The Chow test statistic is defined as follows (a worked sketch is shown after these remarks): $$ \frac{(\textrm{SSE}_C -(\textrm{SSE}_1+\textrm{SSE}_2))/k}{(\textrm{SSE}_1+\textrm{SSE}_2)/(N_1+N_2-2k)}$$ Where:
- $\textrm{SSE}_1$, $\textrm{SSE}_2$, and $\textrm{SSE}_C$ are the sums of squared residuals of Model 1, Model 2, and the combined Model 3, respectively.
- $k$ is the number of explanatory variables.
- $N_1$ is the number of non-missing observations in the first data set.
- $N_2$ is the number of non-missing observations in the second data set.
- The Chow test statistic follows an F-distribution with $k$ and $N_1+N_2-2k$ degrees of freedom.
- The ChowTest function is available starting with version 1.60 APACHE.
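To make the construction above concrete, the following Python sketch fits the three OLS models, forms the Chow F statistic from the sums of squared residuals, and converts it to a p-value. It is an independent illustration (using NumPy and SciPy), not the ChowTest implementation; the simulated data, the helper names, and the convention that $k$ counts the intercept together with the explanatory variables are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def chow_test(y1, X1, y2, X2):
    """Return (F statistic, p-value) for H0: equal coefficients in both data sets."""
    def fit_sse(y, X):
        # Prepend a column of ones (free intercept), then fit OLS.
        Xc = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        return float(resid @ resid), Xc.shape[1]

    sse1, k = fit_sse(y1, X1)                     # Model 1: data set 1
    sse2, _ = fit_sse(y2, X2)                     # Model 2: data set 2
    sse_c, _ = fit_sse(np.concatenate([y1, y2]),  # Model 3: pooled data
                       np.vstack([X1, X2]))
    n1, n2 = len(y1), len(y2)
    # Here k counts the intercept plus the explanatory variables.
    F = ((sse_c - (sse1 + sse2)) / k) / ((sse1 + sse2) / (n1 + n2 - 2 * k))
    p_value = stats.f.sf(F, k, n1 + n2 - 2 * k)   # upper-tail F probability
    return F, p_value

# Simulated example: both samples share the same coefficients,
# so the test should usually fail to reject H0 (large p-value).
rng = np.random.default_rng(0)
X1 = rng.normal(size=(40, 2))
y1 = 1.0 + X1 @ np.array([0.5, -0.3]) + rng.normal(size=40)
X2 = rng.normal(size=(50, 2))
y2 = 1.0 + X2 @ np.array([0.5, -0.3]) + rng.normal(size=50)

F, p = chow_test(y1, X1, y2, X2)
print(f"F = {F:.3f}, p-value = {p:.3f}")
```

In terms of the ChowTest arguments, Return_type = 1 corresponds to the p-value computed here and Return_type = 2 to the F statistic.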
References
- Chow, Gregory C. (1960). "Tests of Equality Between Sets of Coefficients in Two Linear Regressions". Econometrica 28 (3): 591–605.