Appendix B: Akaike-Information Criterion (AIC)

The Akaike information criterion (AIC) is a measure of the goodness of fit of a statistical model. Loosely speaking, it describes the trade-off between bias and variance in model construction, or between the accuracy and the complexity of the model.

The AIC is not a test of a model in the sense of hypothesis testing; rather, it provides a means of comparison among models, i.e. a tool for model selection. Given a data set, several candidate models can be ranked according to their AIC, with the model having the minimum AIC considered the best. From the AIC values one may also infer, for example, that the top two models are roughly tied and the rest are far worse.
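This ranking step can be sketched in a few lines of Python. The model names and AIC values below are made up for illustration; the thresholds of about 2 ("roughly tied") and about 10 ("far worse") are a common rule of thumb from the model-selection literature, not part of the AIC definition itself.

```python
# Hypothetical AIC values for four candidate models.
aics = {"m1": 102.3, "m2": 102.9, "m3": 110.4, "m4": 125.0}

# Rank models by their difference from the minimum AIC.
best = min(aics.values())
deltas = {name: a - best for name, a in aics.items()}

for name in sorted(deltas, key=deltas.get):
    d = deltas[name]
    # Delta < 2: roughly tied with the best model; delta > 10: far worse.
    print(f"{name}: delta AIC = {d:.1f}")
```

Here m1 and m2 would be judged roughly tied, while m3 and m4 have little support.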

  1. In general, the AIC is defined as: $$\mathit{AIC}=2k-2\ln(L)$$ where:
    • $k$ is the number of estimated parameters in the model.
    • $L$ is the maximized value of the likelihood function of the model, so $\ln(L)$ is its log-likelihood.
  2. For smaller data sets, the AICc applies a second-order correction: $$ \mathit{AICc}= \mathit{AIC} + \frac{2k(k+1)}{N-k-1} = \frac{2Nk}{N-k-1}-2\ln(L) $$ where:
    • $N$ is the data sample size.
    • $k$ is the number of estimated parameters in the model.


