The likelihood function (also written simply as the likelihood, or LLF) is a function of the parameters of a statistical model. In other words, the likelihood of a set of parameter values given some observed outcomes (i.e., a sample) is equal to the probability of those observed outcomes given the model and those parameter values.

In non-technical parlance, "likelihood" is usually a synonym for "probability," but in statistical usage a clear technical distinction is made: probability describes the outcomes given fixed parameter values, while likelihood describes the parameter values given fixed observed outcomes.

For many applications involving likelihood functions, it is more convenient to work in terms of the natural logarithm of the likelihood function, called the log-likelihood, than in terms of the likelihood function itself. Because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself, and hence the log-likelihood can be used in place of the likelihood in maximum likelihood estimation and related techniques.
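As a small numerical illustration of this monotonicity argument, consider a hypothetical Bernoulli sample (7 successes in 10 trials, chosen only for this sketch): the likelihood and log-likelihood, evaluated on a grid of parameter values, peak at the same point.

```python
import numpy as np

# Hypothetical Bernoulli sample: 7 successes in 10 trials.
successes, trials = 7, 10

# Evaluate likelihood and log-likelihood of p on a fine grid.
p = np.linspace(0.01, 0.99, 9801)
likelihood = p**successes * (1 - p)**(trials - successes)
log_likelihood = successes * np.log(p) + (trials - successes) * np.log(1 - p)

# Because log is monotonically increasing, both attain their
# maximum at the same grid point (the MLE, p = 7/10).
print(p[np.argmax(likelihood)])       # ≈ 0.7
print(p[np.argmax(log_likelihood)])   # ≈ 0.7
```

In practice the log form is preferred because the product of many small densities underflows, while the corresponding sum of logarithms remains numerically stable.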

For a statistical model of a time series, the likelihood function is defined as:

$$ L^* = \prod_{n = 1}^N f\left( y_n \mid y_{n-1}, y_{n-2}, \ldots, y_1, \theta_1, \theta_2, \ldots, \theta_k \right) $$

**- OR -**

$$ \ln L^* = \sum_{n = 1}^N \ln f\left( y_n \mid y_{n-1}, y_{n-2}, \ldots, y_1, \theta_1, \theta_2, \ldots, \theta_k \right) $$

Where:

- $L^*$ is the likelihood function.
- $\ln L^*$ is the log-likelihood function.
- $f(\cdot)$ is the conditional probability density function.
- $\ln f(\cdot)$ is the natural logarithm of the conditional probability density function.
- $y_n$ is the value of the time series at time $n$.
- $y_{n-1},y_{n-2},...,y_1$ are the values of the time series observed before time $n$.
- $\theta_1, \theta_2,...,\theta_k$ are the parameters of the statistical model.
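The product above factorizes the joint density of the sample into one conditional density per observation. As a sketch of how this is computed in practice, assume a zero-mean Gaussian AR(1) model (a hypothetical choice made only for this example), in which the conditional density of $y_n$ given its past depends only on $y_{n-1}$:

```python
import numpy as np

def ar1_log_likelihood(y, phi, sigma):
    """Conditional log-likelihood of a hypothetical zero-mean Gaussian
    AR(1) model: y_n | y_{n-1} ~ N(phi * y_{n-1}, sigma**2).

    The first observation is conditioned on, so the sum runs over
    n = 2..N, mirroring the sum-of-log-densities formula above."""
    resid = y[1:] - phi * y[:-1]          # y_n - E[y_n | past values]
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - resid**2 / (2 * sigma**2))

# Simulated data for illustration only.
rng = np.random.default_rng(0)
y = rng.standard_normal(100)
print(ar1_log_likelihood(y, phi=0.5, sigma=1.0))
```

Maximum likelihood estimation then amounts to searching over $(\phi, \sigma)$ for the values that maximize this sum.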

**Remarks**

- In the case of a normal (Gaussian) probability density function, the log-likelihood function simplifies as follows:

$$\ln L^* = \ln L(\mu, \sigma \mid Y_1, Y_2, \ldots, Y_N) = -\frac{N}{2}\ln(2\pi) - N\ln\sigma - \sum_{i=1}^{N}\frac{(Y_i-\mu)^2}{2\sigma^2} $$

- $\mu$ is the normal distribution mean
- $\sigma$ is the standard deviation of the underlying distribution
- $N$ is the number of observed values in the sample
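A quick numerical check, on a hypothetical simulated sample, confirms that this closed form agrees with summing the per-observation normal log densities directly:

```python
import numpy as np

# Hypothetical sample: 500 draws from N(2.0, 1.5**2).
rng = np.random.default_rng(42)
y = rng.normal(loc=2.0, scale=1.5, size=500)

mu, sigma = y.mean(), y.std()   # MLEs of mu and sigma
N = len(y)

# Closed-form Gaussian log-likelihood from the remark above.
ll_closed = (-N / 2 * np.log(2 * np.pi)
             - N * np.log(sigma)
             - np.sum((y - mu)**2) / (2 * sigma**2))

# Same quantity as a sum of per-observation log densities.
ll_sum = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                - (y - mu)**2 / (2 * sigma**2))

print(ll_closed, ll_sum)   # the two values agree
```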

