# Statistical Testing

Statistical hypothesis testing is a common method of drawing inferences about a population based on statistical evidence from a sample.

All hypothesis tests share the same basic terminology and structure:

1. The null hypothesis is an assertion about a population that you would like to test. It may be formalized by asserting that a population parameter, or a combination of population parameters, takes a certain value. For instance, to test whether the mean of a given population equals $\mu_o$, we form the hypothesis as follows: $$H_o: \mu = \mu_o$$
2. The alternative hypothesis is a contrasting assertion about the population that can be tested against the null hypothesis. For the example above, we can construct the following alternatives:
    1. $H_1: \mu \gt \mu_o$ - the population mean is greater than $\mu_o$ (right-tailed test)
    2. $H_1: \mu \lt \mu_o$ - the population mean is smaller than $\mu_o$ (left-tailed test)
    3. $H_1: \mu \neq \mu_o$ - the population mean is different from $\mu_o$ (two-tailed test)
3. The p-value of a test is the probability, under the null hypothesis, of obtaining a value of the test statistic as extreme or more extreme than the value computed from the sample.
4. The significance level of a test is a probability threshold $\alpha$ agreed upon before the test is conducted. A typical value of $\alpha$ is 5%. If the p-value of a test is less than $\alpha$, the test rejects the null hypothesis. If the p-value is greater than $\alpha$, there is insufficient evidence to reject the null hypothesis. Note that a lack of evidence for rejecting the null hypothesis is not evidence for accepting it. Also note that the practical importance of a result cannot be inferred from the statistical significance of a test.
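The workflow above (state $H_o$, choose $\alpha$, compute a test statistic and p-value, compare) can be sketched with a two-tailed one-sample z-test. This is a minimal illustration using only the Python standard library; the data, the hypothesized mean, and the assumption that the population standard deviation $\sigma$ is known are all hypothetical.

```python
import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(0)
sigma = 1.0  # assumed known population standard deviation (illustrative)
sample = [random.gauss(5.3, sigma) for _ in range(50)]  # hypothetical sample

mu_0 = 5.0    # null hypothesis H_o: mu = mu_0
alpha = 0.05  # significance level fixed before running the test

# Test statistic: how many standard errors the sample mean is from mu_0
n = len(sample)
z = (mean(sample) - mu_0) / (sigma / sqrt(n))

# Two-tailed p-value: probability of a statistic at least this extreme under H_o
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: insufficient evidence to reject the null hypothesis")
```

With real data and an unknown $\sigma$, a t-test (e.g. `scipy.stats.ttest_1samp`) would be the usual choice; the z-test is shown here only because it keeps the arithmetic transparent.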

Results of hypothesis tests are often communicated with a confidence interval. A confidence interval is an estimated range of values with a specified probability of containing the true population value of a parameter. Upper and lower bounds for confidence intervals are computed from the sample estimate of the parameter and the known (or assumed) sampling distribution of the estimator.
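As a sketch of how such bounds are computed, the following builds a 95% confidence interval for a population mean, again assuming a known population standard deviation so the normal distribution applies; the sample values and $\sigma$ are illustrative.

```python
from math import sqrt
from statistics import NormalDist, mean

sample = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3, 5.1, 5.0, 5.2]  # hypothetical data
sigma = 0.2        # assumed known population standard deviation
confidence = 0.95  # desired coverage probability

n = len(sample)
x_bar = mean(sample)

# Critical value: the point leaving (1 - confidence)/2 in each tail (~1.96 for 95%)
z_crit = NormalDist().inv_cdf((1 + confidence) / 2)
margin = z_crit * sigma / sqrt(n)

lower, upper = x_bar - margin, x_bar + margin
print(f"{confidence:.0%} CI for the mean: ({lower:.3f}, {upper:.3f})")
```

The interval is centered on the sample estimate, and its width shrinks with $\sqrt{n}$: quadrupling the sample size halves the margin of error.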