The **significance level** (denoted \(\alpha\)) in hypothesis testing is a threshold used to decide whether to reject the null hypothesis. It is the probability of making a Type I error, which is incorrectly rejecting H0 when it is actually true. Common values for the significance level are 0.05, 0.01, and 0.10.
Here’s how it works:
During a test, we calculate a number called the **p-value**. This value measures the probability of obtaining the observed data, or something more extreme, assuming H0 is true. We then compare this p-value to \(\alpha\):
- If the p-value \(< \alpha\), we reject H0.
- If the p-value \(\geq \alpha\), we fail to reject H0.
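The decision rule above can be sketched in code. The following is a minimal illustration, not from the original text: it assumes a two-sided one-sample z-test with a known population standard deviation, computing the p-value from the standard normal distribution and comparing it to \(\alpha\).

```python
import math

def one_sample_z_test(sample, mu0, sigma, alpha=0.05):
    """Two-sided one-sample z-test: H0 says the population mean is mu0.

    Assumes the population standard deviation sigma is known.
    Returns the p-value and whether H0 is rejected at level alpha.
    """
    n = len(sample)
    mean = sum(sample) / n
    # Standardized test statistic under H0.
    z = (mean - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value: probability of a statistic at least this
    # extreme under the standard normal, via the complementary
    # error function.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_value, p_value < alpha

# Hypothetical data for illustration.
data = [5.2, 5.1, 4.9, 5.3, 5.0]
p, rejected = one_sample_z_test(data, mu0=4.0, sigma=0.5)
print(f"p-value = {p:.4f}, reject H0: {rejected}")
```

Note that the function returns the decision alongside the p-value rather than only a boolean, so a reader can see how far the p-value falls from the chosen threshold.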
For example, if \(\alpha\) is set at 0.05 and our p-value comes out to be 0.03, we would reject H0 because 0.03 < 0.05.
The chosen significance level influences both the stringency of the test and the risk of a Type I error. A smaller \(\alpha\) makes it harder to reject H0, reducing the likelihood of a Type I error but potentially increasing the likelihood of a Type II error (failing to reject a false H0).
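This tradeoff can be seen empirically. The simulation below is an illustrative sketch (the scenario and sample sizes are assumptions, not from the original): it repeats a two-sided z-test many times, first with H0 true and then with H0 false, and tallies the Type I and Type II error rates at \(\alpha = 0.05\) and \(\alpha = 0.01\).

```python
import math
import random

def p_value_z(sample, mu0, sigma):
    """Two-sided p-value for a one-sample z-test with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
trials = 5000
alphas = (0.05, 0.01)

# Case 1: H0 is true (true mean equals mu0 = 0).
# Every rejection here is a Type I error.
type1 = {a: 0 for a in alphas}
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(20)]
    p = p_value_z(sample, mu0=0.0, sigma=1.0)
    for a in alphas:
        if p < a:
            type1[a] += 1

# Case 2: H0 is false (true mean is 0.5, but we still test mu0 = 0).
# Every failure to reject here is a Type II error.
type2 = {a: 0 for a in alphas}
for _ in range(trials):
    sample = [random.gauss(0.5, 1.0) for _ in range(20)]
    p = p_value_z(sample, mu0=0.0, sigma=1.0)
    for a in alphas:
        if p >= a:
            type2[a] += 1

for a in alphas:
    print(f"alpha={a}: Type I rate ~ {type1[a] / trials:.3f}, "
          f"Type II rate ~ {type2[a] / trials:.3f}")
```

The Type I rate tracks \(\alpha\) closely in case 1, while in case 2 the stricter \(\alpha = 0.01\) produces noticeably more Type II errors than \(\alpha = 0.05\), which is exactly the tradeoff described above.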