Type II error

Steve Simon


This page is currently being updated from the earlier version of my website. Sorry that it is not yet fully available.

Dear Professor Mean, A journal reviewer criticized the small sample size in my research study and suggested that I mention a Type II error as a possible explanation for my results. I’ve never heard this term before. What is a Type II error?

In your research, you specified a null hypothesis and an alternative hypothesis. Typically, the null hypothesis corresponds to no change.

When you are using statistics to decide between these two hypotheses, you have to allow for the possibility of error. Actually, if you are using any other procedure, you should still allow for the possibility of error, but we statisticians are the only ones honest enough to admit it. Here are the two types of errors. A Type I error is rejecting the null hypothesis when it is actually true. A Type II error is accepting the null hypothesis when it is actually false.

The null hypothesis traditionally represents a negative finding (i.e., there is no difference between the treatment and control). You should always remember that it is impossible to prove a negative. Some statisticians will emphasize this fact by using the phrase “fail to reject the null hypothesis” in place of “accept the null hypothesis.” The former phrase always strikes me as semantic overkill.


Consider a new drug that we will put on the market if we can show that it is better than a placebo. A Type I error would be putting an ineffective drug on the market. A Type II error would be keeping an effective drug off the market.

Suppose we are comparing two groups of patients, one with a possibly dangerous exposure (e.g., non-ionizing radiation) and the other unexposed. A Type I error would be concluding that the exposure is harmful when it is not. A Type II error would be failing to detect a harm that is really there.

Many studies have sample sizes too small to reject the null hypothesis, even when the data show a large difference. In these situations, a Type II error is a plausible explanation for negative study results.
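A small simulation (my own sketch, not from the original article) can make this concrete. Assume two groups drawn from normal distributions with a true difference of half a standard deviation, tested with a two-sided z-test at the 0.05 level; the sample sizes and effect size here are illustrative choices, not anything fixed by the article.

```python
import math
import random

random.seed(42)

def z_test_rejects(group_a, group_b, sd=1.0):
    """Two-sided z-test for a difference in means, with sd assumed known."""
    n_a, n_b = len(group_a), len(group_b)
    diff = sum(group_b) / n_b - sum(group_a) / n_a
    se = sd * math.sqrt(1 / n_a + 1 / n_b)
    # 1.96 is the critical value for a two-sided test at the 0.05 level
    return abs(diff / se) > 1.959963985

def type2_rate(n, effect=0.5, trials=10_000):
    """Estimate the Type II error rate by simulation. The true effect is
    real here, so every failure to reject the null is a Type II error."""
    misses = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        if not z_test_rejects(control, treated):
            misses += 1
    return misses / trials

print(f"n = 10 per group:  Type II error rate ~ {type2_rate(10):.2f}")
print(f"n = 100 per group: Type II error rate ~ {type2_rate(100):.2f}")
```

With only 10 patients per group, the test misses this real effect roughly four times out of five; with 100 per group, misses become rare. That is exactly the situation the reviewer is describing: a negative result from a small study says more about the study's power than about the treatment.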

You can find an earlier version of this page on my original website.