- Science Terms
- Parameter vs. Statistic
- Reoccurring vs. Recurring
- Linear vs. Nonlinear
- Observational Study vs. Experiment
- Histogram vs. Bar Graph
- Discrete vs. Continuous
- Validity vs. Reliability
- Type 1 vs. Type 2 Error
- Objective vs. Subjective Data
- Prospective vs. Retrospective Study
- Sample vs. Population
- Interpolation vs. Extrapolation
- Exogenous vs. Endogenous
Type one errors and type two errors are both statistical terms for reaching the wrong conclusion from data. Both stem from hypothesis testing and how its conclusions are framed.
In order to understand what exactly makes a type one error or a type two error, you have to understand the basis of hypothesis testing. A null hypothesis is an assumption that the observed results won’t vary from the norm. This means, in non-math speak, that whatever you’re adding won’t have any noticeable effect on your observations.
For example, if you want to study whether or not aspirin really does lower the risk of heart attacks, you have a group of people taking aspirin and a separate group of people not taking aspirin. The null hypothesis would be that the group taking aspirin wouldn’t have a noticeably lower chance of a heart attack than those who aren’t.
The alternative hypothesis, on the other hand, would be that taking aspirin does lower your risk of heart attack and that the data will show an observable, noticeable difference.
So, that brings us back to type one errors and type two errors. A type one error is rejecting a true null hypothesis, and a type two error is failing to reject a false null hypothesis. So, a type one error is a false positive, while a type two error is a false negative.
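The two definitions above can be boiled down to a small decision table. Here's a minimal sketch in Python (the `classify` function and its labels are ours, for illustration only):

```python
# Hypothetical helper mapping a study's outcome to the error type, if any.
def classify(null_is_true: bool, rejected_null: bool) -> str:
    if null_is_true and rejected_null:
        return "Type I error (false positive)"
    if not null_is_true and not rejected_null:
        return "Type II error (false negative)"
    return "correct conclusion"

print(classify(True, True))    # rejected a true null -> Type I error
print(classify(False, False))  # kept a false null -> Type II error
print(classify(True, False))   # correct conclusion
```

Only two of the four possible combinations are errors: rejecting a null that was actually true, or keeping one that was actually false.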
Key Takeaways:
| Type One Error | Type Two Error |
|---|---|
| A type one error occurs when you reject a true null hypothesis. | A type two error occurs when you fail to reject a false null hypothesis. |
| The likelihood of a type one error occurring is represented by the variable alpha. | The likelihood of a type two error occurring is represented by the variable beta. |
| It can be considered a false positive or believing something to be true while it isn’t. | It can also be called a false negative or believing something not to be true while it is. |
| An error of this type can lead to an ineffective drug being approved or a policy being implemented that has limited or no effect on the target issue. Resources can be wasted, and people can suffer needlessly from side effects. | An error of this variety can lead to an effective drug being shelved or a policy that helps solve a problem failing to be implemented. It means that the resources and policies that help may not be implemented. |
What Is a Type One Error?
A type one error, also called an alpha error, is rejecting a true null hypothesis. What does that mean? It means that you believe that something is making a noticeable difference when it isn’t. Informally, it’s the statistical equivalent of seeing an effect that isn’t really there. Of course, in a statistical sense, it isn’t quite that simple.
The easiest way to think of this is as a false positive. Most people will be familiar with the idea in terms of testing whether or not someone has a disease. A false positive means that it flags the existence of the disease when the person isn’t ill.
Ideally, this doesn’t happen, of course. But as with all measurements, statistical studies, and surveys, there’s a potential for error. This is noted in polls as a “margin of error.” Even in excellent scientific studies, there’s still a possibility of error or over-interpretation. That said, if the study is done properly, the chance of error should be small.
The reason that type one errors are also called alpha errors is that alpha is assigned the value of the probability that there will be a type one error. So, for example, a study might have a 95% confidence level, but that still leaves a 5% chance of a type one error.
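You can see alpha at work with a quick simulation. The sketch below (using only Python's standard library; the z-test setup and sample sizes are our assumptions) repeatedly tests data where the null hypothesis is genuinely true, and counts how often a test at alpha = 0.05 rejects it anyway. The false-positive rate lands close to 5%, exactly as alpha predicts:

```python
import math
import random

random.seed(0)

def z_test_p_value(sample, mu0, sigma):
    """Two-sided p-value for a z-test of the mean (known sigma)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
trials = 20_000
false_positives = 0
for _ in range(trials):
    # The null hypothesis is TRUE here: data really come from N(0, 1).
    sample = [random.gauss(0, 1) for _ in range(30)]
    if z_test_p_value(sample, mu0=0, sigma=1) < alpha:
        false_positives += 1  # rejected a true null -> Type I error

print(f"Type I error rate: {false_positives / trials:.3f}")  # close to 0.05
```

In other words, a 5% alpha doesn't just describe the risk of a type one error; it is that risk, and it materializes in about 1 in 20 tests of a true null.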
Does this mean we should view all scientific studies with suspicion? Not necessarily, no. Especially since there’s a robust peer review system in place – just for that reason.
What Is a Type Two Error?
A type two error, also known as a beta error, occurs when you fail to reject a false null hypothesis. This means that whatever you were studying did have a noticeable effect, but you incorrectly noted that it didn’t. Therefore, this would also be called a false negative.
For example, this would occur if you were studying whether or not a vaccine lowered the rate of disease.
You’d be looking for a statistically significant difference – which would mean that it isn’t something caused by just a placebo effect or differences in circumstances – that showed that the vaccine lowered infections in the group that was vaccinated as compared to the control group.
If you conclude that there’s no difference when the vaccine does lower the rate of infection, you’ve committed a type two error.
As with calling a type one error an alpha error, the reason that a type two error is called a beta error is that the variable beta represents the likelihood of one occurring. It should be noted that every statistical analysis has a possibility for both a type one and a type two error, which is something that has to be weighed.
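Beta can be estimated by simulation too. In this sketch (again stdlib-only Python; the effect size of 0.4 and sample size of 30 are our assumptions, chosen just to make beta visible), the null hypothesis is genuinely false, and we count how often the test fails to reject it:

```python
import math
import random

random.seed(1)

def rejects(sample, mu0, sigma, alpha=0.05):
    """True if a two-sided z-test rejects the null mean mu0."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p < alpha

trials = 10_000
misses = 0
for _ in range(trials):
    # The null is FALSE here: the true mean is 0.4, not 0.
    sample = [random.gauss(0.4, 1) for _ in range(30)]
    if not rejects(sample, mu0=0, sigma=1):
        misses += 1  # failed to reject a false null -> Type II error

beta = misses / trials
print(f"beta (Type II rate): {beta:.3f}, power: {1 - beta:.3f}")
```

The quantity 1 − beta is the study’s statistical power: the probability of correctly detecting a real effect. Larger samples or larger true effects drive beta down.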
Tips for Avoiding a Type One Error or Type Two Error
Unfortunately, it isn’t as simple as having a really well-crafted study. This is because lowering the risk of a type one error increases the risk of a type two error and vice versa. This means that researchers and statisticians have to weigh the risk of each and decide which error would be less harmful or try to balance them equally.
It’s also important to remember that researchers and statisticians are people, too. Most people who set up a study hope or believe that it’ll go one way or another. Scientific training helps us overcome these biases and preconceptions, but it isn’t perfect.
The way in which the study is arranged can also have an impact on the results, most especially on surveys – which is why crafting survey questions requires a great deal of skill and attention to detail.
Type One Error vs. Type Two Error FAQ
What is the consequence of a type one error?
The consequences of a type one error are ending up believing something to be true that isn’t. This can lead to an ineffective drug getting approved for treatment, for instance.
Not only might the drug not properly treat the disease or symptoms, but patients will also have to contend with potentially severe side effects, which won’t have the offsetting benefit of helping with their illness.
How do you determine the risk of making a type one error?
The risk of making a type one error is given by the value of alpha. This will vary depending on the type of study you’re conducting, but if a type one error is something you especially want to avoid, you can set alpha lower than 0.05, or 5%, the traditional threshold.
Is a type one error or a type two error worse?
While many statisticians treat a type one error as the worse of the two by convention, both can have severe consequences.
A type one error can lead to a lot of wasted resources or people being prescribed ineffective medication, while a type two error can lead to useful policies not being implemented or healthy habits not being recommended, or an effective medicine being scrapped.
It will partly depend on exactly what is being studied, but there isn’t a general consensus as to which is more harmful in the long run.
What is statistical significance?
Statistical significance is a term that researchers use to describe whether or not what’s being studied is having a genuine, observable effect rather than one that could plausibly be due to chance. It helps control the rate of type one errors, though it can’t rule out either type of error entirely. The most common threshold is p < 0.05, where p is the p-value calculated from the data.
If the value of p is below the chosen alpha value, then the results are considered to be statistically significant.
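That decision rule can be written out in a few lines. Here’s a minimal sketch in Python (the function name is ours, and the z-statistic of 2.3 is a made-up example value):

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
z = 2.3  # hypothetical test statistic from a study
p = two_sided_p(z)
print(f"p = {p:.4f}, significant: {p < alpha}")  # p ~ 0.021, significant
```

A z of 2.3 gives a p-value around 0.021, which clears the 0.05 bar; a weaker statistic like z = 1.0 gives p around 0.32 and would not be called significant.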