In this third article in the LabCoat Guide to BioStatistics series, we learn about Type I and Type II errors. In the previous articles in this series, we explored the Scientific Method and Proposing Hypotheses. Future articles will cover: Designing and implementing experiments (Significance, Power, Effect, Variance, Replication and Randomization), Critically evaluating experimental data (Q-test; SD, SE and 95%CI), and Concluding whether to accept or reject the hypothesis (F- and T-tests, Chi-square, ANOVA and post-ANOVA testing).
“The scientist is not a person who gives the right answers; he is one who asks the right questions.” – Claude Lévi-Strauss (French anthropologist)
To experimentally test the hypothesis “IF herbicide safeners reduce herbicide phytotoxicity, THEN herbicide safeners could also reduce insecticide phytotoxicity,” we would state a Null hypothesis: that there is no difference in phytotoxicity between the safened and unsafened treatments. We could then perform an experiment in which we treat plants with insecticide in the absence or presence of safeners and look for differences in phytotoxicity between the treatments.
During experimentation, researchers risk two types of error when deciding whether to reject a hypothesis.
• A Type-I error occurs when the Null hypothesis is rejected even though it is true. This is the “flash in the pan” type of error: you believe you have discovered something extraordinary when you have not.
• A Type-II error occurs when the Null hypothesis is not rejected even though it is false. This is the “one that got away” type of error: you miss something that really is extraordinary.
Figure 1: Type I and Type II errors.
Let us consider our hypothesis for an experiment to determine whether there is a difference in phytotoxicity between two treatments. We can formulate this as:
• Null hypothesis: There is no difference between treatments
• Alternate hypothesis: There is a difference between treatments
In this example, a Type-I error would lead us to reject the Null hypothesis, claiming that there IS a difference between treatments (false positive) when in fact there is none. A Type-II error would lead us to fail to reject the Null hypothesis, erroneously concluding that there is NOT a difference in phytotoxicity between the treatments (false negative) when in fact there is one.
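To make the two error types concrete, here is a small, hypothetical Python simulation (not from the article). It compares two treatment groups with a two-sample z-test, a large-sample stand-in for the t-test covered later in this series; the “phytotoxicity scores” are simply made-up normally distributed data. When the groups truly do not differ, any rejection of the Null hypothesis is a Type I error; when they truly do differ, any failure to reject is a Type II error.

```python
import math
import random

def p_value(a, b):
    """Two-sided p-value for a two-sample z-test on the group means."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    z = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

def error_rate(effect, n, trials=2000, alpha=0.05, seed=42):
    """Fraction of simulated experiments in which the test decision is wrong."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n)]
        reject = p_value(control, treated) < alpha
        # effect == 0: rejecting is a Type I error (false positive)
        # effect != 0: failing to reject is a Type II error (false negative)
        wrong += reject if effect == 0 else (not reject)
    return wrong / trials

print(f"Type I rate (no real effect): {error_rate(0.0, n=50):.3f}")  # close to alpha
print(f"Type II rate (real effect):   {error_rate(0.5, n=50):.3f}")
```

Note that the Type I rate hovers near the chosen significance level (alpha = 0.05), while the Type II rate depends on how large the real effect is relative to the noise.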
Type I and Type II errors can never be avoided entirely, but we can control them. The significance level we choose caps the Type I error rate, while increasing sample size – the number of independently assigned experimental units that receive the same treatment – reduces the likelihood of a Type II error.
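The benefit of replication can be sketched in a short, hypothetical simulation (not from the article): with a fixed real treatment effect, the fraction of experiments that miss the effect (Type II errors) shrinks as the number of replicates per treatment grows. The effect size, replicate counts, and z-test used here are illustrative assumptions.

```python
import math
import random

def misses_effect(n, effect=0.8, trials=2000, alpha=0.05, seed=7):
    """Fraction of simulated experiments that fail to detect a real
    treatment effect (Type II errors), using a two-sample z-test."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n)]
        m1, m2 = sum(control) / n, sum(treated) / n
        v1 = sum((x - m1) ** 2 for x in control) / (n - 1)
        v2 = sum((x - m2) ** 2 for x in treated) / (n - 1)
        z = abs(m1 - m2) / math.sqrt(v1 / n + v2 / n)
        p = math.erfc(z / math.sqrt(2))  # two-sided p-value
        if p >= alpha:
            misses += 1  # failed to reject a false Null hypothesis
    return misses / trials

for n in (5, 20, 80):
    print(f"n = {n:2d} per treatment: Type II rate ~ {misses_effect(n):.3f}")
```

Running this shows the Type II rate falling steadily as replication increases, which is exactly why sample-size planning matters before an experiment begins.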
The two parameters that determine appropriate sample sizes – statistical Significance (which governs Type-I errors) and Power (which governs Type-II errors) – will be addressed in the next article.
The first two books in the LABCOAT GUIDE TO CROP PROTECTION series are now published and available in eBook and Print formats!
Aimed at students, professionals, and others wishing to understand basic aspects of Pesticide and Biopesticide Mode Of Action & Formulation and Strategic R&D Management, this series is an easily accessible introduction to essential principles of Crop Protection Development and Research Management.
A little about myself
I am a Plant Scientist with a background in Molecular Plant Biology and Crop Protection.
Twenty years ago, I worked at Copenhagen University and the University of Adelaide on plant responses to biotic and abiotic stress in crops.
At that time, biology-based crop protection strategies had not taken off commercially, so I transitioned to conventional (chemical) crop protection R&D at Cheminova, later FMC.
During this period, public opinion, as well as increasing regulatory requirements, gradually closed the door of opportunity for conventional crop protection strategies, while the biological crop protection technology I had contributed to earlier began to reach commercial viability.