“The first principle is that you must not fool yourself – and you are the easiest person to fool.” – Richard Feynman
Scientists are plagued by confirmation bias: the tendency to interpret information in a way that confirms one's preexisting beliefs or hypotheses. The most basic requirement of scientific research is the unbiased interpretation of experimental results. Unfortunately, scientists are human and, as humans, susceptible to fooling themselves when evaluating their own outcomes. American politics is rife with obvious examples of confirmation bias, but scientists are particularly vulnerable because we create novel hypotheses and then experimentally test whether our own ideas are correct.
Confirmation bias grows stronger as we invest more time and energy in our research, often making us the least objective people to interpret our own results. Scientists understand the importance of having their work reviewed by experts who did not participate in the research, but peer review usually comes only after they have decided the work is worthy of publication or funding. Reviewer comments thus arrive at the moment when confirmation bias is strongest, which underlies the disturbing amount of research misconduct that occurs in responding to peer review. Everyone involved in the research enterprise (faculty, postdoctoral fellows, students, staff scientists and technologists) is susceptible.
To avoid being skewered by confirmation bias, structure your ongoing research practices so that bias cannot creep into the analysis of results, starting at the earliest stages of a project. Jiangwei and I came up with five tips:
For more on the challenges in experimental science, read our review of Richard Harris' Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions.