The following information is intended to show a few common examples of bad science and/or problems in research. It is not meant to be an exhaustive list, nor is it meant to point a wagging finger at scientists. In many, if not most, cases, problems with studies are a result of interpretation and reporting rather than of the study itself.
Overgeneralization and Extrapolation of Results: This problem typically occurs when the results of a study of a specific sample are extrapolated to what is believed to be a similar group. An example would be research in which a new cholesterol drug was tested on females aged 30-50. Can we, or should we, make assumptions about what the drug might do for males, or for 65-year-old women? Absolutely not. Or consider a research study evaluating an after-school reading program in New York City. Would the results of this study be applicable in Des Moines, Iowa? Perhaps, but we cannot and should not assume that the results would be the same.
Conflict of Interest: You should always read the conflict of interest statement at the end of a research study as part of your evaluation of potential bias in both the study design and the data. For example, a recent study examined 1,534 cancer research studies. “Studies that had industry funding focused on treatment 62 percent of the time, compared to 36 percent for other studies not funded by industry. And the studies funded by industry focus on epidemiology, prevention, risk factors, screening or diagnostic methods only 20 percent of the time, vs. 47 percent for studies that had declared no industry funding.” – LiveScience.
Absolute vs. Relative Percentages: Suppose that there was a medical problem that caused 2 people in 1,000,000 to have a stroke, and suppose there was a treatment that reduced the problem to only 1 person per 1,000,000. This would be an improvement of 0.0001 percentage points in an absolute sense, or NO BIG DEAL. However, had I reported the results using relative percentages, I could have stated: “New medical treatment yields a 50% reduction in risk of stroke.” This would obviously be quite misleading, but it is a common practice.
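To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not from any cited study) that computes both figures for the stroke example, along with the number needed to treat, a standard way of expressing an absolute effect:

```python
def risk_reduction(baseline_risk, treated_risk):
    """Return (absolute, relative) risk reduction for two event rates."""
    absolute = baseline_risk - treated_risk   # difference in raw probability
    relative = absolute / baseline_risk       # fraction of the baseline risk
    return absolute, relative

# The stroke example from the text: 2 per 1,000,000 vs. 1 per 1,000,000.
baseline = 2 / 1_000_000
treated = 1 / 1_000_000
arr, rrr = risk_reduction(baseline, treated)

print(f"Absolute risk reduction: {arr:.4%}")       # 0.0001% -- no big deal
print(f"Relative risk reduction: {rrr:.0%}")       # 50% -- sounds dramatic
print(f"Number needed to treat:  {1 / arr:,.0f}")  # 1,000,000 treated per stroke avoided
```

Both numbers describe the same result; the headline just depends on which one the press release leads with.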
Unpublished Clinical Trials: A study by the Yale School of Medicine found that only about half of clinical trials funded by the National Institutes of Health (NIH) had published their research findings within 30 months of study completion. The problem is also extremely common in research funded by pharmaceutical companies. These unpublished studies may have been withheld to prevent a medical intervention from being shown in a bad light.
This problem of unpublished results is also common in studies with small sample sizes.
And lastly, it is common for researchers to report only results that are statistically significant, thereby leaving out negative findings. Those data would be especially helpful to follow-on researchers during the literature review process.
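To illustrate why this selective reporting distorts the literature, here is a small Python simulation (an illustrative sketch of my own, not from the Yale study): many small trials of a treatment with a tiny true effect are run, and only the statistically significant ones get "published."

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # tiny real effect, in standard-deviation units
N_PER_STUDY = 30    # small sample size, as discussed above
N_STUDIES = 10_000  # many independent research groups

all_results, published = [], []
for _ in range(N_STUDIES):
    # Each "study" measures the effect in a small sample (known sigma = 1).
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    estimate = statistics.fmean(sample)
    std_err = 1.0 / N_PER_STUDY ** 0.5
    z = estimate / std_err
    p_value = 2 * (1 - statistics.NormalDist().cdf(abs(z)))  # two-sided z-test
    all_results.append(estimate)
    if p_value < 0.05:  # only "significant" results get published
        published.append(estimate)

print(f"True effect:               {TRUE_EFFECT}")
print(f"Mean of all studies:       {statistics.fmean(all_results):.3f}")
print(f"Mean of published studies: {statistics.fmean(published):.3f}")
print(f"Share published:           {len(published) / N_STUDIES:.1%}")
```

Only the studies that happen to land in the extreme tails clear the p < 0.05 bar, so the published average comes out several times larger than the true effect. The withheld negative findings are precisely what a later literature review would need in order to see the true picture.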
Selective Observation: Selective observation occurs when a researcher is drawn to a particular conclusion based on an existing bias or belief. For example, a researcher who is studying obesity may believe that obese people lack willpower and may construct an experiment involving a plate of doughnuts in a conference room at work. If that researcher records data only about obese subjects and ignores non-obese subjects, the experiment will be biased.
- Bad Science (a book by Ben Goldacre)
- Sixty Methodological Potholes
- Conflicts of Interest in Research
- The differences in differences problem
- Statistics problems
- Evaluating Research Quality
- The Truth Wears Off – From The New Yorker