Erroneous analyses of interactions in neuroscience: a problem of significance
PainSci summary of Nieuwenhuis 2011 ★★★★☆ (4-star ratings are for larger and better studies and reviews published in more prestigious journals, with only quibbles. Ratings are a highly subjective opinion, and subject to revision at any time. If you think this paper has been incorrectly rated, please let me know.)
This research identified a major common problem in scientific papers. It was described by Ben Goldacre for The Guardian as “a stark statistical error so widespread it appears in about half of all the published papers surveyed from the academic neuroscience research literature.” Dr. Steven Novella also wrote about it for ScienceBasedMedicine.org recently, adding that “there is no reason to believe that it is unique to neuroscience research or more common in neuroscience than in other areas of research.”
In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05). We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience. We discuss scenarios in which the erroneous procedure is particularly beguiling.
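The fallacy the abstract describes can be made concrete with a small numerical sketch. The effect sizes and standard errors below are invented for illustration (they are not taken from the paper); the point is that one effect can clear the 0.05 threshold while the other misses it, even though a direct test of their difference finds essentially nothing:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical effect estimates (mean, standard error), chosen to
# illustrate the fallacy; these numbers are not from Nieuwenhuis 2011.
effect_a, se_a = 0.25, 0.10   # z = 2.5
effect_b, se_b = 0.15, 0.10   # z = 1.5

p_a = two_sided_p(effect_a / se_a)   # ~0.012  -> "significant"
p_b = two_sided_p(effect_b / se_b)   # ~0.134  -> "not significant"

# Incorrect procedure: conclude the effects differ because one p-value
# is below 0.05 and the other is above it.

# Correct procedure: test the difference between the effects directly.
diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)   # SEs add in quadrature
p_diff = two_sided_p(diff / se_diff)     # ~0.48 -> no evidence of a difference

print(f"p_a = {p_a:.3f}, p_b = {p_b:.3f}, p_diff = {p_diff:.3f}")
```

Because the correct comparison pools both standard errors, two individually noisy estimates can straddle the significance threshold while their difference carries almost no evidence at all.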
- “Why Most Published Research Findings Are False,” an article in PLoS Medicine, 2005.
One article on PainScience.com cites Nieuwenhuis 2011 as a source:
- Statistical Significance Abuse — A lot of research makes scientific evidence seem more “significant” than it is