PainSci summary of Nieuwenhuis 2011. This page is one of thousands in the PainScience.com bibliography. It is not a general article: it is focused on a single scientific paper, and may provide only just enough context for the summary to make sense. Links to other papers and more general information are provided at the bottom of the page whenever possible. Rated ★★★★☆. Four-star ratings are for bigger/better studies and reviews published in more prestigious journals, with only quibbles. Ratings are a highly subjective opinion, and subject to revision at any time. If you think this paper has been incorrectly rated, please let me know.
This research identified a major common problem in scientific papers. It was described by Ben Goldacre for The Guardian as “a stark statistical error so widespread it appears in about half of all the published papers surveyed from the academic neuroscience research literature.” Dr. Steven Novella also wrote about it for ScienceBasedMedicine.org recently, adding that “there is no reason to believe that it is unique to neuroscience research or more common in neuroscience than in other areas of research.”
Original abstract†Abstracts here may not perfectly match the originals, for a variety of technical and practical reasons. Some abstracts are truncated for my purposes here, if they are particularly long-winded and unhelpful. I occasionally add clarifying notes, and I make some minor corrections.
In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05). We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience. We discuss scenarios in which the erroneous procedure is particularly beguiling.
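To see why "one effect is significant, the other is not" does not imply the effects differ, here is a minimal sketch with made-up numbers (the effect sizes, standard errors, and simple z-tests below are illustrative assumptions, not data from the paper). One effect clears P < 0.05 and the other does not, yet the proper test on their *difference* is nowhere near significant:

```python
import math

def p_two_sided(z):
    # Two-sided p-value for a standard normal z score
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical effect estimates and standard errors (illustrative only)
effect_a, se_a = 0.25, 0.10   # the "significant" effect
effect_b, se_b = 0.15, 0.10   # the "non-significant" effect

z_a = effect_a / se_a         # z = 2.5, p ~ 0.012 (below 0.05)
z_b = effect_b / se_b         # z = 1.5, p ~ 0.134 (above 0.05)

# The correct procedure: test the DIFFERENCE between the two effects.
# For independent estimates, the SE of the difference combines both SEs.
diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)
z_diff = diff / se_diff       # z ~ 0.71, p ~ 0.48: no evidence of a difference

print(f"Effect A alone:  p = {p_two_sided(z_a):.3f}")
print(f"Effect B alone:  p = {p_two_sided(z_b):.3f}")
print(f"A versus B:      p = {p_two_sided(z_diff):.3f}")
```

The intuition: 0.25 and 0.15 are only one standard error apart, so the data are entirely consistent with the two effects being identical, even though only one of them happens to cross the significance threshold on its own.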
- “Why Most Published Research Findings Are False,” John Ioannidis, PLoS Medicine, 2005.
One article on PainScience.com cites Nieuwenhuis 2011 as a source:
- Statistical Significance Abuse — A lot of research makes scientific evidence seem more “significant” than it is
This page is part of the PainScience BIBLIOGRAPHY, which contains plain language summaries of thousands of scientific papers and other sources. It’s like a highly specialized blog. A few highlights:
- Effectiveness of customised foot orthoses for Achilles tendinopathy: a randomised controlled trial. Munteanu 2015 Br J Sports Med.
- A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: the case of felt power. Gronau 2017 Comprehensive Results in Social Psychology.
- The neck and headaches. Bogduk 2014 Neurol Clin.
- Agreement of self-reported items and clinically assessed nerve root involvement (or sciatica) in a primary care setting. Konstantinou 2012 Eur Spine J.
- Effect of NSAIDs on Recovery From Acute Skeletal Muscle Injury: A Systematic Review and Meta-analysis. Morelli 2017 Am J Sports Med.