PainSci summary of Cuijpers 2016
This page is one of thousands in the PainScience.com bibliography. It is not a general article: it is focused on a single scientific paper, and it may provide only just enough context for the summary to make sense. Links to other papers and more general information are provided at the bottom of the page, as often as possible.
★★★★☆
4-star ratings are for bigger/better studies and reviews published in more prestigious journals, with only quibbles. Ratings are a highly subjective opinion, and subject to revision at any time. If you think this paper has been incorrectly rated, please let me know.
A clear explanation of all the ways that trials can go wrong — or, as the title mischievously implies, all the ways trials can be made to go wrong. Although written about psychotherapy research, it is directly relevant to musculoskeletal medicine: both fields share the problem of a lot of junky little trials done by professionals trying to prove their pet theories, which produces a lot of “positive” results that just aren’t credible.
A few highlights:
How could you make sure that the randomised trial you do actually results in positive outcomes, showing that your therapy is indeed effective? There are several methods you can use to optimise the chance that your trial will show that the intervention works, even when in reality it does not really work. The goal of this paper is to describe these ‘techniques’.
… the logic of trials is quite straightforward, but there are several points in the design where the researchers can have an influence on the outcomes of the trial
Several meta-analytic studies have shown that waiting list control groups result in much larger effects for the therapy than other control groups. In fact, a meta-analysis of psychotherapies for depression even suggested waiting list might be a nocebo condition, performing worse than a simple no-treatment one.
Instead of showing that your intervention is superior to existing therapies you could also test whether your therapy is not unacceptably worse than a therapy already in use. Such non-inferiority trials are often done to show that a simpler or cheaper treatment is as good as an existing therapy. However, in our case, it is better to avoid these trials because they typically need large sample sizes as well. Furthermore, we do not want to show that our treatment is equivalent to existing therapies, because we already know it is better.
In this paper, we described how a committed researcher can design a trial with an optimal chance of finding a positive effect of the examined therapy. There is an abundant literature for the interested reader wanting to learn more about conducting randomised trials. We saw that a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (the randomisation procedure, blinding of assessors, ignoring participants who dropped out, and reporting only significant outcomes, while leaving out non-significant ones), small sample sizes, waiting list control groups (but not comparisons with existing interventions) are all methods that can help to find positive effects of your therapy. And if all this fails you can always not publish the outcomes, and just wait until a positive trial shows what you had known from the beginning: that your therapy is effective anyway, regardless of what the trials say. For those who think this is all somewhat exaggerated, all of the techniques described here are very common in research on the effects of many therapies for mental disorders.
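The paper’s core statistical point — that running many small, underpowered trials of a worthless therapy and publishing only the “significant” ones yields a uniformly positive published record — can be illustrated with a toy simulation. Everything here is an illustrative sketch, not from the paper itself: the sample size, the number of trials, and the critical value of |t| > 2.1 (an approximation of p < 0.05 at 18 degrees of freedom) are all assumptions chosen for simplicity.

```python
# Toy simulation: many tiny trials of a therapy with ZERO true effect,
# then "publish" only the statistically significant ones.
import random
import statistics

random.seed(42)

def null_trial(n=10):
    """One small trial: both arms drawn from the same distribution (no real effect).
    Returns True if the result is 'statistically significant'."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    # Pooled two-sample t statistic; |t| > 2.1 roughly corresponds to
    # p < 0.05 with 18 degrees of freedom.
    sp = ((statistics.variance(treated) + statistics.variance(control)) / 2) ** 0.5
    t = (statistics.mean(treated) - statistics.mean(control)) / (sp * (2 / n) ** 0.5)
    return abs(t) > 2.1

trials = [null_trial() for _ in range(1000)]
significant = sum(trials)
print(f"{significant} of 1000 null trials came out 'positive' (~5% expected by chance)")
print(f"published record after the file drawer: {significant} positive, 0 negative")
```

Roughly 5% of the null trials come out “positive” by chance alone; if only those are published, the literature shows dozens of positive trials and no negative ones — exactly the pattern the authors warn about.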
original abstract
† Abstracts here may not perfectly match originals, for a variety of technical and practical reasons. Some abstracts are truncated for my purposes here, if they are particularly long-winded and unhelpful. I occasionally add clarifying notes. And I make some minor corrections.
AIMS: Suppose you are the developer of a new therapy for a mental health problem or you have several years of experience working with such a therapy, and you would like to prove that it is effective. Randomised trials have become the gold standard to prove that interventions are effective, and they are used by treatment guidelines and policy makers to decide whether or not to adopt, implement or fund a therapy.
METHODS: You would want to do such a randomised trial to get your therapy disseminated, but in reality your clinical experience already showed you that the therapy works. How could you do a trial in order to optimise the chance of finding a positive effect?
RESULTS: Methods that can help include a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (risk of bias), small sample sizes and waiting list control groups (but not comparisons with existing interventions). And if all that fails one can always not publish the outcomes and wait for positive trials.
CONCLUSIONS: Several methods are available to help you show that your therapy is effective, even when it is not.
These three articles on PainScience.com cite Cuijpers 2016 as a source:
- PS 13 Kinds of Bogus Citations — Classic ways to self-servingly screw up references to science, like “the sneaky reach” or “the uncheckable”
- PS The “Impress Me” Test — Most controversial therapies are fighting over scraps of “positive” evidence that damn them with faint praise
- PS Studying the Studies — Tips and musings about how to understand (and write about) pain and musculoskeletal health science
This page is part of the PainScience BIBLIOGRAPHY, which contains plain language summaries of thousands of scientific papers & other sources. It’s like a highly specialized blog. A few highlights:
- A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: the case of felt power. Gronau 2017 Comprehensive Results in Social Psychology.
- The neck and headaches. Bogduk 2014 Neurol Clin.
- Agreement of self-reported items and clinically assessed nerve root involvement (or sciatica) in a primary care setting. Konstantinou 2012 Eur Spine J.
- Effect of NSAIDs on Recovery From Acute Skeletal Muscle Injury: A Systematic Review and Meta-analysis. Morelli 2017 Am J Sports Med.
- Association of Spinal Manipulative Therapy With Clinical Benefit and Harm for Acute Low Back Pain: Systematic Review and Meta-analysis. Paige 2017 JAMA.