How to prove that your therapy is effective, even when it is not: a guideline
Six pages on PainSci cite Cuijpers 2016:
1. 14 Kinds of Bogus Citations
2. Most Pain Treatments Damned With Faint Praise
3. Studying the Pain Studies
4. What Works for Chronic Pain?
5. Cherry-picking, nit-picking, and bad science dressed as good science
6. Every little bit counts! Unless it’s not ACTUALLY a little bit…
PainSci notes on Cuijpers 2016:
A clear and whimsical explanation of all the ways that trials can go wrong — or, as the title mischievously implies, all the ways trials can be made to go wrong. Although written about psychotherapy research, it is all highly relevant to musculoskeletal medicine: both fields share the problem of lots of junky little trials done by professionals trying to prove their pet theories, which produces a lot of “positive” results that just aren’t credible.
A few highlights:
How could you make sure that the randomised trial you do actually results in a positive outcome, showing that your therapy is indeed effective? There are several methods you can use to optimise the chance that your trial will show that the intervention works, even when in reality it does not really work. The goal of this paper is to describe these ‘techniques’.
… the logic of trials is quite straightforward, but there are several points in the design where the researchers can have an influence on the outcomes of the trial
Several meta-analytic studies have shown that waiting list control groups result in much larger effects for the therapy than other control groups. In fact, a meta-analysis of psychotherapies for depression even suggested waiting list might be a nocebo condition, performing worse than a simple no-treatment condition.
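To see concretely how the choice of comparator changes the headline number, here is a minimal Python sketch. The means and sample size are made-up illustrative values (not data from Cuijpers 2016 or the meta-analyses it cites): the same therapy arm is compared against a waiting-list control that barely improves and a no-treatment control that improves somewhat through natural recovery.

```python
# Illustrative simulation: the same therapy arm looks much better against a
# waiting-list control than against a plain no-treatment control.
# All numbers are made up for demonstration; they are not from Cuijpers 2016.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # participants per arm
sd = 10.0                                # SD of symptom-improvement scores

therapy      = rng.normal(8.0, sd, n)    # mean improvement with the therapy
no_treatment = rng.normal(5.0, sd, n)    # natural recovery without treatment
waiting_list = rng.normal(2.0, sd, n)    # "on hold" -- possibly a nocebo

def cohens_d(a, b):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

print(f"d vs no treatment: {cohens_d(therapy, no_treatment):.2f}")
print(f"d vs waiting list: {cohens_d(therapy, waiting_list):.2f}")
```

Nothing about the therapy changes between the two comparisons; only the control group does, and the effect size against the waiting list comes out roughly twice as large.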
Instead of showing that your intervention is superior to existing therapies you could also test whether your therapy is not unacceptably worse than a therapy already in use. Such non-inferiority trials are often done to show that a simpler or cheaper treatment is as good as an existing therapy. However, in our case, it is better to avoid these trials because they typically need large sample sizes as well. Furthermore, we do not want to show that our treatment is equivalent to existing therapies, because we already know it is better.
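The sample-size point is easy to verify with the standard normal-approximation formulas for two-group trials with a continuous outcome. The sketch below is a generic power calculation, not code or numbers from the paper; the effect size, margin, alpha, and power are illustrative assumptions.

```python
# Rough per-group sample sizes for a superiority trial vs a non-inferiority
# trial, using the usual normal-approximation formulas for two-group means.
# Effect size, margin, alpha, and power are illustrative assumptions.
from scipy.stats import norm

alpha, power, sd = 0.05, 0.80, 1.0
effect = 0.5    # true standardised difference a superiority trial aims to detect
margin = 0.2    # non-inferiority margin (assumes the true difference is zero)

z_beta = norm.ppf(power)

# Superiority: two-sided test for a true difference of `effect`
n_sup = 2 * sd**2 * (norm.ppf(1 - alpha / 2) + z_beta) ** 2 / effect**2

# Non-inferiority: one-sided test that the new therapy is not worse by `margin`
n_ni = 2 * sd**2 * (norm.ppf(1 - alpha) + z_beta) ** 2 / margin**2

print(f"superiority:     ~{n_sup:.0f} per group")    # ~63
print(f"non-inferiority: ~{n_ni:.0f} per group")     # ~309
```

Because the non-inferiority margin is usually much smaller than the effect a superiority trial is powered to detect, the denominator shrinks and the required sample balloons, which is exactly why the committed researcher is advised to steer clear of this design.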
In this paper, we described how a committed researcher can design a trial with an optimal chance of finding a positive effect of the examined therapy. There is an abundant literature for the interested reader wanting to learn more about conducting randomised trials. We saw that a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (the randomisation procedure, blinding of assessors, ignoring participants who dropped out, and reporting only significant outcomes, while leaving out non-significant ones), small sample sizes, waiting list control groups (but not comparisons with existing interventions) are all methods that can help to find positive effects of your therapy. And if all this fails you can always not publish the outcomes, and just wait until a positive trial shows what you had known from the beginning: that your therapy is effective anyway, regardless of what the trials say. For those who think this is all somewhat exaggerated, all of the techniques described here are very common in research on the effects of many therapies for mental disorders.
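The combined effect of small trials, selective outcome reporting, and selective publication is easy to demonstrate by simulation. The sketch below is my own illustration, not code from the paper: it runs many small trials of a therapy with exactly zero true effect, lets each trial measure several outcomes, reports only each trial’s best outcome, and “publishes” only the trials where that best outcome is significantly positive.

```python
# Simulate the recipe: tiny trials, many outcomes per trial, report only the
# best outcome, publish only "positive" trials. The true effect is exactly zero.
# Purely illustrative; not data or code from Cuijpers 2016.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_trials, n_per_arm, n_outcomes = 1000, 15, 5

published_effects = []
for _ in range(n_trials):
    best_p, best_d = 1.0, 0.0
    for _ in range(n_outcomes):                      # several measured outcomes
        treat = rng.normal(0, 1, n_per_arm)          # true effect is zero
        control = rng.normal(0, 1, n_per_arm)
        p = ttest_ind(treat, control).pvalue
        if p < best_p:                               # keep only the "best" outcome
            best_p = p
            pooled = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
            best_d = (treat.mean() - control.mean()) / pooled
    if best_p < 0.05 and best_d > 0:                 # publish only positive trials
        published_effects.append(best_d)

print(f"published: {len(published_effects)} of {n_trials} trials")
print(f"mean published effect size: {np.mean(published_effects):.2f}")
```

Even though the therapy does nothing, a respectable-looking stack of small “positive” trials with medium-to-large effect sizes accumulates, while the unpublished majority quietly disappears.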
original abstract †Abstracts here may not perfectly match originals, for a variety of technical and practical reasons. Some abstracts are truncated for my purposes here, if they are particularly long-winded and unhelpful. I occasionally add clarifying notes. And I make some minor corrections.
AIMS: Suppose you are the developer of a new therapy for a mental health problem or you have several years of experience working with such a therapy, and you would like to prove that it is effective. Randomised trials have become the gold standard to prove that interventions are effective, and they are used by treatment guidelines and policy makers to decide whether or not to adopt, implement or fund a therapy.
METHODS: You would want to do such a randomised trial to get your therapy disseminated, but in reality your clinical experience already showed you that the therapy works. How could you do a trial in order to optimise the chance of finding a positive effect?
RESULTS: Methods that can help include a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (risk of bias), small sample sizes and waiting list control groups (but not comparisons with existing interventions). And if all that fails one can always not publish the outcomes and wait for positive trials.
CONCLUSIONS: Several methods are available to help you show that your therapy is effective, even when it is not.
This page is part of the PainScience BIBLIOGRAPHY, which contains plain language summaries of thousands of scientific papers & other sources. It’s like a highly specialized blog. A few highlights:
- Common interventional procedures for chronic non-cancer spine pain: a systematic review and network meta-analysis of randomised trials. Wang 2025 BMJ.
- Gabapentinoids and Risk of Hip Fracture. Leung 2024 JAMA Netw Open.
- Classical Conditioning Fails to Elicit Allodynia in an Experimental Study with Healthy Humans. Madden 2017 Pain Med.
- Topical glyceryl trinitrate (GTN) and eccentric exercises in the treatment of mid-portion Achilles tendinopathy (the NEAT trial): a randomised double-blind placebo-controlled trial. Kirwan 2024 Br J Sports Med.
- Placebo analgesia in physical and psychological interventions: Systematic review and meta-analysis of three-armed trials. Hohenschurz-Schmidt 2024 Eur J Pain.