The PainScience Bibliography contains plain-language summaries of thousands of scientific papers and other sources, like a specialized blog. This page is about a single scientific paper in the bibliography, Cuijpers 2016.

How to prove that your therapy is effective, even when it is not: a guideline

Cuijpers P, Cristea IA. How to prove that your therapy is effective, even when it is not: a guideline. Epidemiol Psychiatr Sci. 2016 Oct;25(5):428–435. PubMed #26411384.
Tags: scientific medicine, critical thinking, fun, bad science

PainSci summary of Cuijpers 2016 ★★★★☆ (4 stars: a bigger/better study in a more prestigious journal, with only quibbles)

A clear explanation of all the ways that trials can go wrong — or, as the title mischievously implies, all the ways trials can be made to go wrong. Although written about psychotherapy research, it is directly relevant to musculoskeletal medicine: both fields share the problem of lots of junky little trials done by professionals trying to prove their pet theories, which produces a lot of “positive” results that just aren’t credible.

A few highlights:

How could you make sure that the randomised trial you do actually results in positive outcomes, [showing] that your therapy is indeed effective? There are several methods you can use to optimise the chance that your trial will show that the intervention works, even when in reality it does not really work. The goal of this paper is to describe these ‘techniques’.

… the logic of trials is quite straightforward, but there are several points in the design where the researchers can have an influence on the outcomes of the trial

Several meta-analytic studies have shown that waiting list control groups result in much larger effects for the therapy than other control groups. In fact, a meta-analysis of psychotherapies for depression even suggested waiting list might be a nocebo condition, performing worse than a simple no treatment one.

Instead of showing that your intervention is superior to existing therapies you could also test whether your therapy is not unacceptably worse than a therapy already in use. Such non-inferiority trials are often done to show that a simpler or cheaper treatment is as good as an existing therapy. However, in our case, it is better to avoid these trials because they typically need large sample sizes as well. Furthermore, we do not want to show that our treatment is equivalent to existing therapies, because we already know it is better.

In this paper, we described how a committed researcher can design a trial with an optimal chance of finding a positive effect of the examined therapy. There is an abundant literature for the interested reader wanting to learn more about conducting randomised trials. We saw that a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (the randomisation procedure, blinding of assessors, ignoring participants who dropped out, and reporting only significant outcomes, while leaving out non-significant ones), small sample sizes, waiting list control groups (but not comparisons with existing interventions) are all methods that can help to find positive effects of your therapy. And if all this fails you can always not publish the outcomes, and just wait until a positive trial shows what you had known from the beginning: that your therapy is effective anyway, regardless of what the trials say. For those who think this is all somewhat exaggerated, all of the techniques described here are very common in research on the effects of many therapies for mental disorders.
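A note and quick demonstration from me, not the paper: the arithmetic behind two of those tactics — small samples plus selective publication — is easy to show. The minimal simulation below (all numbers illustrative, and a normal approximation standing in for a proper t-test) runs many tiny trials of a therapy with zero true effect, then “publishes” only the flattering ones.

    # A minimal sketch (mine, not the authors'): small, underpowered trials
    # plus a file drawer. The therapy has ZERO true effect by construction.
    import math
    import random
    import statistics

    random.seed(1)

    def run_trial(n_per_arm, true_effect=0.0):
        """One two-arm trial; returns (two-sided p-value, observed mean difference)."""
        treated = [random.gauss(true_effect, 1.0) for _ in range(n_per_arm)]
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        diff = statistics.mean(treated) - statistics.mean(control)
        pooled_sd = math.sqrt(
            (statistics.variance(treated) + statistics.variance(control)) / 2
        )
        se = pooled_sd * math.sqrt(2 / n_per_arm)
        # Normal approximation to the t-test -- close enough for a sketch.
        p = math.erfc(abs(diff / se) / math.sqrt(2))
        return p, diff

    # 200 small trials (15 patients per arm) of a therapy that does nothing...
    trials = [run_trial(n_per_arm=15) for _ in range(200)]

    # ...but only the "positive" ones escape the file drawer.
    published = [(p, d) for p, d in trials if p < 0.05 and d > 0]

    print(f"published: {len(published)} of {len(trials)} trials")
    if published:
        print(f"mean 'effect size' in the published trials: "
              f"{statistics.mean(d for _, d in published):.2f}")

Only a handful of the 200 trials come out “positive,” but those few necessarily report large effects — with only fifteen patients per arm, nothing smaller can reach statistical significance — so the published record describes a therapy that appears to work quite well.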

original abstract

Abstracts here may not perfectly match originals, for a variety of technical and practical reasons. Some abstracts are truncated for my purposes here, if they are particularly long-winded and unhelpful. I occasionally add clarifying notes, and I make some minor corrections.

AIMS: Suppose you are the developer of a new therapy for a mental health problem or you have several years of experience working with such a therapy, and you would like to prove that it is effective. Randomised trials have become the gold standard to prove that interventions are effective, and they are used by treatment guidelines and policy makers to decide whether or not to adopt, implement or fund a therapy.

METHODS: You would want to do such a randomised trial to get your therapy disseminated, but in reality your clinical experience already showed you that the therapy works. How could you do a trial in order to optimise the chance of finding a positive effect?

RESULTS: Methods that can help include a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (risk of bias), small sample sizes and waiting list control groups (but not comparisons with existing interventions). And if all that fails one can always not publish the outcomes and wait for positive trials.

CONCLUSIONS: Several methods are available to help you show that your therapy is effective, even when it is not.

related content

These three articles on PainScience.com cite Cuijpers 2016 as a source:

