
Most Pain Treatments Damned With Faint Praise

Most controversial and alternative therapies are fighting over scraps of “positive” scientific evidence that damn them with the faint praise of small effect sizes that cannot impress

Paul Ingraham • 10m read

It is common for those who promote dubious therapies and treatments to claim scientific support based on studies that were technically positive — but when you look at the data you only find evidence of a trivial beneficial effect. The evidence may be slightly positive, but it fails to impress. The treatment is damned with faint praise.

Surprisingly, weak evidence is also exploited by people who should know better. Scientific reviews and clinical guidelines often include treatment recommendations that are based on inadequate evidence.1 Many forces motivate such carelessness.

If a therapy actually works well, it should be easy to prove it.2 Although large treatment effects are quite rare in medicine in general — because biology is so complicated, and people are so different — they should be impressive enough to leave little room for argument. When a treatment is clearly shown to be effective, it’s exciting! It makes headlines, and it should.

But it’s also incredibly rare.

Most slightly “positive” study results are actually just bogus

The weaker a positive result, the more likely it is to be misleading: not actually positive at all. There are several ways that a “positive” study can actually be negative …

Early studies of a treatment tend to be sketchier and “positive,” often conducted by proponents trying to produce scientific justification for their methods. Eventually less biased investigators do better-quality studies, and the results are negative. This is a classic pattern in the history of science in general, especially medicine.12

So you can see why I’m a little skeptical when someone enthusiastically shows me one paper from an obscure journal reporting a “significant” benefit to, say, acupuncture — which has probably been the subject of more of these “positive” studies than any other treatment.

Meme contrasting “the abstract” (a big lynx reaching out with one paw) with “the paper” (a cute widdle housecat in the same posture).

Better than nothing?

If you’re a glass-is-half-full person, you might be happy to say that weakly positive results are “better than nothing.” Science says chiropractic adjustment of my back might improve my back pain by 3%? Heck, I’ll take 3%!

Sometimes the better-than-nothing interpretation is fair and fine,13 and I’ve used it myself many times. But please don’t confuse optimistic pragmatism with actual knowledge. Weakly positive results, even real ones, do not mean it’s truly established that a treatment “works a little bit.” The bar for that is higher.

Specifically, a treatment has to beat the null hypothesis in most studies over time. This is hard.

The null hypothesis — a pillar of the scientific method — is the default assumption that an effect does not exist until the evidence clearly shows otherwise. In plain English, the null hypothesis says, “Most ideas turn out to be wrong.” And therefore most weakly positive results will turn out to be the product of bias and wishful thinking. And that’s fine.
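
For the statistically inclined, here’s a toy demonstration of why chance alone guarantees a steady supply of weakly “positive” results. This is a minimal sketch of my own in Python (hypothetical numbers, not data from any study cited here):

    import math
    import random
    import statistics

    def fake_trial(n=30):
        """Simulate one small trial of a treatment with ZERO true effect."""
        treated = [random.gauss(0, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = math.sqrt(statistics.variance(treated) / n + statistics.variance(control) / n)
        # Two-sided p-value from a normal approximation to the t test.
        return 2 * (1 - 0.5 * (1 + math.erf(abs(diff / se) / math.sqrt(2))))

    random.seed(1)
    positives = sum(fake_trial() < 0.05 for _ in range(1000))
    print(f"{positives} of 1000 trials of a useless treatment came out 'positive'")
    # Expect roughly 50 (5%) by chance alone -- and that's before any bias,
    # p-hacking, or publication filtering makes things worse.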

Treatments should be considered useless until proven effective. The burden of proof is on the pusher of the idea (and it’s a heavy burden). Treatments must work well and clearly to actually beat the null hypothesis. They must impress! Until they do, the null hypothesis looms over them, still very likely to win in the long run.

The null hypothesis has kicked a lot of theories’ butts over the centuries. It is still the champ.

Maybe it’s just hard to confirm?

The wishful thinker is also inclined to say, “But maybe there is a strong effect and it’s just erratic, hard for science to pin down!”

Perhaps.

But all such objections are forms of special pleading for an exception,14 of protesting that “science doesn’t know everything” (a classic, common non sequitur from people defending quackery15). Yes, science might catch up and validate something previously missed.

But it’s unlikely. And even if the effect is real, a benefit so erratic that science can’t even find promising evidence of its existence also tends to be awkward or useless in practice. If a standardized treatment protocol can’t deliver the goods in a somewhat reliable fashion, it’s not really useful medicine — or at least it’s not medicine I want to spend my money on until its “erratic” nature is better understood.

“A promising treatment is often in fact merely the larval stage of a disappointing one. At least a third of influential trials suggesting benefit may either ultimately be contradicted or turn out to have exaggerated effectiveness.”

Bastian, 2006, J R Soc Med

Fighting over scraps

The science of painful problems is still surprisingly rudimentary and preliminary. We can try to critically assess it, and I do, but “replication needed” is usually all that really needs to be said. That covers all the bases. At the end of the day, if slightly promising results cannot be confirmed by other researchers, it doesn’t really matter what was wrong with the original research. Either a treatment works well enough to consistently produce impressive results … or it doesn’t.

Controversy about many popular therapies is much ado about not much, and we’re mostly fighting over pathetic scraps of evidence. After decades of study, the effectiveness of a therapy should be clear and significant in order to justify its continued existence or “more study.” If it’s still hopelessly mired in controversy after so many years — more than a century in some cases (*cough* homeopathy *cough*) — how good can it possibly be? Why would anyone — patient or professional — feel enthusiastic about a therapy that can’t clearly show its superiority in a fair scientific test? Where’s the value in even debating a therapy that is clearly not working any miracles, that has a trivial benefit at best?

The long-term persistence of such debate constitutes evidence of absence. Several dozen crappy studies with weakly positive results are roughly equivalent to proof that there’s no beef, with or without high-quality studies to put the nail in the coffin. More research is a waste of time and resources.16

Science, as they say, really delivers the goods: missions to Mars, long lives, the internet. A therapy has to deliver the goods. It’s got to help most people a fair amount and most of the time … or who cares?

Until it impresses you, it’s just some idea that hasn’t yet shown much promise.

It’s okay not to know

Stylized image of Carl Sagan’s face captioned with the large, bold acronym “WWCSD”.

What would Carl Sagan do? Always a good question.

Readers and patients are forever asking me what my “hunch” is about a therapy: does it work? Is there anything to it? I’m honoured that my opinion is so sought after, but I usually won’t take the bait. Like Carl Sagan, “I try not to think with my gut.”

It’s okay not to know. It’s okay for the jury to be out.

And it had better be, because there’s still a great deal of mystery in musculoskeletal health science. Most of the scientific evidence that I interpret for readers of PainScience fails the “impress me” test. Even when that evidence is actually positive — and it’s hard to tell — it’s often only slightly positive. Even when there’s evidence that a therapy works, it’s usually weak evidence: some studies concluded that maybe it helps some people, some of the time … while other studies, almost always the better ones, showed no effect at all. I’m supposed to get excited about this? To justify real confidence in a therapy, we want really good evidence, evidence that makes you sit up and take notice, evidence that ends arguments because it’s just that clear.

Anything less fails to impress!

I don’t want to believe. I want to know.

Carl Sagan

We must somehow find a way to make peace with limited information, eagerly seeking more, without being dogmatic about premature conclusions.

Science and The Game Of 20 Questions, by Val Jones

About Paul Ingraham

Headshot of Paul Ingraham, short hair, neat beard, suit jacket.

I am a science writer in Vancouver, Canada. I was a Registered Massage Therapist for a decade and the assistant editor of ScienceBasedMedicine.org for several years. I’ve had many injuries as a runner and ultimate player, and I’ve been a chronic pain patient myself since 2015. Full bio. See you on Facebook or Twitter.

What’s new in this article?

2020 — An unusually large batch of typo corrections. (I don’t normally consider correcting typos to be worthy of logging an update, but in this case… sheesh.)

2017 — Added a couple citations and an important technical point about false positives when “testing magic.”

2016 — Science update: citation to Pereira 2012 about the lack of large treatment effects in medicine.

2009 — Publication.

Notes

  1. Colquhoun D. Recommendations are made in the absence of any good treatments. BMJ. 2017;(358):j3975. Dr. David Colquhoun briefly but persuasively argues that clinical guidelines and scientific reviews routinely make recommendations based on inadequate evidence, substantially due to a common failure to appreciate the risk of false positives in positive studies of treatments with low prior plausibility: “every false positive not only harms patients (and budgets) but also provides ammunition for the antiscience brigade, who are now so evident.”
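
    To make that false-positive risk concrete, here’s a back-of-the-envelope sketch in Python (my illustrative priors, not Colquhoun’s figures), using conventional 80% power and a 5% significance threshold:

        def ppv(prior, power=0.80, alpha=0.05):
            """Chance that a 'positive' trial reflects a real effect."""
            true_pos = power * prior          # real effects correctly detected
            false_pos = alpha * (1 - prior)   # null effects flagged by chance
            return true_pos / (true_pos + false_pos)

        print(f"plausible treatment   (prior 50%): {ppv(0.50):.0%} of positives real")
        print(f"long-shot treatment   (prior 10%): {ppv(0.10):.0%}")
        print(f"implausible treatment (prior  1%): {ppv(0.01):.0%}")
        # Roughly 94%, 64%, and 14%: the less plausible the treatment, the
        # more likely a "positive" result is a false alarm.
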
  2. Standard proof caveat: nothing is ever truly “proved,” of course. When we talk of proof in science, we don’t mean total certainty, but more like the certainty you feel about the sun rising tomorrow.
  3. Nuzzo R. Scientific method: statistical errors. Nature. 2014 Feb;506(7487):150–2. PubMed 24522584 ❐ “The more implausible the hypothesis — telepathy, aliens, homeopathy — the greater the chance that an exciting finding is a false alarm, no matter what the P value is.”
  4. Pandolfi M, Carreras G. The faulty statistics of complementary alternative medicine (CAM). Eur J Intern Med. 2014 Sep;25(7):607–9. PubMed 24954813 ❐
  5. The word “significant” in scientific abstracts is routinely misleading. It does not mean that the results are large or meaningful, and in fact is used to hide precisely the opposite. When only “significance” is mentioned, it almost invariably refers to the notoriously problematic “p-value,” a technically-true distraction from the more meaningful truth of a tiny “effect size”: results that are not actually impressive. This practice has been considered bad form by experts for decades, but is still extremely common. See Statistical Significance Abuse: A lot of research makes scientific evidence seem much more “significant” than it is.
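
    The distinction is easy to demonstrate: given enough subjects, even a trivial benefit sails under the p < 0.05 bar. A minimal sketch of my own (hypothetical pain scores, not from any study cited here):

        import math
        import random
        import statistics

        random.seed(2)
        n = 5000  # subjects per group: a very large (hypothetical) trial
        control = [random.gauss(50.0, 20.0) for _ in range(n)]  # 0-100 pain scores
        treated = [random.gauss(48.5, 20.0) for _ in range(n)]  # true benefit: 1.5 points

        diff = statistics.mean(control) - statistics.mean(treated)
        sd = math.sqrt((statistics.variance(control) + statistics.variance(treated)) / 2)
        se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(diff / se) / math.sqrt(2))))

        print(f"p = {p:.4f}")                      # typically far below 0.05 at this size
        print(f"effect size d = {diff / sd:.2f}")  # ~0.08, well below even "small" (0.2)
        # "Statistically significant" and clinically trivial at the same time.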

  6. One of my favourites is another technically correct but misleading stats term, “trending.” When the results are positive but not statistically significant, paper authors will often still summarize by saying that there was a “positive trend” in the data: not enough to claim significance, mind you, but not actually negative. It’s a good way of making a worthless study still sound a little positive.
  7. A “predatory journal” is a fraudulent journal that publishes anything for pay (literally anything, even gibberish), without peer review. This is a new kind of junk science, as bad as any pseudoscience. These “journals” are scams: their purpose is to rip off academics who are desperate to “publish or perish.” There are thousands of predatory journals now, many of which have high superficial legitimacy (they look a lot like real journals, e.g. actually indexed in PubMed). Some of the research is undoubtedly earnest, but cannot be trusted without peer review. See Gasparyan et al and 13 Kinds of Bogus Citations.
  8. Except it’s usually noteworthy that, even by cheating and lying and bending every rule in their favour, they still couldn’t produce better results!
  9. Ingraham. 13 Kinds of Bogus Citations: Classic ways to self-servingly screw up references to science, like “the sneaky reach” or “the uncheckable”. PainScience.com. 6094 words.
  10. Cuijpers P, Cristea IA. How to prove that your therapy is effective, even when it is not: a guideline. Epidemiol Psychiatr Sci. 2016 Oct;25(5):428–435. PubMed 26411384 ❐ A clear explanation of all the ways that trials can go wrong — or, as the title mischievously implies, all the ways trials can be made to go wrong. Although written about psychotherapy research, it is directly relevant to musculoskeletal medicine: both fields share the problem of lots of junky little trials done by professionals trying to prove their pet theories, which produces a lot of “positive” results that just aren’t credible.
  11. Ioannidis J. Why Most Published Research Findings Are False. PLoS Medicine. 2005 Aug;2(8):e124. PainSci Bibliography 55463 ❐
  12. Pereira TV, Horwitz RI, Ioannidis JPA. Empirical evaluation of very large treatment effects of medical interventions. JAMA. 2012 Oct;308(16):1676–84. PubMed 23093165 ❐

    A “very large effect” in medical research is probably exaggerated, according to Stanford researchers. Small trials of medical treatments often produce results that seem impressive. However, when more and better trials are performed, the results are usually much less promising. In fact, “most medical interventions have modest effects” and “well-validated large effects are uncommon.”

  13. Whether you use unimpressive positive results to justify giving a treatment a try depends largely on other factors: Is it expensive? Is it dangerous? Will it interfere with other, better treatment options? And so on. It’s a pragmatic calculation, not a scientific conclusion.
  14. Special pleading is an informal fallacy: claiming an exception to a general trend or principle without actually establishing that it is, either using a thin rationalization or even just using the exception as evidence for itself (“the rules don’t apply to my claim because my claim is an exception to the rule”).

  15. It’s true but obvious, and irrelevant to their point … which is that their kooky treatment beliefs are so exotic that they are immune to investigation and criticism, beyond the reach of science. Nope! Not even close! It’s like declaring a leaky old canoe to be seaworthy because we don’t yet know everything about the ocean depths.
  16. Gorski DH, Novella SP. Clinical trials of integrative medicine: testing whether magic works? Trends in Molecular Medicine. 2014. PainSci Bibliography 53769 ❐

    A lot of dead horses are getting beaten in alternative medicine: pointlessly studying silly treatments like homeopathy and reiki over and over again, as if it’s going to tell us something we don’t already know. This point has been made ad infinitum on ScienceBasedMedicine.org since its founding in 2008, but here Drs. Novella and Gorski make the case against testing “whether magic works” in a high-impact journal, Trends in Molecular Medicine.

Permalinks

https://www.painscience.com/articles/impress-me-test.php

PainScience.com/impress_me
PainScience.com/the_impress_me_test


2,750 words