Science is not meant to cure us of mystery, but to reinvent and reinvigorate it.
~ Dr. Robert Sapolsky, from his classic book, Why Zebras Don't Get Ulcers
These days most good health care professionals take it for granted that treatment ideas should be blessed by science to some degree. But to what degree? Blessed how much? Blessed how?
Despite that, there is still a serious lack of evidence-based practice across the board. It is getting better, but it’s slow.1 There are some signs of improvement (with back pain particularly), but musculoskeletal medicine is still a cocky teenager, just starting to come of age and figure out that it doesn’t know everything.
Back in the good old days there wasn’t evidence of anything one way or another (absence of evidence) and everyone pretty much did whatever they liked as long as it sounded good and the patients were happy. If you could get people to pay for it, that was good enough! Market-based medicine. Experience-based medicine. What could possibly go wrong? Entire modality empires sprang up out of the fertilizer of hunches and pet theories, many of them reasonable but definitely wrong, and many more “not even wrong.”
As standards have gone up and science has (finally!) started to test some of the 20th Century’s biggest treatment ideas, the results have shown that well-validated large effects in medicine are uncommon,2 in most cases nothing is going on except a creatively induced placebo (evidence of absence of any medical effect)… and placebo isn’t all that powerful and probably should never be justification for a therapy.3
In fact, science has become quite the buzzkill … especially for the treatment of pain and musculoskeletal problems,4 and manual therapists of all kinds — physical therapists, chiropractors, massage therapists — have started to wonder if anything actually works, why they read this damn website anyway, and how they can justify what they are selling without more encouraging trials to point to.
The three most dangerous words in medicine: in my experience.
~ Mark Crislip, MD
Evidence isn’t everything, and clinical experience and patient buy-in are huge
Despite the rise and importance of Evidence-Based Medicine™, evidence produced by good quality trials isn’t everything. It is not and never has been the sole criterion for choosing health care interventions. There’s much more to it, and there always has been. Specifically, EBM has always formally, explicitly defined itself as the integration of clinical experience and patient values, preferences, and expectations with the best available clinical evidence.
There are several variations on this chart, but the take-home message is always the same: the application of EBM isn’t just about the evidence.
For instance, a physical therapist deciding whether or not to use dry needling might consider three things:
- the evidence supporting dry needling is a bit iffy,
- but in his experience it works well for most people,
- and yet this patient reacts very poorly to it and doesn’t care for the risk, even if there’s still a possibility of benefit.
Therapy is a process
As Jason Silvernail, Doctor of Physical Therapy, argues,5 “The manual therapy approach is a ‘process’ of care centred on a reasoning model, not a ‘product’ consisting of one or more manipulative techniques,” and that process may be effective even if individual techniques are unimpressive. Good manual therapy is probably more than the sum of its parts.
Patients cannot meaningfully apply their values and preferences until they are informed, but once they are, “informed consent” goes a long way. Professionals can legitimately do a lot of sketchy stuff if only they speak the magic words: “This is experimental. It may not work. I think it’s worth trying because yada yada yada and the risks are super low. Do you want to proceed?”
Patients really appreciate that approach. In my experience.
Absence of evidence is actually not a deal-breaker. And absence of evidence is still very common, even today. For all the progress we’ve made, pain and musculoskeletal medicine research has still only just scratched the surface.
All of this puts evidence in its place … but that is still a place of honour. Testing treatments matters!6
There’s a stand-up comedy routine in which Chris Rock makes fun of people who say things like ‘I take care of my kids!’ or ‘I’ve never been to jail!’ His punchline? ‘You’re supposed to take care of your kids. You’re supposed to stay out of jail. They aren’t things you can boast about.’
‘Evidence-based’ strikes me like that. You’re supposed to use evidence. It’s not something you get to brag about.
That said, just exactly how much scientific evidence is actually needed for a theory or technique to be acceptable?
This is the bare minimum required:
- Biological plausibility. It has to make sense. If the idea is daft — if it’s at odds with any well-established biology, chemistry, physics — that’s a deal-breaker. Goodbye, therapeutic touch/Reiki. Important and under-appreciated: testing of implausible treatments tends to produce false positives.7
- There can’t be evidence-of-absence. If there’s persuasive trial evidence that shows no benefit, or damns a technique with very faint praise (which is extremely common), that’s another deal-breaker. Goodbye, glucosamine.
But the bar rises quickly in proportion to the costs and risks, or if there’s no informed consent. At that point, clearly positive, good-quality, replicated trial evidence becomes necessary. And support from bad science alone is not enough, which actually disqualifies many treatments (homeopathy, for instance).
What if there’s new, positive evidence? What then?
This happens quite a bit: a new study of a treatment comes out with positive results that contradict a history of negative or only weakly positive results. Does it move the needle? Does it mean the treatment is closer to being worth a try?
Never for just one study, no! No matter how good it looks. The scientific publishing industry is basically spewing low-quality studies like a crap firehose, and some of those are crappy for reasons we can’t even see. It would take a lot more than just one positive study to reverse the negative trend enough to hit my own threshold for “worth a try.” Given a history of negative results, it would probably take at least three strongly positive trials with no glaring methodological flaws or researcher biases … and that still wouldn’t be “proof,” not by a long shot. But it would swing the pendulum enough that I might endorse the gamble (depending on the costs and risks too, of course).
As long as the costs/risks are low enough, I’m actually not that hard to please …
Surprise! My standards are low! (Sort of)
I have a reputation for being critical of many (or most?) theories and techniques, so many readers may be surprised by just how low my standards are. But I really do think that many unproven theories and techniques are fair game — assuming they’re fairly safe, cheap, and plausible, and haven’t been spanked by good trials yet, or damned with faint praise by bad, biased ones.
Here’s the “but,” though: one big problem that sustains my militant skepticism and keeps me cranky …
For example, I think trigger point therapy, despite its many problems,8 is still a defensible approach to some kinds of pain as long as the risks and costs are tamed and it’s presented with strong, humble disclaimers. It’s just fine if a therapist puts it to patients like this:
“I do trigger point therapy, even though no one really knows what trigger points are. We have some theories. The science so far is not very encouraging, and there’s a bunch of controversy. Although there are still reasons for optimism, basically no one can really know yet if we can do anything about them. It’s a gamble, and not cheap. But we’ll be gentle and efficient and I won’t recommend a long expensive course of treatment without promising signs. Do you want to proceed?”
But I have a huge problem with this kind of thing (which is, of course, rarely spelled out):
“Trigger point therapy works! My results speak for themselves. I understand this kind of pain and I can treat it. Now enjoy my magic hands [or needles]… which are going to hurt both your body and your wallet, by the way.”
In the absence of good decisive science — which is all too often the case — it’s really all about the framing and the humility and the doing-no-harm.
What do you do when confronted with evidence that’s a bummer? At odds with your experience?
I want PainScience.com to be known as an EBM-friendly website, so what do I do when the evidence is contradicted by the clinical experience of my readers? Or my own?
I’m a writer, not a magician. I mostly stay focused on reporting the evidence, and that’s a big enough job.
The artful merging of evidence and experience with the unique special-flowerness of the patient in front of you is a clinical challenge … not my writing challenge. Clinicians have to make decisions based on all three, every day. That’s their job. I left that challenge behind several years ago. These days, my new challenge is to provide clinicians (and patients) with as good a picture of the evidence as I can. I’m a specialist now — I focus on just one of the pillars of EBM. The science-y pillar.
On the other hand, I was also a clinician for ten years, and I have constant and deep correspondence with many extremely experienced clinicians today. So there are indeed hat tips to clinical experience here, there, and everywhere on PainScience.com. I do write about what clinicians believe. But, mostly, I stick to what the evidence can support.
But for you clinicians: when confronted with evidence that’s a bummer, at odds with your experience, remember that your experience is a fully legit third of that EBM equation. But! You must be very cautious not to lean too hard on your experience, because “you are the easiest person to fool” (Feynman). It’s only a third of the equation. Not two thirds. Not half. Just a third, roughly, give or take (probably always less than a third for younger professionals). And it’s never a very reliable third. Just like science, experience is difficult to interpret and often wrong.
About Paul Ingraham
I am a science writer and former massage therapist, and I was the assistant editor at ScienceBasedMedicine.org for several years. I have had my share of injuries and pain challenges as a runner and ultimate player. My wife and I live in downtown Vancouver, Canada. See my full bio and qualifications, or my blog, Writerly. You might run into me on Facebook or Twitter.
- Quackery Red Flags — Beware the 3 D's of quackery: Dubious, Dangerous and Distracting treatments for aches and pains (or anything else)
- The “Impress Me” Test — Most controversial therapies are fighting over scraps of “positive” evidence that damn them with faint praise
- Why “Science”-Based Instead of “Evidence”-Based? — The rationale for making medicine more science-based
- Alternative Medicine’s Choice: Alternative to What? — Alternative to what? To cold and impersonal medicine? Or to science and reason?
- Ioannidis: Making Medical Science Look Bad Since 2005 — A famous and excellent scientific paper … with an alarmingly misleading title
- Statistical Significance Abuse — A lot of research makes scientific evidence seem more “significant” than it is
- Insurance Is Not Evidence — Debunking the idea that “it must be good if insurance companies pay for it”
- The Power of Barking — A silly metaphor for a serious point about correlation, causation, and how we decide what treatments work
What’s new in this article?
2018 — Added minor point about how new positive evidence affects a record of negative evidence.
2017 — Merged in a couple older blog posts, added several references and footnotes, revised and re-framed et voila: this is now the official new “science versus experience” page for PainScience.com.
- Grant HM, Tjoumakaris FP, Maltenfort MG, Freedman KB. Levels of Evidence in the Clinical Sports Medicine Literature: Are We Getting Better Over Time? Am J Sports Med. 2014 Apr;42(7):1738–1742. PubMed #24758781.
- Pereira TV, Horwitz RI, Ioannidis JP. Empirical evaluation of very large treatment effects of medical interventions. JAMA. 2012 Oct;308(16):1676–84. PubMed #23093165.
A “very large effect” in medical research is probably exaggerated, according to Stanford researchers. Small trials of medical treatments often produce results that seem impressive. However, when more and better trials are performed, the results are usually much less promising. In fact, “most medical interventions have modest effects” and “well-validated large effects are uncommon.”
- Ingram T, Silvernail J, Benz LN, Flynn TW. A cautionary note on endorsing the placebo effect. J Orthop Sports Phys Ther. 2013 Nov;43(11):849–51. PubMed #24175623. PainSci #54098. “We feel strongly that our patients deserve scientifically defensible care that is more than just artfully delivered placebo.”
- Machado LA, Kamper SJ, Herbert RD, Maher CG, McAuley JH. Analgesic effects of treatments for non-specific low back pain: a meta-analysis of placebo-controlled randomized trials. Rheumatology (Oxford). 2009 May;48(5):520–7. PubMed #19109315. PainSci #54670.
This is a meticulous, sensible, and readable analysis of the very best studies of back pain treatments that have ever been done: the greatest hits of back pain science. There is a great deal of back pain science to review, but authors Machado, Kamper, Herbert, Maher, and McAuley found that shockingly little of it was worth their while: just 34 acceptable studies out of 1031 candidates, and even among those “trial quality was highly variable.” Their conclusions are derived from only the best sort of scientific experiments: not just the gold standard of randomized and placebo-controlled tests, but carefully choosing only the “right” kind of placebos (several kinds of placebos were grounds for disqualification, because of their known potential to skew the results). They do a good job of explaining exactly how and why they picked the studies they did, and pre-emptively defending it from a couple of common concerns. The results were sad and predictable, robust evidence of absence: “The average effects of treatments … are not much greater than those of placebos.”
- Silvernail J. Manual therapy: process or product? J Man Manip Ther. 2012 May;20(2):109–10. PubMed #23633891. PainSci #54128.
- Evans I, Thornton H, Glasziou P. Testing treatments: better research for better healthcare. 2nd ed. Pinter & Martin; 2011. This excellent book is currently available for free from www.TestingTreatments.org. It’s a superb exploration of why research matters, and how it’s done.
- Pandolfi M, Carreras G. The faulty statistics of complementary alternative medicine (CAM). Eur J Intern Med. 2014 Sep;25(7):607–9. PubMed #24954813.
- People experience muscle pain and acutely sensitive spots in muscle tissue that we call “muscle knots.” What’s going on? The dominant theory is that a trigger point is basically an isolated spasm of a small patch of muscle tissue. Unfortunately, trigger point science is half-baked and controversial, and it’s not even clear that it’s a “muscle” problem. Meanwhile, people keep hurting, and massage — especially self-massage — is a safe, cheap, reasonable way to try to help. That’s why I have a large tutorial devoted to how to self-treat “trigger points” — whatever they really are. See Trigger Point Doubts: Do muscle knots exist? Exploring controversies about the existence and nature of so-called “trigger points” and myofascial pain syndrome.