
Science versus Experience in Musculoskeletal Medicine

The conflict between science and clinical experience and pragmatism in the management of aches, pains, and injuries

by Paul Ingraham, Vancouver, Canada
I am a science writer and a former Registered Massage Therapist with a decade of experience treating tough pain cases. I was the Assistant Editor of ScienceBasedMedicine.org for several years. I’ve written hundreds of articles and several books, and I’m known for readable but heavily referenced analysis, with a touch of sass. I am a runner and ultimate player. • more about me • more about PainScience.com

These days most good health care professionals take it for granted that treatment ideas need to be blessed by science to some degree. But to what degree? Blessed how much?

Back in the good old days there wasn’t evidence of anything one way or another (absence of evidence) and everyone pretty much did whatever they liked as long as it sounded good and the patients were happy. If you could get people to pay for it, that was good enough! Market-based medicine. Experience-based medicine. What could possibly go wrong? Entire modality empires sprang up out of the fertilizer of hunches and pet theories, many of them reasonable but definitely wrong, and many more “not even wrong.”

As standards have gone up and science has (finally!) started to test some of the 20th Century’s biggest treatment ideas, the results have shown that well-validated large effects in medicine are uncommon,1 in most cases nothing is going on except a creatively induced placebo (evidence of absence of any medical effect)… and placebo isn’t all that powerful and probably should never be justification for a therapy.2 In fact, science has become quite the buzzkill… especially for the treatment of pain and musculoskeletal problems,3 and manual therapists of all kinds — physical therapists, chiropractors, massage therapists — have started to wonder if anything actually works, why they read this damn website anyway, and how they can justify what they are selling without more encouraging trials to point to.

The three most dangerous words in medicine: in my experience.

Mark Crislip, MD

Evidence isn’t everything, and clinical experience and patient buy-in are huge

Despite the rise and importance of Evidence-Based Medicine™, evidence produced by good quality trials isn’t everything. It is not, and never has been, the sole criterion for choosing health care interventions. There’s much more to it, and there always has been. Specifically, EBM has always formally, explicitly defined itself as the integration of clinical experience and patient values, preferences, and expectations with the best available clinical evidence.

There are several variations on this chart, but the take-home message is always the same: the application of EBM isn’t just about the evidence.

For instance, a physical therapist deciding whether or not to use dry needling might consider three things:

  1. the evidence supporting dry needling is a bit iffy,
  2. but in his experience it works well for most people,
  3. and yet this patient reacts very poorly to it and doesn’t care for the risk, even if there’s still a possibility of benefit.

Therapy is a process

As Jason Silvernail, Doctor of Physical Therapy, argues,4 “The manual therapy approach is a ‘process’ of care centred on a reasoning model, not a ‘product’ consisting of one or more manipulative techniques,” and that process may be effective even if individual techniques are unimpressive. Good manual therapy is probably more than the sum of its parts.

Patients cannot meaningfully apply their values and preferences until they are informed, but once they are, “informed consent” goes a long way. Professionals can legitimately do a lot of sketchy stuff if only they speak the magic words: “This is experimental. It may not work. I think it’s worth trying because yada yada yada and the risks are super low. Do you want to proceed?”

Patients really appreciate that approach. In my experience.

Absence of evidence is actually not a deal breaker. And AoE is still very common, even today. For all the progress we’ve made, pain and musculoskeletal medicine research has still only just scratched the surface.

All of this puts evidence in its place… but that is still a place of honour. Testing treatments matters!5

That said, just exactly how much scientific evidence is actually needed for a theory or technique to be acceptable?

This is the bare minimum required: the treatment should be plausible, reasonably safe, reasonably cheap, offered with honest informed consent, and not already contradicted by good quality trials.

But the bar gets raised quickly in proportion to the costs and risks, or if there’s no informed consent. Clearly positive, good quality, replicated trial evidence becomes necessary then. And support from bad science only is not enough, which actually disqualifies many treatments (homeopathy, for instance).6

Surprise! My standards are low! (Sort of)

I have a reputation for being critical of many (or most?) theories and techniques, so many readers may be surprised by just how low my standards are. But I really do think that many unproven theories and techniques are fair game — assuming they’re fairly safe, cheap, plausible. And haven’t been spanked by good trials yet. Or damned with faint praise by bad, biased ones.

Here’s the “but” though, one big problem that sustains my militant skepticism…

Informed consent is usually broken, because most patients are never properly informed: experimental treatments that might be justifiable are presented to patients as if they are proven. Way too many therapists are grossly overconfident, and wildly overestimate the value of their clinical experience while underestimating the value of scientific evidence. And so patients are routinely presented with cocky self-serving claims of efficacy by therapists.

That is what keeps me cranky.

For example, I think trigger point therapy, despite its many problems,7 is still a defensible approach to some kinds of pain as long as the risks and costs are tamed and it’s presented with strong, humble disclaimers. It’s just fine if a therapist puts it to patients like this:

“I do trigger point therapy, even though no one really knows what trigger points are. We have some theories. The science so far is not very encouraging, and there’s a bunch of controversy. Although there are still reasons for optimism, basically no one can really know yet if we can do anything about them. It’s a gamble, and not cheap. But we’ll be gentle and efficient and I won’t recommend a long expensive course of treatment without promising signs. Do you want to proceed?”

But I have a huge problem with this kind of thing (which is, of course, rarely spelled out):

“Trigger point therapy works! My results speak for themselves. I understand this kind of pain and I can treat it. Now enjoy my magic hands [or needles]… which are going to hurt both your body and your wallet, by the way.”

In the absence of good decisive science — which is all too often — it’s really all about the framing and the humility and the doing-no-harm.

What do you do when confronted with evidence that’s a bummer? At odds with your experience?

I want PainScience.com to be known as an EBM-friendly website, so what do I do when the evidence is contradicted by the clinical experience of my readers? Or my own?

I’m a writer, not a magician. I mostly stay focused on reporting the evidence, and that’s a big enough job.

The artful merging of evidence and experience with the unique special-flowerness of the patient in front of you is a clinical challenge…not my writing challenge. Clinicians have to make decisions based on all three, every day. That’s their job. I left that challenge behind several years ago. These days, my new challenge is to provide clinicians (and patients) with as good a picture of the evidence as I can. I’m a specialist now — I focus on just one of the pillars of EBM. The science-y pillar.

On the other hand, I was also a clinician for ten years, and I have constant and deep correspondence with many extremely experienced clinicians today. So there are indeed hat tips to clinical experience here, there, and everywhere on PainScience.com. I do write about what clinicians believe. But, mostly, I stick to what the evidence can support.

But for you clinicians: when confronted with evidence that’s a bummer, at odds with your experience, remember that your experience is a fully legit third of that EBM equation. But! You must be very cautious not to lean too hard on your experience, because “you are the easiest person to fool” (Feynman). It’s only a third of the equation. Not two thirds. Not half. Just a third, roughly, give or take (probably always less than a third for younger professionals). And it’s never a very reliable third. Just like science, experience is difficult to interpret and often wrong.


About Paul Ingraham


I am a science writer and former massage therapist, and I was the assistant editor at ScienceBasedMedicine.org for several years. I have had my share of injuries and pain challenges as a runner and ultimate player. My wife and I live in downtown Vancouver, Canada. See my full bio and qualifications, or my blog, Writerly. You might run into me on Facebook or Twitter.


What’s new in this article?

October: Merged in a couple of older blog posts, added several references and footnotes, revised and re-framed. Et voilà: this is now the official new “science versus experience” page for PainScience.com.

Notes

  1. Pereira TV, Horwitz RI, Ioannidis JP. Empirical evaluation of very large treatment effects of medical interventions. JAMA. 2012 Oct;308(16):1676–84. PubMed #23093165.

    A “very large effect” in medical research is probably exaggerated, according to Stanford researchers. Small trials of medical treatments often produce results that seem impressive. However, when more and better trials are performed, the results are usually much less promising. In fact, “most medical interventions have modest effects” and “well-validated large effects are uncommon.”

  2. Ingram T, Silvernail J, Benz LN, Flynn TW. A cautionary note on endorsing the placebo effect. J Orthop Sports Phys Ther. 2013 Nov;43(11):849–51. PubMed #24175623. PainSci #54098.
  3. Machado LA, Kamper SJ, Herbert RD, Maher CG, McAuley JH. Analgesic effects of treatments for non-specific low back pain: a meta-analysis of placebo-controlled randomized trials. Rheumatology (Oxford). 2009 May;48(5):520–7. PubMed #19109315. PainSci #54670.

    This is a meticulous, sensible, and readable analysis of the very best studies of back pain treatments that have ever been done: the greatest hits of back pain science. There is a great deal of back pain science to review, but authors Machado, Kamper, Herbert, Maher and McAuley found that shockingly little of it was worth their while: just 34 acceptable studies out of 1,031 candidates, and even among those “trial quality was highly variable.” Their conclusions are derived from only the best sort of scientific experiments: not just the gold-standard of randomized and placebo-controlled tests, but carefully choosing only the “right” kind of placebos (several kinds of placebos were grounds for disqualification, because of their known potential to skew the results). They do a good job of explaining exactly how and why they picked the studies they did, and pre-emptively defending it from a couple of common concerns. The results were sad and predictable, robust evidence of absence: “The average effects of treatments … are not much greater than those of placebos.”

  4. Silvernail J. Manual therapy: process or product? J Man Manip Ther. 2012 May;20(2):109–10. PubMed #23633891. PainSci #54128.
  5. Evans I, Thornton H, Glasziou P. Testing treatments: better research for better healthcare. 2nd ed. Pinter & Martin; 2011. This excellent book is currently available for free from www.TestingTreatments.org. It’s a superb exploration of why research matters, and how it’s done.
  6. Pandolfi M, Carreras G. The faulty statistics of complementary alternative medicine (CAM). Eur J Intern Med. 2014 Sep;25(7):607–9. PubMed #24954813.
  7. People experience muscle pain and acutely sensitive spots in muscle tissue that we call “muscle knots.” What’s going on? The dominant theory is that a trigger point is basically an isolated spasm of a small patch of muscle tissue. Unfortunately, trigger point science is half-baked and controversial, and it’s not even clear that it’s a “muscle” problem. Meanwhile, people keep hurting, and massage — especially self-massage — is a safe, cheap, reasonable way to try to help. That’s why I have a large tutorial devoted to how to self-treat “trigger points” — whatever they really are. See Trigger Point Doubts: Do muscle knots exist? Exploring controversies about the existence and nature of so-called “trigger points” and myofascial pain syndrome.