Science versus Experience in Musculoskeletal Medicine
The conflict between science and clinical experience and pragmatism in the management of aches, pains, and injuries
Science is not meant to cure us of mystery, but to reinvent and reinvigorate it.
Dr. Robert Sapolsky, from his classic book, Why Zebras Don't Get Ulcers
These days most good health care professionals take it for granted that treatment ideas should be blessed by science to some degree. But to what degree? Blessed how much? Blessed how?
Despite the good intentions, there is still a serious lack of evidence-based practice across the board. It is getting better, but it’s slow.1 There are some signs of improvement (with back pain particularly), but musculoskeletal medicine is still a cocky teenager, just starting to come of age and figure out that it doesn’t know everything.
Back in the good old days there wasn’t evidence of anything one way or another (absence of evidence) and everyone pretty much did whatever they liked as long as it sounded good and the patients were happy. If you could get people to pay for it, that was good enough! Market-based medicine. Experience-based medicine. What could possibly go wrong? Entire modality empires sprang up out of the fertilizer of hunches and pet theories, many of them reasonable but definitely wrong, and many more “not even wrong.”
As standards have gone up and science has (finally!) started to test some of the 20th Century’s biggest treatment ideas, we’ve learned that there are a shocking number of low-value medical practices.2 Well-validated large effects in medicine are uncommon;3 in most cases nothing is going on except a creatively induced placebo (evidence of absence of any medical effect)… and placebo isn’t all that powerful and probably should never be justification for a therapy.4
In fact, science has become quite the buzzkill … especially for the treatment of pain and musculoskeletal problems,5 and manual therapists of all kinds — physical therapists, chiropractors, massage therapists — have started to wonder if anything actually works, why they read this damn website anyway, and how they can justify what they are selling without more encouraging trials to point to.
(Yes, a few things do work for pain. Just shockingly few.)
The three most dangerous words in medicine: in my experience.
Mark Crislip, MD
Evidence isn’t everything, and clinical experience and patient buy-in are huge
Despite the rise and importance of Evidence-Based Medicine™, evidence produced by good quality trials isn’t everything. It is not and never has been the sole criterion for choosing health care interventions. There’s much more to it, and there always has been. Specifically, EBM has always formally, explicitly defined itself as the integration of clinical experience and patient values, preferences, and expectations with the best available clinical evidence.
There are several variations on this chart, but the take-home message is always the same: the application of EBM isn’t just about the evidence.
For instance, a physical therapist deciding whether or not to use dry needling might consider three things:
- the evidence supporting dry needling is a bit iffy,
- but in his experience it works well for most people,
- and yet this patient reacts poorly to it and doesn’t care for the risk, even if there’s still a possibility of benefit.
Dr. Brad Schoenfeld on achieving this balance:
Evidence-based practice isn’t merely deferring to research for answers. Rather, it involves synthesizing the body of literature to develop general guidelines, then using your personal expertise to customize prescription to the individual. This is why the best practitioners have spent considerable time in the trenches, experimenting with different strategies both personally and with clients to hone their understanding of how to bridge the gap between science and practice.
Therapy is a process
As Jason Silvernail, Doctor of Physical Therapy, argues,6 “The manual therapy approach is a ‘process’ of care centred on a reasoning model, not a ‘product’ consisting of one or more manipulative techniques.” And that process may be effective even if individual techniques are unimpressive. Good care is more than the sum of its parts.
Patients cannot meaningfully apply their values and preferences until they are informed, but, once they are, “informed consent” has a lot of power. Professionals can legitimately do a lot of sketchy stuff if only they speak the magic words: “This is experimental. It may not work. I think it’s worth trying because yada yada yada and the risks are super low. Do you want to proceed?”
Patients really appreciate that approach. In my experience.
Absence of evidence is actually not a deal-breaker, and it is still very common, even today. For all the progress we’ve made, pain and musculoskeletal medicine research has only just scratched the surface. There is still a great deal that is “unproven” simply because no one has really checked properly yet.
All of this puts evidence in its place … but that is still a place of honour. Testing treatments matters!7
There’s a stand-up comedy routine in which Chris Rock makes fun of people who say things like ‘I take care of my kids!’ or ‘I’ve never been to jail!’ His punchline? ‘You’re supposed to take care of your kids. You’re supposed to stay out of jail. They aren’t things you can boast about.’
‘Evidence-based’ strikes me like that. You’re supposed to use evidence. It’s not something you get to brag about.
That said, exactly how much scientific evidence is actually needed for a theory or technique to be acceptable?
It has to make sense, and it can’t have already failed in fair testing. This is the bare minimum required. A little more detail:
- Biological plausibility. It has to make sense. It cannot be at odds with any well-established biology, chemistry, physics — that’s a deal-breaker. Goodbye, therapeutic touch and Reiki. Goodbye homeopathy. Goodbye applied kinesiology. And here’s an important but under-appreciated point: testing of highly implausible treatments tends to produce false positives.8 True story!
- There can’t be actual evidence-of-absence. If there’s persuasive trial evidence that shows no benefit, or damns a technique with very faint praise (which is actually more common), that’s another deal-breaker. Goodbye glucosamine. Goodbye ultrasound. Goodbye platelet-rich plasma.
But the bar gets raised quickly in proportion to the costs and risks, or if there’s no informed consent. Clearly positive, good-quality, replicated trial evidence becomes necessary then. And support from bad science alone is not enough, which actually disqualifies many treatments (homeopathy, for instance).
The more senior the colleague, the less importance he or she placed on the need for anything as mundane as evidence. Experience, it seems, is worth any amount of evidence. These colleagues have a touching faith in clinical experience, which has been defined as “making the same mistakes with increasing confidence over an impressive number of years.”
Isaacs et al., 2001, The Oncologist
What if there’s new, positive evidence? What then?
This happens quite a bit: a new study of a treatment comes out with positive results that contradict a history of negative or only weakly positive results. Does it move the needle? Does it mean the treatment is closer to being worth a try?
Never for just one study, no! No matter how good it looks. Just no.
The scientific publishing industry is spewing low-quality studies like a crap firehose, and many of those are bogus for reasons we can’t even see. It would take a lot more than just one positive study to reverse the negative trend enough to hit my own threshold for “worth a try.” Given a history of negative results, it would probably take at least three strongly positive trials with no glaring methodological flaws or researcher biases … and that still wouldn’t be “proof,” not by a long shot. But it would swing the pendulum enough that I might endorse the gamble (depending on the costs and risks too, of course).
As long as the costs/risks are low enough, I’m actually not that hard to please …
Surprise! My standards are low! (Sort of)
I have a reputation for being critical of many (or most?) theories and techniques, so many readers may be surprised by just how low my standards are for what constitutes adequate scientific support. But I really do think that many unproven theories and techniques are fair game — assuming they’re fairly safe, cheap, and plausible. And if they haven’t been spanked by good trials yet. And if the patient is fully informed.
Here’s the “but” though, the one big problem that sustains my militant skepticism …
What keeps me cranky and critical is not treatment based on too little evidence, but treatment based on too much hunch and bravado.
For example, I think trigger point therapy, despite its many problems,9 is still a defensible approach to some kinds of pain as long as the risks and costs are tamed and it’s presented with humble disclaimers. It’s just fine if a therapist puts it to patients like this:
“I do trigger point therapy, even though no one really knows what trigger points are. We have some theories. The science so far is not very encouraging, and there’s a bunch of controversy. Although there are still reasons for optimism, basically no one can really know yet if we can do anything about them. It’s a gamble, and not cheap. But we’ll be gentle and efficient and I won’t recommend a long expensive course of treatment without promising signs. Do you want to proceed?”
But I have a huge problem with this kind of thing (which is rarely actually said out loud, but is what’s actually going on):
“Trigger point therapy works! My results speak for themselves. I understand this kind of pain and I can treat it. Now enjoy my magic hands [or needles]… which are going to hurt both your body and your wallet, by the way.”
In the absence of good decisive science — which is all too often the case — it’s really all about the framing and the humility and the doing-no-harm.
What do you do when confronted with evidence that’s a bummer? At odds with your experience?
I want PainScience.com to be known as an EBM-friendly website, so what do I do when the evidence is contradicted by the clinical experience of my professional readers?
I’m a writer, not a magician: I just stay focused on reporting the evidence, and that’s more than enough for one lifetime.
The artful merging of evidence and experience with the unique special-flowerness of the patient in front of you is a clinical challenge … not my writing challenge. Clinicians have to make decisions based on all three of those factors, all day, every day. That’s their job. I left that challenge behind several years ago. These days, my new challenge is to provide clinicians (and patients) with as good a picture of the evidence as I can. I’m a specialist now, focussing on just one of the pillars of EBM: the science-y pillar.
On the other hand, I was also a clinician for ten years, and I correspond constantly with many extremely experienced clinicians now. So there are hat tips to clinical experience here, there, and everywhere on PainScience.com. I do write about what clinicians believe. But, mostly, I stick to what the evidence can support — because that’s all I have time for, if nothing else.
But for you clinicians…
When confronted with scientific evidence that’s a bummer, at odds with your experience, remember that your experience is a fully legit third of the EBM equation. But! You must be very cautious not to lean too hard on your experience, because “you are the easiest person to fool” (Feynman). It’s only a third of the equation. Not two thirds. Not half. Just a third, roughly, give or take (probably always less than a third for younger professionals). And it’s never a very reliable third. Just like science, experience is difficult to interpret and often wrong.
Is it possible to care about both research and patients? Yes!
I hear quite a bit of this: “I am more concerned with helping my patients than what the research says.” This sentiment is almost always a defensive response to criticism of a treatment method. For example, if I say, “Good studies have shown that dry needling isn’t effective,”10 I am likely to get a response like, “You’re an armchair therapist. All you care about is research. I care about my patients, and in my experience dry needling helps them.”
It’s based on the ungracious and incorrect assumption that professionals who are concerned about “what the research says” are less concerned about helping patients. That’s absurd. No one is special or unique for wanting the best for their patients.
You should care about research because it can help you help your patients. That’s what everyone wants.
I still routinely see patients and professionals recoiling from EBM flaws that don’t exist. We still see the mistaken belief that applying science to healthcare will make it cold and impersonal. Here’s physical therapist Dr. Jules Rothstein addressing that fear in 2001 … and his reassurance is just as relevant today:11
We need to make certain that, as we move to a better form of practice, we continue to put patients first. Nothing could be more humanistic than using evidence to find the best possible approaches to care. We can have science and accountability while retaining all the humanistic principles and behaviors that are our legacy.
Prof. Jules Rothstein, PT, PhD
Science versus practice from the patient perspective
Reader Kirsten Loop asked these questions on the PainScience.com Facebook page:
Honest question here. So, the idea that pain practitioners of whatever sort (as opposed to clients) a) can’t make promises about their treatments for pain because most scientific studies are problematic and b) shouldn’t make promises based on pseudo-science (or unsupported opinion)...what are the clients in pain supposed to do? Apparently, we’re also not supposed to (gasp!) “self-diagnose” or otherwise draw conclusions about our own pain issues -- unless, of course, we can think deeply and critically about them. (Which, according to most professionals, we aren’t capable of doing. You’d think we’re all a buncha fools the way we apparently fall prey to the snake oil salespeople out there, according to many in the pain science community.) But even critical thinking about non-proven treatments is practically ‘sinful.’ So, we’re to do... nothing? Just wait for the scientific system to fix itself, pick up the pace, eventually, I presume, through interest, funding, rigorous testing, and unbiased reporting of results? That’s gonna take a long time. I’ll be dead by then. I’m not a manual therapist and therefore I’m not demoralized by the dawning realization that my original training was a biomedically based lie. I am a civilian with pain. People are not well served by deliberate and accidental flakes, and they are also not well served by the snail’s pace of science that takes even longer to translate into treatments that practitioners can use in the real world. Decades in many cases. So, I’m beginning to wonder what’s the point of all of this? No one knows sh*t :)
They were good, difficult questions, worthy of a high-quality reply. Here is how I responded:
That frustration is justified and historically appropriate. It’s the right reaction to this annoying chapter in the history of pain medicine. It makes complete sense to be outraged by the circumstances in which we find ourselves, like a suffragette in the early 20th Century. Or maybe a medical analogy is more appropriate: it’s like having a bacterial infection before antibiotics.
We are indeed awkwardly stuck between half-baked science and quacks/flakes trying to provide the answers that science still can’t. Fortunately, that doesn’t mean we are completely screwed. There is a functional compromise between the extremes. Like all good compromises, it tends to make everyone unhappy. But it exists.
Basically, the middle ground consists of experimental treatment and informed consent, prioritizing the most plausible options and rejecting the most ridiculous. It looks like this:
PROVIDER: I don’t know if this works. No one can know if this is effective. There are some minor risks, and it could be a waste of time and money. But it’s still a reasonable thing to try, as long as you understand and accept that it’s experimental. Are you cool with that?
PATIENT: 👍🏻
Or a more patient-o-centric version:
PATIENT: I want to try [whatever]. I know it’s not proven and I know there are risks. But let’s chat about them. Are you willing to provide that therapy, as long as I’m okay with the uncertainties?
PROVIDER: 👍🏻
Ideally there would be more discussion about WHY it’s a reasonable thing to try, of course. 😉
To date, there really is no such thing as strictly “evidence-based medicine” for most kinds of chronic pain. But that doesn’t mean that the half-baked science is useless: we can still use it to evaluate and prioritize treatment options. And we must! Because there is nothing else.
About Paul Ingraham
I am a science writer in Vancouver, Canada. I was a Registered Massage Therapist for a decade and the assistant editor of ScienceBasedMedicine.org for several years. I’ve had many injuries as a runner and ultimate player, and I’ve been a chronic pain patient myself since 2015. Full bio. See you on Facebook or Twitter.
Related Reading
- Speculation-Based Medicine — Alternative medicine prioritizes experience and speculation over evidence (and then tends to ignore the evidence when it finally arrives)
- Quackery Red Flags — Beware the 3 D’s of quackery: Dubious, Dangerous and Distracting treatments for aches and pains (or anything else)
- Most Pain Treatments Damned With Faint Praise — Most controversial and alternative therapies are fighting over scraps of “positive” scientific evidence that damn them with the faint praise of small effect sizes that cannot impress
- Why “Science”-Based Instead of “Evidence”-Based? — The rationale for making medicine based more on science and not just evidence… which is kinda weird
- Alternative Medicine’s Choice — What should alternative medicine be the alternative to? The alternative to cold and impersonal medicine? Or the alternative to science and reason?
- Ioannidis: Making Medical Science Look Bad Since 2005 — A famous and excellent scientific paper … with an alarmingly misleading title
- Statistical Significance Abuse — A lot of research makes scientific evidence seem much more “significant” than it is
- Insurance Is Not Evidence — Debunking the idea that “it must be good if insurance companies pay for it”
- The Power of Barking: Correlation, causation, and how we decide what treatments work — A silly metaphor for a serious point about the confounding power of coincidental and inevitable healing, and why we struggle to interpret our own recovery experiences
- ‘Reductionism’ Is Not an Insult — Reducing complex systems in nature to their components is not a bad thing
- Confirmation Bias — Confirmation bias is the human habit of twisting our perceptions and thoughts to confirm what we want to believe
What’s new in this article?
2020 — Expanded and polished. Added two new sections, “Science versus practice from the patient perspective,” and “Is it possible to care about both research and patients? Yes!” Added a citation about low-value medical interventions (Herrera-Perez), another about dry needling (Stieven). Added a terrific quote about eminence-based medicine from a class paper (Isaacs), and another about how to “bridge the gap between science and practice.”
2018 — Added minor point about how new positive evidence affects a record of negative evidence.
2017 — Merged in a couple older blog posts, added several references and footnotes, revised and re-framed et voila: this is now the official new “science versus experience” page for PainScience.com.
2016 — Publication.
Notes
- Grant HM, Tjoumakaris FP, Maltenfort MG, Freedman KB. Levels of Evidence in the Clinical Sports Medicine Literature: Are We Getting Better Over Time? Am J Sports Med. 2014 Apr;42(7):1738–1742. PubMed 24758781 ❐
- Herrera-Perez D, Haslam A, Crain T, et al. Meta-Research: A comprehensive review of randomized clinical trials in three medical journals reveals 396 medical reversals. eLIFE. 2019 Jun 11;8(e45183). PainSci Bibliography 52236 ❐ “Low-value medical practices are medical practices that are either ineffective or that cost more than other options but only offer similar effectiveness.”
- Pereira TV, Horwitz RI, Ioannidis JPA. Empirical evaluation of very large treatment effects of medical interventions. JAMA. 2012 Oct;308(16):1676–84. PubMed 23093165 ❐
A “very large effect” in medical research is probably exaggerated, according to Stanford researchers. Small trials of medical treatments often produce results that seem impressive. However, when more and better trials are performed, the results are usually much less promising. In fact, “most medical interventions have modest effects” and “well-validated large effects are uncommon.”
- “We feel strongly that our patients deserve scientifically defensible care that is more than just artfully delivered placebo.”
Ingram et al., 2013, Journal of Orthopaedic & Sports Physical Therapy
- Machado LAC, Kamper SJ, Herbert RD, Maher CG, McAuley JH. Analgesic effects of treatments for non-specific low back pain: a meta-analysis of placebo-controlled randomized trials. Rheumatology (Oxford). 2009 May;48(5):520–7. PubMed 19109315 ❐ PainSci Bibliography 54670 ❐
This is a meticulous, sensible, and readable analysis of the very best studies of back pain treatments that have ever been done: the greatest hits of back pain science. There is a great deal of back pain science to review, but authors Machado, Kamper, Herbert, Maher and McAuley found that shockingly little of it was worth their while: just 34 acceptable studies out of 1,031 candidates, and even among those “trial quality was highly variable.” Their conclusions are derived from only the best sort of scientific experiments: not just the gold standard of randomized and placebo-controlled tests, but carefully choosing only the “right” kind of placebos (several kinds of placebos were grounds for disqualification, because of their known potential to skew the results). They do a good job of explaining exactly how and why they picked the studies they did, and pre-emptively defending their choices from a couple of common concerns. The results were sad and predictable, robust evidence of absence: “The average effects of treatments … are not much greater than those of placebos.”
- Silvernail J. Manual therapy: process or product? J Man Manip Ther. 2012 May;20(2):109–10. PubMed 23633891 ❐ PainSci Bibliography 54128 ❐
- Evans I, Thornton H, Glasziou P. Testing treatments: better research for better healthcare. 2nd ed. Pinter & Martin; 2011.
This excellent book is currently available for free from www.TestingTreatments.org. It’s a superb exploration of why research matters, and how it’s done.
- Pandolfi M, Carreras G. The faulty statistics of complementary alternative medicine (CAM). Eur J Intern Med. 2014 Sep;25(7):607–9. PubMed 24954813 ❐
- People experience muscle pain and acutely sensitive spots in muscle tissue that we call “muscle knots.” What’s going on? The dominant theory is that a trigger point is an isolated spasm of a small patch of muscle tissue. Unfortunately, it’s just a theory: trigger point science is half-baked and controversial, and it’s not even clear that trigger points are a problem with muscle at all. Meanwhile, people keep hurting, and massage — especially self-massage — is a safe, cheap, reasonable way to try to help. That’s why I have a large tutorial devoted to how to self-treat “trigger points” — whatever they really are. See Trigger Point Doubts: Do muscle knots exist? Exploring controversies about the existence and nature of so-called “trigger points” and myofascial pain syndrome.
- Stieven FF, Ferreira GE, Wiebusch M, et al. No Added Benefit of Combining Dry Needling With Guideline-Based Physical Therapy When Managing Chronic Neck Pain: A Randomized Controlled Trial. J Orthop Sports Phys Ther. 2020 Apr:1–21. PubMed 32272030 ❐
- Rothstein JM. Thirty-Second Mary McMillan Lecture: journeys beyond the horizon. Phys Ther. 2001 Nov;81(11):1817–29. PubMed 11694175 ❐ PainSci Bibliography 51998 ❐