The Power of Barking: Correlation, causation, and how we decide what treatments work
A silly metaphor for a serious point about the confounding power of coincidental and inevitable healing, and why we struggle to interpret our own recovery experiences
“I have the power of barking to thank for that”: hilarious.
This is apropos of why people believe in all kinds of snake oil, even when it doesn’t work, even when it’s actually dangerous, like drinking mercury, getting electric shocks, or having lobotomies (strange but true). The dog’s logic is an absurd but effective way of explaining how humans decide what to believe in, namely:
- simple correlations (B followed A, so A must have caused B)
- emotional priorities (I want to believe A caused B, so it did)
This really is why most of us say things like “Well, eye-of-newt worked for me,” and look no deeper (even though we clearly should). So let’s look a little deeper now …
When relief follows treatment
When relief follows on the heels of a treatment, most people find it almost impossible to believe that the treatment didn’t cause the relief. We humans are more keen on assuming causation based on B-follows-A correlations than we are on porn or popcorn. Post hoc ergo propter hoc is like catnip for us. We start talking like the dog bragging about scaring off the mailman. Here’s a typical sequence of events:
- frustrating pain persists for weeks or months
- patient tries a new therapy, like Prolotherapy
- relief follows within hours or days, and the treatment gets the credit
Even if the relief sticks, the appearance of success can be a complex illusion, and not a real treatment effect. But, surprisingly often, there’s a weird 4th step …
- the pain comes back, but the patient continues to sing the praises of the treatment!
The benefits of many manual therapy treatments, like spinal manipulation or massage therapy, are notoriously inconsistent and ephemeral as a general rule, and yet sometimes seem to produce amazing results. It’s difficult to explain this pattern in the world of therapy.
Most people (especially the professionals) assume that those success stories indicate that treatment X just happens to work unusually well for a certain kind of patient. And maybe that does explain a few cases, thanks to a combination of luck and knowledge and skill — and that is certainly what therapists and their customers would like to believe.
And we can explain away a few more treatment success stories as a not-so-simple coincidence: recovering at the same time as a new treatment. This suggestion seems outrageous to patients who are enjoying the relief, but it’s not as rare as one might suppose. Why? Because people often seek care at the darkest-before-dawn point right around the time that natural recovery finally starts to happen! Which, of course, it nearly always does.
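To see how powerful that darkest-before-dawn timing can be, here’s a minimal simulation sketch in Python. Everything in it is invented for illustration, not clinical data: pain wobbles around a slowly improving baseline, a hypothetical patient books a treatment with zero real effect only on the worst days, and the “results” look impressive anyway.

```python
import random

# Toy model of "darkest before dawn": pain wobbles around a slowly
# improving baseline, and a hypothetical patient seeks a treatment
# with ZERO real effect, but only when pain spikes. All numbers are
# invented for illustration, not clinical data.

random.seed(1)

def pain_on_day(day):
    """Noisy 0-10ish pain score with slow natural recovery."""
    baseline = max(0.0, 7.0 - 0.05 * day)   # eases over ~3 months
    return baseline + random.gauss(0, 1.5)  # day-to-day noise

before, after = [], []
for _ in range(10_000):
    day = random.randrange(100)
    score = pain_on_day(day)
    if score > 8.0:                          # only bad days prompt a visit
        before.append(score)
        after.append(pain_on_day(day + 7))   # one week after "treatment"

print(f"average pain when seeking care: {sum(before)/len(before):.1f}")
print(f"average pain one week later:    {sum(after)/len(after):.1f}")
```

The fake treatment does literally nothing in this toy model, and yet average pain reliably drops in the week after it, simply because we sampled the worst days and then let noise and natural recovery do the rest.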
And yet coincidence can’t possibly explain them all, or how quickly and clearly change can follow treatment.
Thanks, YourLogicalFallacy.com — that is very amusing.
More than a placebo, less than real medicine
There are more likely and prosaic explanations for most success stories. We tend to be unaware of them, or ignore them. We want to believe, and nuances harsh our buzz.
But trust me, there are other ways to explain apparent treatment successes, if only we’d pay attention to them. For instance…
Unreliable treatments in manual therapy are getting most of their success stories the same way: rogue waves of non-specific effects adding up to something more than the sum of their ho-hum parts. Sometimes therapy just makes a bigger impression on the patient because everything went just so: no socially sour notes, a good belly laugh, a reassuring touch at exactly the right moment, a piece of office artwork that reminds the patient of home, a comforting story about what’s wrong that the patient could particularly relate to (but which may well be a little lost on the next patient).
There are countless ways it can go well, and sometimes they really pile up. If the therapeutic interaction is the active ingredient of treatment — and there’s an extremely strong case for that — then it follows that some interactions are better than others, and some are so good that they explain most of the success stories that patients tell for years.
The point is not to dismiss all treatment success as coincidences and placebos — the point is that it’s much more common than most people ever suspect.
So please beware of assuming that your successful experience with a therapy for pain or injury means that it works that well in general, or even for a specific type of patient. Every day someone tells me “it worked for me” as their only reason for believing that it must work for everyone. Even if it really did, you can’t generalize from your personal experience. Countless variables determine the outcome of treatments … and most have nothing to do with the method.
One of many examples of correlations that absolutely exist … but definitely do not mean a damn thing. At least, I’m pretty sure cheese eating doesn’t cause fatal bedsheet tangling. Thanks to TylerVigen.com for this graph & many others like it.
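Correlations like that one are surprisingly easy to manufacture. Here’s a minimal sketch in Python, with fabricated series names and numbers in the spirit of the TylerVigen.com graphs, showing how two unrelated quantities that merely share an upward trend can correlate almost perfectly:

```python
import random

# Two made-up, unrelated quantities that both happen to trend upward
# over a decade. The names and numbers are fabricated for illustration.

random.seed(7)
years = range(10)
cheese    = [30 + 0.8 * y + random.gauss(0, 0.5) for y in years]
tanglings = [400 + 12 * y + random.gauss(0, 8.0) for y in years]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"correlation: {pearson(cheese, tanglings):+.2f}")
# Prints a strong positive correlation, produced entirely by two
# shared-but-unrelated upward trends, not by any causal link.
```

Any two series that merely drift in the same direction will do this, which is exactly why trawling big datasets turns up so many absurd “relationships.”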
Nature: defying “common sense” since the dawn of intelligence!
Humans somehow understand the point of pointing, but dogs don’t. They’ll usually just lick your finger. It’s just something our brains are good at, and canine brains aren’t.
Causality inference is another potent defining feature of human intelligence. Our ability to suss out how things work is largely based on this “one weird trick” that our brains can do. Flick the switch, light turns on: probably causally related! Touch fire, get burned … throw rock, break window … eat too much, get sick. There are countless simple correlations like this that we master effortlessly before we can even tie our shoes. We see B follow A and we just kinda get it that A caused B.
Unfortunately, we also get it wrong a lot.
We get causality right constantly when the variables are simple and readily observable; we rarely get it right in health care, or any other complex endeavour, where the variable counts are high and many are subjective or otherwise murky. What’s really going on in a causal relationship almost always turns out to be different and waaaaay more complicated than we thought. Our failures in this department are legion and disastrous. By far the most important thing anyone needs to understand about the relationship between correlation and causation is that A did not necessarily cause B just because B followed A, and making this mistake is one of the Greatest Hits of human thinking glitches.
As Barker Bausell put it (Snake Oil Science), we have a problem with “confusion between correlation and cause on an industrial scale.” Many amusing examples of spurious correlations have been mined from data. This problem has been emphasized ad nauseam by so many smart people for so long that the rule seems like an unassailable edifice.
And yet …
Actually, correlation kinda does imply causation
Despite all that, the famous rule — “correlation does not imply causation” — is in fact a misleading oversimplification. At the very least it’s missing a word, and it should be “correlation does not necessarily imply causation.”
Or you could just rephrase it entirely. Edward Tufte, an American statistician who made the same point quite a while ago, suggested that a good informal re-wording would be, “Correlation is not causation but it sure is a hint.”
Because correlation actually does “imply” causation, and many (if not most) events that occur in sequence and appear to be causally related are in fact causally related. Their correlation is not a coincidence. Clapping makes noise, braking stops cars, hot coals burn fingers.
Our brains are tuned to detect those relationships, and that mental super power served us well as we grew up as a species. The problem is that we’re so good at it, so aggressive about it, that we overdo it. We see it everywhere, even when it isn’t there. We think we understand causality in situations where causality detection is much harder … like evaluating the results of medical treatments.
The difference between general and specific causality inference
One of the things that trips up our causality inference superpower the most is the tricky difference between inference of causality and the attribution of mechanism. General versus specific causes, basically. We can and routinely do correctly detect causes when correlation gives us a strong enough hint, but we routinely screw up exactly what caused what.
For instance, most people will assume that when a very stubborn old pain goes away during a one-hour acupuncture session that the experience must have caused the relief, because the relief followed the experience. And that assumption is probably correct. The appearance of relief probably isn’t a coincidence, and probably not just regression to the mean (too quick). But most people will then — carelessly or self-servingly — move on to another assumption: that the treatment caused the relief because acupuncture works as advertised. (It doesn’t.)
We can be right about the causality in a wide view — somehow or other, that appointment really did lead to feeling better, so yay — but still be hopelessly wrong about what specifically caused what. Most people will ignore the possibility that the true mechanism of relief was not the efficacy of acupuncture, but the efficacy of a caring professional promising aid and performing fascinating rituals that reek of implied potency: the power of “surely no one would do this if it didn’t work!” These factors are wildly underestimated by most acupuncture patients. And acupuncturists.
Related Reading
- Quackery Red Flags — Beware the 3 D’s of quackery: Dubious, Dangerous and Distracting treatments for aches and pains (or anything else)
- Most Pain Treatments Damned With Faint Praise — Most controversial and alternative therapies are fighting over scraps of “positive” scientific evidence that damn them with the faint praise of small effect sizes that cannot impress
- Why “Science”-Based Instead of “Evidence”-Based? — The rationale for making medicine based more on science and not just evidence… which is kinda weird
- Alternative Medicine’s Choice — What should alternative medicine be the alternative to? The alternative to cold and impersonal medicine? Or the alternative to science and reason?
- Statistical Significance Abuse — A lot of research makes scientific evidence seem much more “significant” than it is
- Science versus Experience in Musculoskeletal Medicine — The conflict between science and clinical experience and pragmatism in the management of aches, pains, and injuries
What’s new in this article?
2017 — Added a paragraph that, I hope, summed up a couple of key points effectively.
2016 — Three deep, dorky new sections about the human talent for causality inference and how it leads us astray. (Man, this is a really weird article, but I’m not giving up on it.)
2013 — Publication.