In 2010, Peng et al. reported dramatic positive results of methylene blue injection for spinal disc pain in the journal Pain. It looked like great news. I figured it was probably bollocks, but I dutifully reported it as a good-news story anyway, probably because I was self-conscious about my reputation for negativity.
Methylene blue is not what made Walter White into Heisenberg. It’s just a chemical with an odd name, hypothetically suited to taming intervertebral disc irritation. In the 2010 experiment, it seemed to work so well that it was “astounding, unprecedented and unrivalled … if the results are true” (Bogduk). And the results still seemed true after a follow-up study in 2012 (Kim et al.) — but on their own, those results were just not exciting. That study wouldn’t have made headlines.
And this new study probably won’t either, because it’s anti-good-news: only a few years later, the results of the original study have been shown to be probably untrue.
Kallewaard et al conducted a good quality attempt to replicate Peng et al.’s results. It was a clear negative.
There was just no meaningful difference between patients who got methylene blue versus a placebo plus lidocaine. Lidocaine is no back pain cure, of course; if it were, the original meth-blue results wouldn’t have seemed “astounding, unprecedented and unrivalled.” If meth-blue cannot clearly outperform placebo plus lidocaine, it’s not interesting.
Or astounding, unprecedented, or unrivalled. It is, in fact, rivalled.
The only obvious weakness of the study is that it was a little on the small side, a bit “underpowered,” as we say of some studies and B-list superheroes. But it wasn’t tiny, and a genuinely potent effect should still show up in a small sample most of the time.
It’s certainly enough evidence to cast serious doubt on the original findings.
Surprisingly fast and decisive comeuppance
So this is exactly why we never trust “just one study,” no matter how good it looks.
On the one hand, this is a story that has been told many times in medical science: initial results are “promising,” people get excited, headlines blare, maybe it even spawns an empire of clinics offering the “evidence-based” treatment. But then follow-up research eventually establishes that it wasn’t so great after all.
The failure of most treatment ideas is an obvious pattern in medical science over the decades. Never bet against the null hypothesis.
But that corrective process is usually slow and tortuous. It’s rare to get a good quality failure to replicate this quickly, “only” a few years after the initial hype. We don’t usually get to see one good-looking study loudly declaring “looks like something!” and then another soon shooting back “sorry, just not seeing it!” For every case as quick and clear as this one, there are a hundred where the body of evidence is larger and messier, and a hundred more where there’s simply no serious attempt at replication whatsoever.
Musculoskeletal medicine is chock-a-block with “promising” findings that have never been replicated, and never will be.
Why does science about pain treatments have to be such a downer?
Pain is too complex and too deeply integrated into our “wiring” for there ever to be a simple solution to it.
The pain system is a necessary part of us, like a vital organ that’s everywhere. It’s one of the main reasons we have a nervous system at all. Half our biochemistry is devoted to dealing with threats, and most of its components have other critical jobs. Anything potent enough to really shut pain up is also going to shut us down: anaesthesia, opioids, steroids… everything that is to some degree useful comes with a huge price tag.
And so pain treatment is always going to be some kind of compromise.
I’ve updated my back pain tutorial with this news, of course.