
New “positive” studies rarely change the bottom line (e.g. massage for exercise soreness)

by Paul Ingraham


Occasionally someone smugly throws a promising new study in my face to dismiss my historical skepticism on a topic. “You got it wrong,” they say. “Just look at this shiny new evidence!”

But those “positive” results rarely change the bottom line. Here’s how this dance usually goes…

  • Reluctant skepticism — Someone kinda-sorta accepts my cynical reporting on an underwhelming treatment. “Welp,” they say, “Ingraham says it doesn’t work, or not very well, anyway. Uh, thanks for that … I think…”

  • Glorious confirmation bias — Much later, that cynical conclusion seems to be overturned by a shiny new scientific paper. After “reading” it — abstract or press release — they say it sure seems positive. And so they think, “Good news: Chlorine Gas Snorting Therapy works after all! I guess that Ingraham guy was wrong.”

  • Smug revenge — And then they write to tell me how wrong I was. Or they post it on social media. And sometimes the tone is caustic: “New evidence proves skeptic was just a cranky dumb-dumb!”

  • Skepticism rejected — Of course, I’m often already aware of the paper, but I check it anyway, and confirm the nearly inevitable: it’s a shite study that just muddies the water. And then I explain why it wasn’t so “promising” after all. Which has little effect, because who’s going to embrace my curmudgeonly take when there’s a much happier version of the story backed by a Genuine Citation?

For instance: massage for DOMS

I have seen the story above play out ad nauseam about massage for delayed-onset muscle soreness. A “new” (2017) analysis like Guo et al.’s certainly does seem positive. And so people cite it and declare that I must have gotten it wrong on massage-for-soreness. After that close call with cognitive dissonance, they go back to believing that the soreness can be rubbed out.

Except I didn’t get it wrong.

Don’t get me wrong, I can get things wrong! But not this. I had that paper in my bibliography all along with a private note-to-self: “weak sauce ‘positive’ meta-analysis on massage for DOMS, worthless, changes nothing!” I hadn’t gotten around to referencing and dismissing it because it’s just kind of terrible.

Unfortunately, a scientific review or meta-analysis that seems superficially positive really doesn’t mean much. There are many gotchas! These papers are so complex that they are ripe for abuse: just as prone to bias-powered error and misrepresentation as clinical trials, if not more so. Meta-analysis is the major practical example of how data can be tortured until it tells you what you want to hear (see Ioannidis), and it’s particularly prone to this when the underlying studies are underpowered, a huge problem with pain and rehab science.
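To see why a pile of underpowered studies can produce a “positive” meta-analysis of a treatment that does nothing, here’s a minimal simulation sketch (my own toy example, not from any published analysis): many tiny null-effect trials are run, only the flattering-looking ones are “published,” and pooling the survivors yields a confidently non-zero effect.

```python
import random
import statistics

random.seed(42)

def run_trial(n=10, true_effect=0.0):
    """Simulate one small two-arm trial of a treatment with NO real effect.
    Returns (effect_estimate, standard_error)."""
    treat = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(control)
    # standard error of the difference in means
    se = ((statistics.variance(treat) + statistics.variance(control)) / n) ** 0.5
    return diff, se

# Run many underpowered trials of a treatment that does nothing...
trials = [run_trial() for _ in range(200)]

# ...but pretend only the "positive-looking" results get published
# (estimate more than 1 SE above zero -- a crude stand-in for publication bias).
published = [(d, se) for d, se in trials if d > se]

# Fixed-effect meta-analysis: inverse-variance-weighted pooled estimate.
weights = [1 / se**2 for _, se in published]
pooled = sum(d * w for (d, _), w in zip(published, weights)) / sum(weights)

print(f"true effect: 0.0, pooled 'meta-analytic' effect: {pooled:.2f}")
```

With pure noise feeding the filter, roughly one in six trials looks “promising,” and pooling only those produces a spuriously positive effect size despite a true effect of exactly zero. Real publication bias is subtler than this cartoon, but the mechanism is the same.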