
How I got a study about pain and anxiety all wrong (and got set straight)

by Paul Ingraham

Recently I tweeted this, regarding Cremers 2021:

Study sez: misconceptions boost pain-related disability…and even more so with more depression/anxiety. Scarier myths = more disability!

So… debunking GOOD.

(The formal moderator vs mediator distinction is quite the brain teaser — but the ideas aren’t exotic.)

Interesting. And … wrong. I got credibly corrected almost immediately. This will be a bit dorky, but I think it’s an ideal example of how getting corrected should work, and a nice “teachable moment” I can pass on. You can benefit from my embarrassment. 😜

I’ve been working on a reboot of the chapter of my back pain book about the relationship between back pain and psychological factors like anxiety and depression. Historically I had high confidence that anxiety was a driver of back pain — not calling back pain an “all in your head” thing, but an “affected by what’s in your head” thing.

But I have become much more concerned about the danger of gaslighting patients with any speculation about psychological factors in pain. And so I want to start over with this topic … and so I’ve been reading many scientific papers … and so I enthusiastically shared that summary, thinking I had found a nice little research gem … and then Dr. James Coyne was concerned:

Mediation and moderation analysis and cross-sectional data are a killer of any credibility. These studies are done without any contribution to the literature.

😬 Not what I want to see in my inbox! Except I do want to actually understand, of course.

Dr. Coyne generously elaborated when I asked for a bit of help, and I soon saw my mistake. Although my summary fairly represented the authors’ conclusions, those conclusions reach beyond the data. How did I miss that? I was definitely a little embarrassed. This is exactly the kind of thing I’m supposed to catch.

I think I was a bit hypnotized by the fancy “mediation and moderation analysis,” which is basically a way of studying how variables are related. I was so busy wrapping my head around it that I overlooked something else: the data is cross-sectional, and — by the authors’ own admission — is not capable of showing causality. And yet they waved that limitation away, speculated about causality anyway, and it ended up in the abstract as a conclusion.

As Dr. Coyne wrote, this is “a classic move in the game of promoting weak data … you acknowledge the limitation, but you are not deterred by it, and should be.”

There’s nothing unreasonable about it as a hypothesis: things might actually work like Cremers et al. said. Truly. They really might.

Pain intensity might cause more disability.

And it might do it via the indirect effect of fearful misconceptions (the “mediator”).

And anxiety and depression might make the fearful misconceptions worse (the “moderator”).

It could all be true.

But this study doesn’t prove any of that. And this kind of over-interpretation of cross-sectional data is so common it’s practically standard.

More about anxiety and back pain coming soon.

Dr. James Coyne is a psychologist who critically analyzes claims and research in psychology.