Nitpicks galore! Four gripes about the exercise-versus-saline study (and four responses)

by Paul Ingraham

A couple weeks ago I wrote about an odd study (Bandak et al) that reported that exercise is no better than saline injections for knee arthritis. That was a seemingly bad-news result for exercise, and it provoked many gripes. Follow-up is needed!

It’s funny, but the first time I shared the exercise-can’t-beat-saline paper on social media (sometime last fall), I actually asked people to nitpick it — in the spirit of cheerfully attacking results we don’t like — and I got almost nothing. 🦗🦗🦗

But when I wrote about it in more detail this month? And emphasized the weird bad-news results? Nitpicks galore! A veritable gush of grievances, a flood of flak. Unfortunately, as much as I actually wanted valid reasons to dislike this paper and scoff at its results, I found none in that bountiful harvest of beefs.

Four common concerns arose:

  1. other exercise evidence says it works
  2. it wasn’t the right exercise
  3. or enough exercise
  4. saline might not actually be inert

“Further Study Is Needed”

xkcd #2268 © by Randall Munroe

Nitpick #1: But all the other evidence says exercise is good for osteoarthritis!

Yes! Correct! But that is not a problem for this study. Before we examine the other reasons to dismiss Bandak et al., note that while the result doesn’t seem flattering to exercise… it actually kinda is.

The weird result was “exercise no better than saline,” and that’s fascinating and certainly seems like bad news. But it did not show that exercise is actually ineffective for arthritic knees … just that it’s no better than something else that is known to be a little bit helpful. Eight weeks of “supervised neuromuscular exercise” is in fact helpful. Not very helpful, but helpful. The weird fact that saline injections were found to be equally helpful does not take that away.

The body of evidence — not just this one weird paper — strongly suggests that exercise is indeed a modestly effective treatment for hip/knee arthritis. Just a couple months ago, I also wrote about the results of a huge positive meta-analysis: “Do you get any joint pain relief from your workouts? Science says most people do.”

So no worries: whatever we make of Bandak’s exasperating same-as-saline result, it doesn’t contradict that conclusion, and the smart money is still on exercising to ease arthritis.

Photo of man in shorts holding knee, presumably because it hurts.

Yes, the body of evidence does support exercise for osteoarthritis. Bandak et al. actually did not even contradict that!

Nitpick #2: They didn’t study the right exercise

The exercise they studied was just fine. (Or at least it wasn’t obviously inadequate.) This nitpick seemed to be entirely based on just not knowing what the researchers did and filling in the blank with the lamest options. But the authors spelled out their perfectly sensible arthritis-aiding exercise protocol in detail:

“An 8-week structured treatment programme consisting of patient education and supervised neuromuscular exercise for people with symptomatic knee or hip OA.14 In this trial, GLAD was delivered by GLAD-certified physiotherapists at the department of physiotherapy at Bispebjerg-Frederiksberg Hospital. Two group-based educational sessions lasting about 1.5 hours were provided, addressing knowledge on knee OA, treatment options with a focus on exercise and its benefits, and advice about self-management. The exercise part of GLAD was delivered as 12 1-hour, group-based, individually supervised sessions, two times per week for 6 weeks.”

That might not be the best imaginable intervention… but it’s definitely not just some squats. And the GLA:D programme is definitely something we should expect to work.

The more you increase the sophistication and intensity and duration of exercise therapy, the more problems you will have with real-world effectiveness. That is why it often makes sense to stick to studying interventions that are closer to what normal people are likely to actually do, where compliance isn’t excessively burdensome. In technical terms, this is why we study “effectiveness” (real world benefits) more than “efficacy” (benefits in ideal circumstances), even though both are worth knowing.

Nitpick #3: They didn’t study enough exercise

“Eight weeks is weak sauce.” “Not long enough!” “Just eight weeks, seriously?” And so on.

Meh. I would have preferred more, but you can easily get all kinds of training effects going in just a few weeks. If there’s no major effect in that time, doubling the duration is unlikely to produce impressive results.

But this was the most popular nitpick! “Just 8 weeks?!” was such a predictable objection that I wish Bandak et al. had added more time just to avoid it. It is indeed a limitation — one the authors acknowledged. However, studying short-term benefits is inherently worthwhile, just different, with its own pros and cons.

Also, it’s not like this is the only evidence. As the authors note in their limitations section:

“…we only assessed short-term efficacy. However, a similar exercise intervention with longer duration (12 weeks) provided a similar response as the present, and the 18 months of efficacy of exercise were not superior to attention control in the recent START study.”

Also, there’s a one-year follow-up paper, showing that both groups were still the same twelve months later (see Henriksen). (Granted, that’s not the same thing as studying a year’s worth of ongoing exercise … but it does bolster the original result.)

You can dismiss practically any clinical trial results by saying that the intervention wasn’t good enough, or they studied the wrong people. But trying to find the ideal rarely seems to pan out in medical science … especially in rehab and pain medicine, where I’d be hard-pressed to give a single example.

Nitpick #4: Saline might not actually be inert

Photo of a syringe, isolated on white.

The spirit of this objection was that exercise didn’t beat saline because saline is actually effective in its own right. However, that is actually the premise of the study. Comparing exercise to saline injections was based on the well-established fact that saline injections do help people a little bit.

How they help people is interesting… but beside the point. The answer wouldn’t really change our interpretation of this study. There are two main possibilities:

  1. Saline injections are indeed as biologically trivial as they seem, and whatever good they do is entirely a psychosomatic muting of the pain, what we call an “expectation effect” (but it’s also often just rounded down to “placebo”).
  2. Or maybe injected saline isn’t as biologically neutral as it seems. Squirting a fluid into the knee joint might actually tinker with the biochemistry of the joint, and could be a bit of a legitimate pain-killer in its own right (see Bar-Or). This is possible, but it is implausible that the effect is significant — and this study supports that.

Bandak et al. undoubtedly knew all this. It’s literally why they did what they did.

Like people, all studies have flaws and limitations, but that doesn’t necessarily mean they’re wrong

When we don’t like what a study says, it’s amazing how fast we can decide that any unknown detail — not covered by the abstract — was probably the dumbest possible thing the researchers could’ve done. But these researchers were anything but dumb, and I just don’t think they made any of the mistakes that people feared they did. Nor were their results actually even bad news!

It was a fascinating experiment, and the response to it was also fascinating. Thanks for the discussions everyone, and I’ll leave you with this reminder of how science interpretation “works”…

Flow chart time! I will describe it nicely for you. First cell says: new study published. Second cell: does it confirm my beliefs? If yes… must be a GOOD study. If no… must be a BAD study, so nitpick and find flaws, bad study confirmed! And both pathways ultimately lead to the inevitable conclusion: “I was right all along!”
