Back in the day, I believed that sitting was a risk factor for back pain. Why did I think that? It was based on my “common sense” as a massage therapist, confirmed by observation of my very skewed patient data sample. For more than a decade, I warned a great many readers to beware of too much sitting.
Eventually I noticed that the evidence really didn’t support that position. And so, in 2017, like any self-respecting science journalist should do, I heeded the evidence and officially changed my mind, and I made a bit of a blogging meal out of confessing my mistake:
This is a correction of a major error, reversing an opinion I strongly promoted for more than a decade, until 2017. A lot of time spent in chairs may be unhealthy in some ways, but chairs are not the back-torture devices I once thought they were. There’s not much wiggle room on this point: many studies have shown that people who sit a lot simply do not get more back pain than more active people. There is no link. It’s just not a thing.
And then I proceeded to cite the evidence, which is fairly persuasive: multiple studies all showing no correlation whatsoever between sedentariness and back pain. I switched from warning people about the dangers of sitting to warning people about the dangers of believing that their back is so fragile that it is hurt by sitting. See The Trouble with Chairs: The science of being sedentary and how much it does (or doesn’t) affect your health and back pain.
What I did not know was that there was already a study at that time that might have been better than anything I was citing — I had simply missed it — and that study pointed in the other direction. It supported my original bias, and contradicted my new, evidence-based bias. Ruh roh!
Measuring total laziness with accelerometers
Gupta et al. appears to be the first-ever study of the relationship between back pain severity and objectively measured total sitting time — not just sitting at work, and not just self-reported sitting, but sitting measured by accelerometers.
There’s some heavy-duty number crunching in this one, which makes it harder to evaluate. There’s plenty of room for flaws to hide in all that number crunching, but there’s nothing obviously wrong with the study. It seems like good methodology to me, likely to produce more reliable results. Unfortunately, it also still appears to be the only study using this methodology, so it really needs to be replicated — all the more so because it reports results that are at odds with all the other studies, even if their methodology was obviously inferior.
Perhaps this is actually a (rare) case of a new, improved approach finally producing a clearer, better answer than previous research could. Or maybe it’s just more noise. And that is why we need some replication.
I think it’s odd that the risk of back pain from occupational sitting time alone was not statistically significant. I’m not sure what to make of that, but it seems like confirmation that the signal is hard to separate from the noise. Every dataset has weird subsets, but why would that subset be anomalous like that? If sitting a lot increases the risk of back pain, then any large segment of sitting time should show about the same association, and pretty clearly.
Not saying it’s a deal-breaker, just a bit of a head-scratcher, and another reason replication is needed.
But overall it’s fairly compelling, and it completely contradicts the shiny new evidence-based position that I conspicuously adopted in 2017! If these results can be reinforced by other studies, then it will be my first-ever example of following the evidence back to a previously abandoned position. Whee!