I first wrote about the Functional Movement Screen in 2011, mostly hype-griping: “I think the marketing cart may be in front of the research horse.” It particularly rankled me that FMS was (and still is) being misused by many practitioners as a way of diagnosing allegedly dysfunctional movement as the origin of injuries and chronic pain, which overreaches its intended purpose in a crassly self-serving way. FMS promoters were making claims about a “growing body of research,” of course, but it wasn’t persuasive to me back then … and maybe things have gotten worse.
A summer 2014 paper by Whiteside et al. echoes my original concerns, but with data. The researchers focused specifically on the accuracy of FMS grading: “virtually no investigations have probed the accuracy of FMS grades assigned by a manual tester.” So they probed it! They compared “the FMS scores assigned by a certified FMS tester to those measured by an objective inertial-based motion capture system.” Alas for FMS, the results were “poor,” which is exactly what I’ve been betting on all along.
Manual grading may not provide a valid measurement instrument. The levels of agreement between the two grading methods were poor in all six FMS exercises. It appears that manual grading of the FMS is confounded by vague grading criteria.
The discussion section of the article is detailed, readable, and full of ominous understatement. “Dubious grading presents a concern for FMS clientele,” they write. They graciously allow that, with better objective criteria, FMS grading might “improve to acceptable levels.” Meanwhile, FMS testers are officially encouraged to aim for lower scores when in doubt, but in this test, even under scrutiny, they apparently didn’t have much self-doubt, consistently scoring “0.54 points higher than the IMU system.” (I’m shocked, simply shocked, to learn that FMS practitioners might be a tad overconfident!) The authors also point out that not only has FMS failed to reliably forecast injuries, but all FMS predictions may be “a product of specious grading.” Which is hardly surprising, since FMS fails to take into account “several factors that contribute to musculoskeletal injury.” These concerns must be addressed “before the FMS can be considered a reliable injury screening tool.”
Clearly more research is needed — of course! Naturally! But it’s worse than that:
The high potential for subjective and/or inaccurate grading implies that standard procedures must be developed before FMS performance and injury rates can be conclusively studied.
Before it can be studied. They seem to be saying that not only is the cart still in front of the FMS horse, the horse may now be falling well behind. FMS research so far may be a bit of a write-off, because it can’t inform us without better criteria, and everyone should probably just go back to the drawing board and try again. Which suggests that my article about FMS is still reasonably sound after three years without an update.