Sensible advice for aches, pains & injuries

Is Diagnosis for Pain Problems Reliable?

Reliability science shows that health professionals can’t agree on many popular theories about why you’re in pain

updated (first published 2009)
by Paul Ingraham, Vancouver, Canada
I am a science writer and a former Registered Massage Therapist with a decade of experience treating tough pain cases. I was an assistant editor for several years. I’ve written hundreds of articles and several books, and I’m known for readable but heavily referenced analysis, with a touch of sass. I am a runner and ultimate player.

Many painful problems are surprisingly mysterious, and there are many theories about why people hurt. Debate can rage for years about whether or not a problem even exists. For instance, chiropractic “subluxations” have been a hot topic for decades now: are these little spinal dislocations actually real? What if five different chiropractors all looked at you, but each diagnosed different spots in your spine that were supposedly “out” and in need of adjustment?

That’s a reliability study.

Reliability studies are awesome: although the concept is obscure to most people, they are accessible, interesting, easy for anyone to understand, and very persuasive. Evidence of unreliable diagnosis can make further debate pointless. If chiropractors can’t agree on where subluxations are in the same patient — and some studies have shown that they can’t1 — then the debate about whether or not subluxations actually exist gets less interesting. A reliability study with a negative result doesn’t necessarily prove anything,2 but it is strongly suggestive, and can be a handy shortcut for consumers. Who wants a diagnosis that will probably be contradicted by each of five other therapists? No one, that’s who.

What if five different chiropractors all looked at you, but each diagnosed different spots in your spine that were supposedly “out” & in need of adjustment?

Reliability jargon

In reliability science, we talk about “raters.” A rater is a judge … of anything. One who rates. The person who makes the call. All health care professionals are raters whenever they are assessing and diagnosing.

Reliability studies are studies of “inter-rater” reliability, agreement, or concordance. In other words, how much do raters agree with each other? Not in a meeting about it later, but on their own. Do they come to similar conclusions when they assess the same patient independently?

There are formulas that express reliability as a score, such as a “concordance correlation coefficient.” For the non-statistician, that boils down to: how often are health care professionals going to come to the same or similar conclusions about the same patient? Every time? Half the time? One in ten?
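One widely used formula of this kind is Cohen’s kappa, which corrects raw agreement between two raters for the agreement they would reach by chance alone. Here is a minimal Python sketch; the diagnosis labels and data are hypothetical, purely for illustration, and not taken from any study cited here:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa).

    1.0 = perfect agreement, 0.0 = no better than chance,
    negative = worse than chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of patients both raters labelled the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters diagnosing ten patients as an "L4" or "L5" problem:
perfect = cohens_kappa(["L4"]*5 + ["L5"]*5, ["L4"]*5 + ["L5"]*5)  # → 1.0
shaky = cohens_kappa(["L4", "L5"]*5, ["L4"]*5 + ["L5"]*5)  # ≈ 0.2, barely better than chance
```

Note that the second pair of raters agree on 6 of 10 patients, which sounds respectable until chance is accounted for: with two equally common labels, coin-flipping raters would agree half the time anyway.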


Gunshot wound diagnosis is super reliable

This reliability thing is not subtle: you don’t need a second opinion for a gunshot wound. Ten out of ten doctors will agree: “Yep, that’s definitely a gunshot wound!” Well, almost.3

That’s high inter-rater reliability.

Lots of diagnostic challenges are much harder, of course. Humans are complex. It’s not always obvious what’s wrong with them. This is why you need second and third opinions sometimes. And it’s perfectly fine to have low reliability in difficult medical situations. Patients are pretty forgiving of low diagnostic reliability when professionals are candid about it. All a doctor has to say is, “I’m not sure. I don’t know. Maybe it’s this, and maybe it isn’t.”

What you have to watch out for is low reliability combined with high confidence: the professionals who claim to know, but can’t agree with each other when tested. Unfortunately, this is a common pattern in alternative medicine. And it is a strong argument that it’s actually alternative medicine practitioners who are “arrogant,” not doctors.

Ten out of ten doctors will agree: “Yep, that’s definitely a gunshot wound!”

Stomach gurgle interpretation is not reliable

True story: a patient of mine, back in the day, a young woman with chronic neck pain and nausea, went to a “body work” clinic for her problem. Three deeply spiritual massage therapists hovered over her for three hours, charging $100/hour each — $900 for the session in total — and provided (among other things) a running commentary/translation of what her stomach was “trying to tell her” about her psychological issues.

True story: my eyes rolled out of their sockets. And my patient was absolutely horrified.

Obviously, if she’d gone to another gurgle-interpreter down the road, her gastric messages would have been interpreted differently.

That’s low inter-rater reliability.

10 examples of unreliable assessment in musculoskeletal medicine

There are numerous common diagnoses and theories of pain that suffer from lousy inter-rater reliability. Here are some good examples:

  1. Craniosacral therapists allege that they can detect subtle defects in the circulation of your cerebrospinal fluid, but reliability testing shows that they can’t agree with each other about it.45
  2. Many kinds of therapists believe that the alignment of the forefoot is important, but a reliability study showed that “the commonplace method of visually rating forefoot frontal plane deformities is unreliable and of questionable clinical value.”6 I know one of these foot alignment kooks: he literally believes that “all pain” is caused by a single joint in the foot, and that he can fix it every time. Again, there’s that arrogance.7
  3. Many therapists, naturopathic physicians, and other self-proclaimed healers use a kind of testing called “applied kinesiology,” which uses a simple strength test as the primary diagnostic tool for all problems, but a simple study showed that practitioners’ efforts were “not more useful than random guessing” — not just poor reliability, but zero reliability.
  4. Motion palpation is used to identify patients who might benefit from spinal manipulative therapy. This is particularly common in chiropractic offices. Unfortunately, trying to detect spinal joint stiffness and/or pain using “motion palpation” didn’t go well in a 2015 test: the examiners found different “problems” in the same patients.8
  5. The Functional Movement Screen™ (FMS) is a set of physical tests of coordination and strength. Although intended to be just a trouble-detection system, in practice its popularity is substantially based on reaching beyond that purpose to actually diagnose biomechanical problems and justify corrective training or treatment. Unfortunately, not only has FMS failed to reliably forecast injuries, but all FMS predictions may be “a product of specious grading.”9
  6. Traditional Chinese medicine acupuncturists couldn’t agree at all on what was wrong with patients who had low back pain. In six cases evaluated by six practitioners on the same day, twenty diagnoses were used at least once — which is pretty excessive. Even an “inexact science” should probably be a little more exact than that.10
  7. “Trigger points [muscle knots] are promoted as an important cause of musculoskeletal pain,” but after several decades we still don’t know whether or not professionals can reliably diagnose trigger points.1112 The data is inconclusive, and there are reasons to doubt it will end well. They almost certainly can’t without adequate training.
  8. “Core instability” is an extremely popular thing to blame for back pain. However, you can’t very well treat core instability if you can’t diagnose it as a problem in the first place. A test of core stability testing was a clear failure: “6 clinical core stability tests are not reliable when a 4-point visual scoring assessment is used.”13 This is a bit problematic for core dogma.
  9. Ever been told your shoulder blade was misbehaving? “Shoulder dyskinesis” is fancy talk for “bad shoulder movement.” Unfortunately, therapists cannot agree on these diagnoses, and a 2013 review in the British Journal of Sports Medicine condemned them: “no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.”14
  10. Surprisingly, professionals often seem to have trouble deciding whether a given foot has a flat arch or a high arch.1516

And so on and on. Over the months and years, I’ll add other nice examples to this list as they occur to me. For contrast, many diagnostic and testing procedures are reliable, such as testing range of motion in people with frozen shoulder.17

An odd example: tuning-fork diagnosis!

Supposedly a humming tuning fork applied to a stress fracture will make it ache. This analysis of studies18 since the 1950s tried to determine if tuning forks (and ultrasound) are actually useful in finding lower-limb stress fractures. Neither technique was found to be accurate: “it is recommended that radiological imaging should continue to be used” instead. Fortunately (for the sake of the elegant quirkiness of the idea), they aren’t saying that a tuning fork actually can’t work … just that it’s not reliable for confirmation, which is kind of a “well, duh” conclusion.


What’s new in this article?

Added an example of diagnosis that is reliable.

Added motion palpation. Added a citation about detecting craniosacral therapy. Started update logging.

Many unlogged updates.


  1. French SD, Green S, Forbes A. Reliability of chiropractic methods commonly used to detect manipulable lesions in patients with chronic low-back pain. J Manipulative Physiol Ther. 2000 May;23(4):231–8. PubMed #10820295.

    I do enjoy reliability studies, and this is one of my favourites. Three chiropractors were given twenty patients with chronic low back pain to assess, using a complete range of common chiropractic diagnostic techniques, the works. Incredibly, assessing only a handful of lumbar joints, the chiropractors agreed on which joints needed adjustment only about a quarter of the time (just barely better than guessing). That’s an oversimplification, but true in spirit: they couldn’t agree on much, and the researchers concluded that all of these chiropractic diagnostic procedures “should not be seen … to provide reliable information concerning where to direct a manipulative procedure.”

  2. The problem may be with the design of the test, or the training and skill of those tested, rather than with what they are looking for. BACK TO TEXT
  3. In the first chapter of his superb book, Complications: A surgeon's notes on an imperfect science, surgeon Atul Gawande tells a fascinating story about a bullet that got lost. Some kid got shot in the butt. There was a classic entry wound. Internal bleeding. No exit wound. It was a critical situation, and they opened him up to get the bullet out, but … no bullet was ever found. Was he shot, or wasn’t he? It was never explained. BACK TO TEXT
  4. Wirth-Pattullo V, Hayes KW. Interrater reliability of craniosacral rate measurements and their relationship with subjects' and examiners' heart and respiratory rate measurements. Phys Ther. 1994 Oct;74(10):908–16; discussion 917–20. PubMed #8090842.

    The first test of the claim that craniosacral therapists are able to palpate changes in cyclical movements of the cranium. They concluded that “therapists were not able to measure it reliably,” and that “measurement error may be sufficiently large to render many clinical decisions potentially erroneous.” They also questioned the existence of craniosacral motion and suggested that CST practitioners might be imagining such motion. This prompted an extensive and emphatic rebuttal from Upledger.

  5. Moran RW, Gibbons P. Intraexaminer and interexaminer reliability for palpation of the cranial rhythmic impulse at the head and sacrum. J Manipulative Physiol Ther. 2001 Mar-Apr;24(3):183–190. PubMed #11313614.

    “Palpation of a cranial rhythmic impulse (CRI) is a fundamental clinical skill used in diagnosis and treatment” in craniosacral therapy. So researchers put it to the test: “two registered osteopaths, both with postgraduate training in diagnosis and treatment, using cranial techniques, palpated 11 normal healthy subjects.” Unfortunately, they couldn’t agree on much: “interexaminer reliability for simultaneous palpation at the head and the sacrum was poor to nonexistent.” Emphasis mine.

  6. Cornwall MW, McPoil TG, Fishco WD, et al. Reliability of visual measurement of forefoot alignment. Foot Ankle Int. 2004 Oct;25(10):745–8. PubMed #15566707.

    This is one of those fun studies that catches clinicians in their inability to come up with the same assessment of a structural problem. Three doctors were asked to “rate forefoot alignment,” but they didn’t agree. From the abstract: “… the commonplace method of visually rating forefoot frontal plane deformities is unreliable and of questionable clinical value.”

  7. The Not-So-Humble Healer: Cocky theories about the cause of pain are waaaay too common in massage, chiropractic, and physical therapy BACK TO TEXT
  8. Walker BF, Koppenhaver SL, Stomski NJ, Hebert JJ. Interrater Reliability of Motion Palpation in the Thoracic Spine. Evidence-Based Complementary and Alternative Medicine. 2015;2015:6. PubMed #26170883. PainSci #54242.

    Two examiners, using standard methods of motion palpation of the thoracic spine, could not agree at all well on the location of joint stiffness or pain in 25 patients. Simplifying the diagnostic challenge did not improve matters. Therefore, “The results for interrater reliability were poor for motion restriction and pain.” This does not bode well for manual therapists who use motion palpation to identify patients who might benefit from spinal manipulative therapy.

    The study only used two examiners, which might be a serious flaw. More raters would certainly be better. Nevertheless, even a small data sample can produce meaningful information if the effect size is robust enough (see It's the effect size, stupid), which it probably is here. Even just two examiners should generate similar results, unless someone is grossly incompetent. If they differ greatly, adding more examiners probably isn’t going to change that.

  9. Whiteside D, Deneweth JM, Pohorence MA, et al. Grading the Functional Movement Screen™: A Comparison of Manual (Real-Time) and Objective Methods. J Strength Cond Res. 2014 Aug. PubMed #25162646. The results are hardly surprising, since FMS fails to take into account “several factors that contribute to musculoskeletal injury.” These concerns must be addressed “before the FMS can be considered a reliable injury screening tool.” BACK TO TEXT
  10. Hogeboom CJ, Sherman KJ, Cherkin DC. Variation in diagnosis and treatment of chronic low back pain by traditional Chinese medicine acupuncturists. Complement Ther Med. 2001 Sep;9(3):154–66. PubMed #11926429.

    Diagnosis by acupuncturists may be unreliable. In this study, “six TCM acupuncturists evaluated the same six patients on the same day” and found that “consistency across acupuncturists regarding diagnostic details and other acupoints was poor.” The study concludes: “TCM diagnoses and treatment recommendations for specific patients with chronic low back pain vary widely across practitioners.”

  11. Lucas N, Macaskill P, Irwig L, Moran R, Bogduk N. Reliability of physical examination for diagnosis of myofascial trigger points: a systematic review of the literature. Clinical Journal of Pain. 2009 Jan;25(1):80–9. PubMed #19158550.

    This paper is a survey of the state of the art of trigger point diagnosis: can therapists be trusted to find trigger points? What science has been done so far? It’s a confusing mess, unfortunately. This paper explains that past research has not “reported the reliability of trigger point diagnosis according to the currently proposed criteria.” The authors also explain that “there is no accepted reference standard for the diagnosis of trigger points, and data on the reliability of physical examination for trigger points are conflicting.” Given these conditions, it’s hardly surprising that the conclusion of the study was disappointing: “Physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points.”

    This is essentially the same conclusion as a review the year before by Myburgh et al.

  12. Gerdesmeyer L, Frey C, Vester J, et al. Radial extracorporeal shock wave therapy is safe and effective in the treatment of chronic recalcitrant plantar fasciitis: results of a confirmatory randomized placebo-controlled multicenter study. Am J Sports Med. 2008 Nov;36(11):2100–2109. PubMed #18832341. PainSci #56185.

    This overconfidently titled paper essentially declares that there is no longer any controversy about ESWT for plantar fasciitis. However, my confidence in their conclusions is suppressed by the fact that the researchers are on the payroll of a company that makes ESWT devices, and the entire study was funded by that company. As always, conflicts of interest are not necessarily a deal-breaker, but they can be, and this one seems particularly strong.

  13. Weir A, Darby J, Inklaar H, et al. Core stability: inter- and intraobserver reliability of 6 clinical tests. Clin J Sport Med. 2010 Jan;20(1):34–8. PubMed #20051732. BACK TO TEXT
  14. Wright AA, Wassinger CA, Frank M, Michener LA, Hegedus EJ. Diagnostic accuracy of scapular physical examination tests for shoulder disorders: a systematic review. Br J Sports Med. 2013 Sep;47(14):886–92. PubMed #23080313. BACK TO TEXT
  15. Sensiba PR, Coffey MJ, Williams NE, Mariscalco M, Laughlin RT. Inter- and intraobserver reliability in the radiographic evaluation of adult flatfoot deformity. Foot Ankle Int. 2010 Feb;31(2):141–5. PubMed #20132751. Although not terrible, even x-rays of the same foot get judged differently: just fine with some measures, merely okay for others. However, that’s radiologists evaluating x-rays: you would hope it would be fairly reliable. The problem is with some kinds of clinicians (see next note). BACK TO TEXT
  16. This is a bit of a cheat: I don’t have a proper reliability study to back this up, just a professional story: when I worked as a massage therapist, it was common for people to come into my office with so-called “flat” feet, convinced by a previous massage therapist (or chiropractor) that they “have no arch left” (or some other motivating hyperbole) … when in fact I could still easily get my finger under their arch up to the first knuckle. That’s something you simply can’t do on someone who really has flat feet. Similarly, though not so common, I have seen people accused by another professional of having high arches, when in fact they look nothing like it to me. So take such diagnoses with a grain of salt. BACK TO TEXT
  17. Tveitå EK, Ekeberg OM, Juel NG, Bautz-Holter E. Range of shoulder motion in patients with adhesive capsulitis; intra-tester reproducibility is acceptable for group comparisons. BMC Musculoskelet Disord. 2008;9:49. PubMed #18405388. PainSci #53284.

    Diagnostic reliability of range of shoulder motion in patients with frozen shoulder is “acceptable.”

  18. Schneiders AG, Sullivan SJ, Hendrick PA, et al. The Ability of Clinical Tests to Diagnose Stress Fractures: A Systematic Review and Meta-analysis. J Orthop Sports Phys Ther. 2012;42(9):760–71. PubMed #22813530. BACK TO TEXT