PainSci summary of Weir 2010

This page is one of thousands in the PainScience.com bibliography. It is not a general article: it is focused on a single scientific paper, and it may provide only just enough context for the summary to make sense. Links to other papers and more general information are provided at the bottom of the page, as often as possible.

Rating: ★★★☆☆ (3-star ratings are for typical studies with no more, or less, than the usual common problems. Ratings are a highly subjective opinion, and subject to revision at any time. If you think this paper has been incorrectly rated, please let me know.)
You can’t very well treat core instability if you can’t diagnose it as a problem in the first place. This test of testing was a clear failure: “6 clinical core stability tests are not reliable when a 4-point visual scoring assessment is used.” Even if core strength is important (a separate question), this evidence clearly shows that no one should be claiming they can detect a core stability problem in the first place. A bit problematic for core dogma.
original abstract † Abstracts here may not perfectly match originals, for a variety of technical and practical reasons. Some abstracts are truncated here if they are particularly long-winded and unhelpful. I occasionally add clarifying notes, and I make some minor corrections.
OBJECTIVE: Core stability is a complex concept within sports medicine and is thought to play a role in sports injuries. There is a lack of reliable and valid clinical tests for core stability. The inter- and intraobserver reliability of 6 tests commonly used to assess core stability was determined.
DESIGN: A video of the tests was shown to 6 observers. A second observation took place 5 weeks later with the same observers.
SETTING: Sports medicine department of a hospital.
PARTICIPANTS: Forty male athletes.
ASSESSMENT OF VARIABLES: Core stability was rated as poor, moderate, good, or excellent by each observer for each of the 6 tests.
MAIN OUTCOME MEASURES: Inter- and intraobserver reliability.
RESULTS: The mean score of all tests was 13.4% poor, 33.3% moderate, 40.1% good, and 13.2% excellent. The intraclass correlation coefficients (ICCs 2,1) for the interobserver reliability for frontal, sagittal, and transverse plane evaluation were 0.09, 0.32, and 0.51, respectively. The ICCs for the unilateral squat, the lateral step-down, and the bridge were 0.41, 0.39, and 0.36, respectively. The ICCs for the intraobserver reliability for frontal, sagittal, and transverse plane evaluation were 0.31, 0.40, and 0.55, respectively. The ICCs for the unilateral squat, the lateral step-down, and the bridge were 0.55, 0.49, and 0.21, respectively.
CONCLUSIONS: The 6 clinical core stability tests are not reliable when a 4-point visual scoring assessment is used. Future research on movement evaluation should be focused on more specific rating methods and training for the observers.
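The reliability numbers in the abstract are intraclass correlation coefficients of the ICC(2,1) form: a two-way random-effects model, absolute agreement, single rater. Values below roughly 0.5 are commonly read as poor reliability, which is why ICCs of 0.09–0.55 sink these tests. As a minimal illustrative sketch (not code from the paper), ICC(2,1) can be computed from a subjects-by-raters score matrix like so:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is an (n_subjects, n_raters) matrix of ratings,
    e.g. athletes in rows and observers in columns.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # per-subject means
    col_means = scores.mean(axis=0)  # per-rater means

    # Sums of squares from a two-way ANOVA without replication.
    ss_rows = k * np.sum((row_means - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((col_means - grand) ** 2)   # between raters
    ss_total = np.sum((scores - grand) ** 2)

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Two hypothetical observers in perfect agreement on four athletes:
perfect = np.array([[1., 1.], [2., 2.], [3., 3.], [4., 4.]])
print(icc_2_1(perfect))  # → 1.0 (perfect reliability)
```

With any disagreement between observers, the error mean square grows and the ICC drops below 1; the 0.5 “poor” cutoff is a convention, not something from this study.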
One article on PainScience.com cites Weir 2010 as a source:
- Is Diagnosis for Pain Problems Reliable? — Reliability science shows that health professionals can’t agree on many popular theories about why you’re in pain
This page is part of the PainScience BIBLIOGRAPHY, which contains plain language summaries of thousands of scientific papers and other sources. It’s like a highly specialized blog. A few highlights:
- Effectiveness of customised foot orthoses for Achilles tendinopathy: a randomised controlled trial. Munteanu 2015 Br J Sports Med.
- A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: the case of felt power. Gronau 2017 Comprehensive Results in Social Psychology.
- The neck and headaches. Bogduk 2014 Neurol Clin.
- Agreement of self-reported items and clinically assessed nerve root involvement (or sciatica) in a primary care setting. Konstantinou 2012 Eur Spine J.
- Effect of NSAIDs on Recovery From Acute Skeletal Muscle Injury: A Systematic Review and Meta-analysis. Morelli 2017 Am J Sports Med.