PainScience.com Sensible advice for aches, pains & injuries
 
 
The PainScience Bibliography contains plain language summaries of thousands of scientific papers and other sources, like a specialized blog. This page is about a single scientific paper in the bibliography, Landis 1977.

The measurement of observer agreement for categorical data

Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977 Mar;33(1):159–74. PubMed #843571.
Tags: stats, classics, scientific medicine

PainSci summary of Landis 1977 ★★★★☆

Landis and Koch suggested labels for ranges of Cohen's kappa (https://en.wikipedia.org/wiki/Cohen%27s_kappa) values, describing κ = 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect. These labels were just expert opinion, and they are controversial, but they have been widely cited and used ever since, because they are imprecise enough to be “good enough” for many purposes.
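To make the mapping concrete, here is a minimal Python sketch (not from the paper) that computes Cohen's kappa for two raters and then applies the Landis–Koch labels. The rater data and category names are hypothetical.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    # Observed agreement: proportion of items where the two raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    # Chance agreement: probability both raters independently pick the same category.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

def landis_koch_label(kappa):
    """Map a kappa value to the Landis-Koch (1977) descriptive label."""
    if kappa < 0:
        return "poor"  # values below zero are labelled "poor" in the original paper
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# Hypothetical example: two raters classifying the same 8 cases as yes/no.
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "no", "yes", "yes", "yes", "no"]
k = cohens_kappa(rater_a, rater_b)
print(f"kappa = {k:.2f} ({landis_koch_label(k)})")  # kappa = 0.50 (moderate)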

original abstract

This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.
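The “generalized kappa-type statistics” in the abstract extend agreement measurement to several observers at once. As a loose illustration of that idea only, and not the procedure developed in the paper, here is a Python sketch of a related multi-rater agreement measure (Fleiss' kappa), using hypothetical diagnosis data.

def fleiss_kappa(ratings):
    """ratings: one dict per subject, mapping category -> number of raters who
    chose it; every subject is assumed to be rated by the same number of raters."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0].values())
    categories = {c for row in ratings for c in row}
    # Per-subject agreement: proportion of rater pairs that agree, averaged over subjects.
    p_bar = sum(
        (sum(row.get(c, 0) ** 2 for c in categories) - n_raters)
        / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_subjects
    # Chance agreement from the overall category proportions.
    p_e = sum(
        (sum(row.get(c, 0) for row in ratings) / (n_subjects * n_raters)) ** 2
        for c in categories
    )
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 4 patients, each diagnosed by 3 clinicians.
ratings = [
    {"diagnosis A": 3},
    {"diagnosis A": 2, "diagnosis B": 1},
    {"diagnosis B": 3},
    {"diagnosis A": 1, "diagnosis B": 2},
]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")  # 0.33, "fair" on the Landis-Koch scale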

related content

These two articles on PainScience.com cite Landis 1977 as a source:

