Evaluation of the Neer System of Classification of Proximal Humeral Fractures with Computerized Tomographic Scans and Plain Radiographs
Published in: Journal of Bone and Joint Surgery. American Volume, Vol. 78, No. 9, pp. 1371-1375
Format: Journal Article
Language: English
Published: Boston, MA: The Journal of Bone and Joint Surgery, Incorporated, 01-09-1996
Edition: American Volume
Summary: The intraobserver reliability and interobserver reproducibility of the Neer classification system were assessed on the basis of the plain radiographs and computerized tomographic scans of twenty fractures of the proximal part of the humerus. To determine if the observers had difficulty agreeing only about the degree of displacement or angulation (but could determine which segments were fractured), a modified system (in which fracture lines were considered but displacement was not) also was assessed. Finally, the observers were asked to recommend a treatment for the fracture, and the reliability and reproducibility of that decision were measured. The radiographs and computerized tomographic scans were viewed on two occasions by four observers, including two residents in their fifth year of postgraduate study and two fellowship-trained shoulder surgeons. Kappa coefficients then were calculated. The mean kappa coefficient for intraobserver reliability was 0.64 when the fractures were assessed with radiographs alone, 0.72 when they were assessed with radiographs and computerized tomographic scans, 0.68 when they were classified according to the modified system in which displacement and angulation were not considered, and 0.84 for treatment recommendations; the mean kappa coefficients for interobserver reproducibility were 0.52, 0.50, 0.56, and 0.65, respectively. The interobserver reproducibility of the responses of the attending surgeons regarding diagnosis and treatment did not change when the fractures were classified with use of computerized tomographic scans in addition to radiographs or with use of the modified system in which displacement and angulation were not considered; the mean kappa coefficient was 0.64 for all such comparisons. Overall, the addition of computerized tomographic scans was associated with a slight increase in intraobserver reliability but no increase in interobserver reproducibility. The classification of fractures of the shoulder remains difficult because even experts cannot uniformly agree about which fragments are fractured. Because of this underlying difficulty, optimum patient care might require the development of new imaging modalities and not necessarily new classification systems.
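The agreement figures quoted in the summary are kappa coefficients, which correct raw percent agreement for the agreement expected by chance alone. As a minimal sketch of how such a coefficient is computed, the following Python function implements Cohen's kappa for two raters classifying the same items; the fracture labels below are invented for illustration and are not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must classify the same items")
    n = len(rater_a)
    # Observed agreement: fraction of items given the same label by both raters.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of each rater's
    # marginal frequency for that label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = freq_a.keys() | freq_b.keys()
    p_e = sum(freq_a[c] * freq_b[c] for c in labels) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical classifications of ten fractures, loosely styled after
# Neer categories; illustrative only.
observer_1 = ["2-part", "3-part", "2-part", "4-part", "1-part",
              "2-part", "3-part", "2-part", "2-part", "3-part"]
observer_2 = ["2-part", "3-part", "3-part", "4-part", "1-part",
              "2-part", "2-part", "2-part", "2-part", "3-part"]
print(f"kappa = {cohens_kappa(observer_1, observer_2):.2f}")  # 0.69 here
```

The same computation covers both senses used in the study: interobserver reproducibility pairs two different observers' readings, while intraobserver reliability pairs one observer's readings from the two separate occasions.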
ISSN: 0021-9355, 1535-1386
DOI: 10.2106/00004623-199609000-00012