Homogeneity of kappa statistics in multiple samples

Bibliographic Details
Published in: Computer Methods and Programs in Biomedicine, Vol. 63, no. 1, pp. 43-46
Main Author: Reed, James F
Format: Journal Article
Language: English
Published: Shannon: Elsevier Ireland Ltd, 01-08-2000
Elsevier Science
Description
Summary: The measurement of intra-observer agreement when the data are categorical has been studied by several investigators since Cohen first proposed kappa (κ) as a chance-corrected coefficient of agreement for nominal scales. Subsequent procedures have been developed to assess the agreement of several raters using a dichotomous classification scheme, to assess majority agreement among several raters using a polytomous classification scheme, and to use κ as an indicator of the quality of a measurement. Further developments include inference procedures for testing the homogeneity of k ≥ 2 independent kappa statistics. An executable FORTRAN program for testing the homogeneity of kappa statistics (κh) across multiple sites or studies is given. The FORTRAN program listing and/or executable programs are available from the author on request.
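The homogeneity test the abstract refers to is conventionally carried out as a chi-square test: each site contributes a kappa estimate and its estimated variance, a pooled kappa is formed as the inverse-variance-weighted mean, and the weighted sum of squared deviations from the pooled value is referred to a chi-square distribution with k − 1 degrees of freedom. The following Python sketch illustrates that standard construction; it is not the author's FORTRAN program, and the site data in the example are invented for illustration.

```python
# Sketch of the standard chi-square test for homogeneity of k >= 2
# independent kappa statistics. Under H0 (all sites share one kappa),
#   X^2 = sum_i (kappa_i - kappa_bar)^2 / var_i
# is approximately chi-square with k - 1 degrees of freedom, where
# kappa_bar is the inverse-variance-weighted mean of the kappa_i.

def kappa_homogeneity(kappas, variances):
    """Return (pooled kappa, chi-square statistic, degrees of freedom)."""
    weights = [1.0 / v for v in variances]                       # inverse-variance weights
    kappa_bar = sum(w * k for w, k in zip(weights, kappas)) / sum(weights)
    chi_sq = sum((k - kappa_bar) ** 2 / v
                 for k, v in zip(kappas, variances))             # weighted squared deviations
    return kappa_bar, chi_sq, len(kappas) - 1

# Hypothetical data: kappa estimates and variances from three sites.
pooled, x2, df = kappa_homogeneity([0.45, 0.52, 0.61], [0.010, 0.008, 0.012])
print(pooled, x2, df)  # compare x2 to the chi-square critical value with df = 2
```

A small X² (relative to the chi-square critical value at k − 1 df) supports pooling the site kappas into a single summary estimate; a large value indicates the agreement coefficient differs across sites.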
ISSN: 0169-2607
1872-7565
DOI: 10.1016/S0169-2607(00)00074-2