Agreement of Functional Independence Measure item scores in patients transferred from one rehabilitation setting to another
Published in: European Journal of Physical and Rehabilitation Medicine, Vol. 45, No. 4, p. 479
Main Authors:
Format: Journal Article
Language: English
Published: Italy, 01-12-2009
Subjects:
Summary: Classification and payment systems that incorporate a functional measure used in routine clinical practice can only be as accurate as the underlying functional measure. The test-retest reliability in clinical practice of the individual item scores of the Functional Independence Measure (FIM), a functional measure used in classification and payment systems, has not been investigated. The aim of this study was to analyse paired measurements of FIM item scores carried out in routine clinical practice for patients transferred from one rehabilitation unit to another, and to determine the inter-rater reliability using standard measures of agreement and bias.
Patients who were transferred between two rehabilitation units between August 2006 and October 2007 had a FIM scored on discharge from the original unit and a FIM scored on admission to the second unit. A short time between score and re-score reduced the probability of significant functional change. A retrospective analysis was performed.
Paired FIM item scores from 143 patients were included in the review. Raw agreement between the two scores for each FIM item was low, with a mean of 54 ± 18% of pairs matching, and the range of difference between scores was wide. Weighted kappa values were generally in the fair agreement range, as were the intraclass correlation coefficients. Tests for bias and homogeneity showed that just over half of the items had significant differences between the two sets of scores. Weighted kappa showed only fair agreement for FIM items. Contributing factors could include incomplete FIM training of some staff, insufficient attention to accurate scoring, actual clinical change, differences in patient performance between settings, and variation in scoring arising from the large number of multidisciplinary team staff involved in scoring the FIM in our settings.
Caution is needed when using individual FIM item scores as part of clinical or funding classifications, or in benchmarking, as this study indicates only fair inter-rater reliability of these scores in routine clinical practice.
ISSN: 1973-9095
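As a rough illustration of the agreement statistics reported in the summary (not the authors' analysis code; the abstract does not state the software or kappa weighting used), the sketch below computes raw percentage agreement and a quadratically weighted kappa for a single FIM item scored at discharge and again at admission, using hypothetical paired scores and scikit-learn's cohen_kappa_score.

```python
# Minimal sketch, assuming quadratic kappa weights and hypothetical data;
# this is not the study's actual analysis.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired scores for one FIM item (ordinal 1-7 scale), one pair per patient:
# the discharge score from the first unit and the admission score at the second unit.
discharge_scores = np.array([5, 6, 4, 7, 3, 5, 6, 2, 4, 5])
admission_scores = np.array([5, 5, 4, 6, 3, 4, 6, 3, 4, 5])

# Raw agreement: percentage of pairs in which the two scores match exactly.
raw_agreement = np.mean(discharge_scores == admission_scores) * 100

# Weighted kappa: chance-corrected agreement that penalises larger score
# discrepancies more heavily (quadratic weights assumed here).
weighted_kappa = cohen_kappa_score(discharge_scores, admission_scores, weights="quadratic")

print(f"Raw agreement: {raw_agreement:.1f}%")
print(f"Weighted kappa: {weighted_kappa:.2f}")
```

Under the commonly used Landis and Koch benchmarks, weighted kappa values of roughly 0.21-0.40 fall in the "fair" agreement range referred to in the summary.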