Eigenspace-based fall detection and activity recognition from motion templates and machine learning

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 39, No. 5, pp. 5935-5945
Main Authors: Olivieri, David Nicholas; Gómez Conde, Iván; Vila Sobrino, Xosé Antón
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-04-2012
Description
Summary: Automatic recognition of anomalous human activities and falls in an indoor setting from video sequences could be an enabling technology for low-cost, home-based health care systems. Detection systems based upon intelligent computer vision software can greatly reduce the costs and inconveniences associated with sensor-based systems. In this paper, we propose such a software system based upon a spatio-temporal motion representation, the Motion Vector Flow Instance (MVFI) template, which captures relevant velocity information by extracting the dense optical flow from video sequences of human actions. Automatic recognition is achieved by first projecting each human action video sequence, consisting of approximately 100 images, into a canonical eigenspace, and then performing supervised learning to train multiple actions from a large video database. We show that our representation, together with a canonical transformation of image sequences by PCA and LDA, provides excellent action discrimination. We also demonstrate that by including both the magnitude and direction of the velocity in the MVFI, sequences with abrupt velocities, such as falls, can be distinguished from other daily human actions with both high accuracy and computational efficiency. As an added benefit, we demonstrate that, once trained, our method for detecting falls is robust and attains real-time performance.
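
The abstract describes a three-stage pipeline: dense optical flow from each clip, a velocity template per sequence, and a PCA+LDA eigenspace projection feeding a supervised classifier. The minimal Python sketch below illustrates that shape of pipeline only; it is not the authors' implementation. OpenCV's Farneback algorithm stands in for the paper's dense optical flow, the motion_template helper is a hypothetical, simplified proxy for the MVFI encoding, and a linear SVM stands in for whatever supervised learner the paper actually uses.

import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def motion_template(frames, size=(64, 48)):
    # Accumulate per-pixel speed and direction of the dense optical flow
    # over one action sequence (roughly 100 frames), yielding a single
    # velocity template per video -- a rough proxy for an MVFI template.
    w, h = size
    acc = np.zeros((h, w, 2), dtype=np.float32)
    prev = cv2.resize(cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY), size)
    for frame in frames[1:]:
        gray = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), size)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        acc[..., 0] += mag   # magnitude: falls produce abrupt, large values
        acc[..., 1] += ang   # direction: separates e.g. sitting from falling
        prev = gray
    acc /= max(len(frames) - 1, 1)
    return acc.ravel()       # one flat feature vector per sequence

# Hypothetical training data: `videos` is a list of frame lists (one per
# action clip) and `labels` holds class names such as "walk" or "fall".
# X = np.stack([motion_template(v) for v in videos])
# clf = make_pipeline(PCA(n_components=50),          # canonical eigenspace
#                     LinearDiscriminantAnalysis(),  # class-separating axes
#                     SVC(kernel="linear"))          # supervised classifier
# clf.fit(X, labels)
# clf.predict(X[:1])         # -> e.g. array(['fall'])

Keeping both magnitude and direction in the template is the point the abstract emphasizes: abrupt, high-velocity motions such as falls land far from everyday actions once the sequences are projected through PCA and LDA.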
ISSN: 0957-4174 (print); 1873-6793 (electronic)
DOI: 10.1016/j.eswa.2011.11.109