Investigation of 3-D Relational Geometric Features for Kernel-Based 3-D Sign Language Recognition


Bibliographic Details
Published in: 2019 IEEE International Conference on Intelligent Systems and Green Technology (ICISGT), pp. 31 - 313
Main Authors: Kiran, P. Sasi, Kumar, D. Anil, Kishore, P.V.V., Kumar, E. Kiran, Kumar, M. Teja Kiran, Sastry, A.S.C.S.
Format: Conference Proceeding
Language: English
Published: IEEE, 01-06-2019
Description
Summary: Extraction and recognition of human gestures in 3D sign language is a challenging task. 3D sign language gestures are sets of hand and finger movements made with respect to the face, head and body. 3D motion capture technology captures 3D sign gesture videos that are often affected by background, self-occlusions and lighting. This paper investigates the relations between joints on the 3D skeleton. Kernel-based methods can be remarkably effective for recognizing 2D and 3D signs; this work explores the potential of global alignment kernels, using 3D motion capture data for 3D sign language recognition. Accordingly, the paper encodes five 3D relational geometric features (distance, angle, surface, area and perimeter) into a global alignment kernel based on the similarity between a query and a database sequence. The proposed framework has been tested on a sign language dataset of 800 gestures captured by five subjects (i.e., 5 × 800 = 4,000 gesture samples) and on three other publicly available action datasets, namely CMU, HDM05 and NTU RGB-D. The proposed method outperforms other state-of-the-art methods on these datasets.
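The abstract describes encoding frame-level relational geometric features of the 3D skeleton into a global alignment kernel that scores the similarity between a query sequence and a database sequence. The sketch below is a minimal illustration of that idea, not the authors' implementation: the choice of joint pairs and triplets, the Gaussian local kernel, the `sigma` bandwidth and all function names are assumptions, since the record gives no implementation details. It follows the standard dynamic-programming recursion for global alignment kernels over time series.

```python
import numpy as np

def relational_features(joints):
    """Illustrative 3D relational geometric features for one frame.

    `joints` is an (N, 3) array of joint positions. Pairwise distances,
    inter-bone angles, and triangle areas/perimeters loosely mirror the
    five features named in the abstract; the exact joint pairs/triplets
    used in the paper are not specified here.
    """
    n = len(joints)
    feats = []
    # Pairwise joint distances.
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(np.linalg.norm(joints[i] - joints[j]))
    # Angle, triangle area and perimeter over consecutive joint triplets.
    for i in range(n - 2):
        a, b, c = joints[i], joints[i + 1], joints[i + 2]
        u, v = a - b, c - b
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))        # angle at b
        feats.append(0.5 * np.linalg.norm(np.cross(u, v)))       # triangle area
        feats.append(np.linalg.norm(a - b) + np.linalg.norm(b - c)
                     + np.linalg.norm(c - a))                    # perimeter
    return np.asarray(feats)

def global_alignment_kernel(X, Y, sigma=1.0):
    """Global alignment kernel between two feature sequences.

    X is (T1, d), Y is (T2, d). Sums a Gaussian local similarity over
    all monotonic alignments via dynamic programming; the paper's exact
    local kernel and normalisation may differ.
    """
    T1, T2 = len(X), len(Y)
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    # M[i, j] accumulates the scores of all alignments of X[:i] and Y[:j].
    M = np.zeros((T1 + 1, T2 + 1))
    M[0, 0] = 1.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            M[i, j] = K[i - 1, j - 1] * (M[i - 1, j] + M[i, j - 1] + M[i - 1, j - 1])
    return M[T1, T2]

# Usage sketch: compare two gesture clips given as (T, N, 3) joint arrays.
# query_feats = np.stack([relational_features(f) for f in query_clip])
# db_feats = np.stack([relational_features(f) for f in database_clip])
# score = global_alignment_kernel(query_feats, db_feats, sigma=1.0)
```

In a kernel-based recognizer, such scores between a query and every database sequence would form the kernel matrix fed to a classifier such as an SVM; that pipeline choice is an assumption here, as the record does not state the downstream classifier.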
DOI:10.1109/ICISGT44072.2019.00022