Automatic facial expression recognition combining texture and shape features from prominent facial regions


Bibliographic Details
Published in: IET Image Processing, Vol. 17, No. 4, pp. 1111–1125
Main Authors: Naveen Kumar H N; A Suresh Kumar; Guru Prasad M S; Mohd Asif Shah
Format: Journal Article
Language: English
Published: Wiley, 01-03-2023
Description
Summary: Facial expression is a form of non-verbal communication that precedes verbal communication in both origin and conception. Most existing methods for Automatic Facial Expression Recognition (AFER) focus on global feature extraction, assuming that all facial regions contribute an equal amount of discriminative information to predicting the expression class. The detection and localization of facial regions that contribute significantly to expression recognition, and the extraction of highly discriminative feature distributions from those regions, have not been fully explored. The key contributions of the proposed work are: developing a novel feature distribution that combines the discriminative power of shape and texture features; and determining the contribution of individual facial regions so as to identify the prominent regions that hold highly discriminative information for expression recognition. The texture and shape features considered are Local Phase Quantization (LPQ), Local Binary Pattern (LBP), and Histogram of Oriented Gradients (HOG). A Multiclass Support Vector Machine (MSVM) with one-versus-one classification is used. The proposed work is evaluated on the CK+, KDEF, and JAFFE benchmark facial expression datasets. Its recognition rate is 94.2% on CK+ and 93.7% on KDEF, significantly higher than that of existing handcrafted feature-based methods.
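As a rough illustration of the kind of pipeline the abstract describes — not the authors' exact method — the sketch below extracts an LBP histogram (texture) and a HOG descriptor (shape) from a face region, fuses them by simple concatenation, and classifies with a one-versus-one multiclass SVM. All parameter values (48×48 regions, 8-point uniform LBP, 9-orientation HOG) and the random toy data are assumptions for demonstration only; the paper's region selection, LPQ features, and tuning are not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def region_features(region, lbp_points=8, lbp_radius=1):
    """Concatenate an LBP histogram (texture) with a HOG descriptor (shape).

    Hypothetical fusion by concatenation; parameter choices are illustrative.
    """
    # Uniform LBP yields lbp_points + 2 distinct codes; histogram them.
    lbp = local_binary_pattern(region, lbp_points, lbp_radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                           range=(0, lbp_points + 2), density=True)
    # HOG over the same region captures local shape/gradient structure.
    shape = hog(region, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
    return np.concatenate([hist, shape])

# Toy stand-in data: 20 random 48x48 "face regions" with 4 expression labels.
rng = np.random.default_rng(0)
X = np.array([region_features(rng.random((48, 48))) for _ in range(20)])
y = rng.integers(0, 4, size=20)

# scikit-learn's SVC decomposes multiclass problems one-vs-one internally.
clf = SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)
print(clf.predict(X[:3]).shape)  # (3,)
```

For a 48×48 region these settings give a 10-bin LBP histogram plus a 900-dimensional HOG vector (5×5 blocks × 2×2 cells × 9 orientations), i.e. a 910-dimensional fused feature per region.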
ISSN: 1751-9659; 1751-9667
DOI: 10.1049/ipr2.12700