XAI-FR: Explainable AI-Based Face Recognition Using Deep Neural Networks

Bibliographic Details
Published in: Wireless Personal Communications, Vol. 129, No. 1, pp. 663-680
Main Authors: Rajpal, Ankit; Sehra, Khushwant; Bagri, Rashika; Sikka, Pooja
Format: Journal Article
Language: English
Published: New York: Springer US, 2023 (Springer Nature B.V.)
Description
Summary: Face Recognition aims at identifying or confirming an individual's identity in a still image or video. Towards this end, machine learning and deep learning techniques have been successfully employed for face recognition. However, the response of a face recognition system often remains mysterious to the end user. This paper aims to fill this gap by letting an end user know which features of the face the model relied upon in recognizing a subject's face. In this context, we evaluate the interpretability of several face recognizers employing deep neural networks, namely LeNet-5, AlexNet, Inception-V3, and VGG16. For this purpose, a recently proposed explainable AI tool, Local Interpretable Model-Agnostic Explanations (LIME), is used. Benchmark datasets, namely Yale, AT&T, and Labeled Faces in the Wild (LFW), are utilized. We demonstrate that LIME indeed marks the features that are visually significant for face recognition.
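
The record does not include the authors' code, but LIME's model-agnostic image interface, which the summary describes, is easy to sketch. The Python outline below (using the lime and scikit-image packages) is an illustrative assumption, not the paper's implementation: the random projection standing in for a recognizer, the 64×64 input size, and the 40-subject label space (mirroring AT&T) are placeholders for a trained LeNet-5, AlexNet, Inception-V3, or VGG16 and real benchmark faces.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

NUM_SUBJECTS = 40                      # placeholder: e.g. the 40 identities in AT&T
rng = np.random.default_rng(0)
W = rng.normal(size=(64 * 64 * 3, NUM_SUBJECTS))  # stand-in "weights", not a trained model

def predict_fn(images):
    """Black-box prediction function LIME requires: takes a batch of
    images of shape (N, H, W, 3) and returns per-class probabilities
    of shape (N, NUM_SUBJECTS). In practice, replace this body with
    model.predict of a trained LeNet-5 / AlexNet / Inception-V3 / VGG16."""
    logits = images.reshape(len(images), -1) @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

face = rng.random((64, 64, 3))         # placeholder for a real face image, values in [0, 1]

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face, predict_fn,
    top_labels=1,      # explain only the top-scoring identity
    hide_color=0,      # "absent" superpixels are blacked out
    num_samples=1000,  # perturbations used to fit the local surrogate model
)

# Outline the five superpixels that most support the predicted identity.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False,
)
overlay = mark_boundaries(img, mask)   # ready to display, e.g. plt.imshow(overlay)

Because LIME only needs the prediction function, the same wrapper works unchanged for any of the four architectures the paper evaluates; only predict_fn and the input size change.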
ISSN: 0929-6212 (print); 1572-834X (electronic)
DOI: 10.1007/s11277-022-10127-z