American Sign Language Recognition using Deep Learning


Bibliographic Details
Published in: 2023 7th International Conference on Computing Methodologies and Communication (ICCMC), pp. 151-155
Main Authors: Puchakayala, Anusha, Nalla, Srivarshini, K, Pranathi
Format: Conference Proceeding
Language: English
Published: IEEE 23-02-2023
Description
Summary: Communication plays a vital role in day-to-day life, but consider a scenario in which two people are unable to communicate because one of them does not comprehend what the other is attempting to say. Most of the deaf-mute community encounters this when conversing with ordinary people. Because sign language is used by persons with hearing and speech impairments, most hearing people do not know or understand it, and this communication gap must be bridged. Therefore, a model has been developed to enable deaf-mute individuals and hearing people to communicate with one another. One such model is a sign language detection system, which uses a deep learning strategy to identify American Sign Language (ASL) gestures and output the corresponding alphabet in text format. A CNN model and a YOLOv5 model were built and compared against each other. The YOLOv5 model produced an accuracy of 84.96%, whereas the CNN model produced an accuracy of 80.59%.
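To make the summary's pipeline concrete, the following is a minimal sketch of the kind of CNN alphabet classifier the paper describes: a grayscale hand-gesture image passes through a convolutional feature extractor and a softmax head that outputs one of the 26 ASL letters. All layer sizes, filter counts, and function names here are illustrative assumptions (written in plain NumPy with untrained random weights), not the authors' actual architecture.

```python
import numpy as np

# The 26-letter output vocabulary described in the abstract.
ALPHABET = [chr(ord('A') + i) for i in range(26)]

def conv2d(img, kernels):
    """Valid 2-D convolution of a (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def classify(img, kernels, weights):
    """Map a hand-gesture image to an ASL letter and class probabilities."""
    feat = np.maximum(conv2d(img, kernels), 0.0)   # ReLU activation
    pooled = feat.mean(axis=(1, 2))                # global average pooling
    logits = weights @ pooled                      # 26-way linear head
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax over the alphabet
    return ALPHABET[int(np.argmax(probs))], probs

# Hypothetical input: random values standing in for a 28x28 gesture frame.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
kernels = rng.standard_normal((8, 3, 3))   # 8 learned 3x3 filters (untrained here)
weights = rng.standard_normal((26, 8))     # output layer, one row per letter
letter, probs = classify(img, kernels, weights)
```

In the paper's comparison, this classification-style CNN is set against YOLOv5, a detection model that additionally localizes the hand in the frame before assigning a letter, which is one plausible reason for the accuracy gap the abstract reports.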
DOI:10.1109/ICCMC56507.2023.10084015