Real-Time Sign Language Recognition Using Deep Learning
Abstract
This project aims to ease the communication challenges faced by deaf and mute people. It does this by creating a system that turns hand signs into spoken words and written text. The paper describes a real-time sign language recognition system built with deep learning. It uses a Convolutional Neural Network (CNN) to identify hand gestures and Google Text-to-Speech (gTTS) to produce voice output. The system captures images of hand signs with a camera, classifies the gestures with the CNN, and then uses gTTS to convert the recognized signs into speech. Because the system works in real time, it makes it easier for people with communication difficulties to connect with others. This approach promotes inclusion and helps reduce language and cultural barriers, making communication simpler for everyone regardless of physical ability.
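The pipeline described above (camera frame → CNN classification → gTTS speech output) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the gesture vocabulary `GESTURE_LABELS` and the helper names `predict_label` and `speak` are assumptions, and the CNN's softmax output is simulated with a fixed probability vector.

```python
GESTURE_LABELS = ["hello", "thanks", "yes", "no"]  # illustrative vocabulary, not from the paper

def predict_label(probabilities):
    """Map a CNN softmax output vector to its gesture label (argmax)."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return GESTURE_LABELS[best]

def speak(text):
    # In the full system, gTTS would render the recognized text as audio, e.g.:
    #   from gtts import gTTS
    #   gTTS(text=text).save("output.mp3")  # requires network access
    print(text)

# Simulated CNN output for one camera frame:
probs = [0.05, 0.85, 0.05, 0.05]
speak(predict_label(probs))  # prints "thanks"
```

In the real system the probability vector would come from a trained CNN applied to each captured frame, and the printed label would instead be passed to gTTS for voice output.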