Enhancing Sign Language Identification using Key Point-Based Feature Extraction Methods
Abstract
Human communication serves to convey wants, emotions, and thoughts to others, most commonly through speech. However, people who are deaf or mute cannot communicate through speech and must rely on sign language instead. This work applies several feature extraction approaches to static hand gesture images, with an emphasis on FAST (Features from Accelerated Segment Test), SIFT (Scale-Invariant Feature Transform), and ORB (Oriented FAST and Rotated BRIEF). Hybrid combinations such as FAST+ORB and FAST+SIFT are also studied to improve performance and robustness. Experiments show that these hybrid techniques increase computational efficiency and feature-matching accuracy, making them suitable for real-time Sign Language Recognition (SLRecog) applications. This comparative study aids in selecting the best feature extraction methods for gesture-based human-computer interaction systems.