Facial Gesture Recognition with Human Computer Interaction for Physically Impaired


Anjusha Pimpalshende, Chalumuru Suresh, Archana Potnurwar, Latika Pinjarkar, Vrushali Bongirwar, Ranjit Dhunde

Abstract

Modern technologies have reduced the effort and risk involved in many human activities. Because physical disability is a barrier to interacting with electronic devices, human-computer interaction (HCI) has emerged over the last few decades as a dynamic field of study and a major milestone in helping physically impaired people make use of technology. In this study we propose a method that assists physically impaired people in communicating with such devices. We accomplish this through facial gestures, assuming users whose impairment leaves them unable to use their limbs. Distinct facial gestures, such as eye blinks, mouth opening/closing, and head position, are captured by a camera and fed as a video stream to the system. Consider, for example, a television: an able-bodied person can operate it with a remote control, but a physically challenged person may be unable to interact with it without human assistance. We used HCI in the form of facial gestures to let physically impaired participants interact with the television. An eye blink switches the television on/off, opening/closing the mouth signals the system to start/stop accepting input commands, and the position of the head is monitored as an input command to adjust the volume. An interactive model that recognizes all of these facial gestures provides strong support for physically impaired people interacting with such systems.
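The gesture-to-command mapping described above can be sketched in a short Python program. The abstract does not name an implementation, so the sketch below assumes OpenCV and MediaPipe FaceMesh for face landmark detection; the landmark indices, thresholds, and printed command outputs are illustrative assumptions, not the authors' actual pipeline.

import cv2
import mediapipe as mp

# Assumed FaceMesh landmark indices (commonly cited values, not from the paper):
# 33/133 = left-eye corners, 159/145 = left upper/lower eyelid,
# 13/14 = inner upper/lower lip, 1 = nose tip.
EYE_LEFT, EYE_RIGHT, EYE_TOP, EYE_BOT = 33, 133, 159, 145
LIP_TOP, LIP_BOT, NOSE_TIP = 13, 14, 1

BLINK_THRESHOLD = 0.18   # illustrative: eye aspect ratio below this counts as a blink
MOUTH_THRESHOLD = 0.05   # illustrative: normalized lip gap above this counts as "open"

def eye_aspect_ratio(lm):
    # Vertical eyelid gap divided by eye width, both in normalized coordinates.
    width = abs(lm[EYE_RIGHT].x - lm[EYE_LEFT].x)
    height = abs(lm[EYE_TOP].y - lm[EYE_BOT].y)
    return height / width if width > 0 else 0.0

def mouth_gap(lm):
    # Normalized vertical distance between the inner lips.
    return abs(lm[LIP_BOT].y - lm[LIP_TOP].y)

def main():
    face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    cap = cv2.VideoCapture(0)      # camera video stream, as described in the abstract
    power_on = False               # toggled by an eye blink
    eye_closed_prev = False        # crude debounce: toggle only on the closing edge
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            continue
        lm = results.multi_face_landmarks[0].landmark

        # Eye blink -> toggle the television's power state.
        eye_closed = eye_aspect_ratio(lm) < BLINK_THRESHOLD
        if eye_closed and not eye_closed_prev:
            power_on = not power_on
            print("Power:", "ON" if power_on else "OFF")
        eye_closed_prev = eye_closed

        # Mouth open -> system accepts input commands; mouth closed -> it stops.
        listening = mouth_gap(lm) > MOUTH_THRESHOLD

        # Head position (nose tip relative to the frame centre) -> volume command.
        if power_on and listening:
            if lm[NOSE_TIP].x < 0.4:
                print("Volume down")
            elif lm[NOSE_TIP].x > 0.6:
                print("Volume up")

        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
    cap.release()

if __name__ == "__main__":
    main()

A practical system would smooth the gesture signals over several frames and replace the print statements with actual control messages to the television; the thresholds above would also need calibration per user.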
