Sign language is a form of manual communication that has developed as an alternative to speech among deaf and vocally impaired people. Although many deaf people can speak clearly and can use skills such as lip-reading when communicating with hearing people, such methods are generally inappropriate for communication within the Deaf community. The hands have therefore become the primary means of communication within these communities. The hands are also widely used in communication among hearing people, with gestures often employed to augment speech. However, such gestures bear very little similarity to the signs that make up a sign language. First, these gestures serve only an auxiliary role, rather than being the primary focus of communication as they are in signing. Second, such gestures have no defined meaning but are instead interpreted in the context of the accompanying speech. In contrast, the hand gestures used in sign language are highly formalized, with each gesture having a defined meaning, in much the same manner as the spoken or written word. This allows the construction of sign-language dictionaries in which each sign of the language is equated to one or more words in a spoken language. Hence a sign language consists of a vocabulary of signs in exactly the same way as a spoken language consists of a vocabulary of words. In this design, neural-network-based identification and tracking are used to translate sign language into text, programmed using MATLAB. The introduction of Points of Interest (POI) and track points provides novelty and reduces the storage memory requirement.
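The pipeline described above (extract Points of Interest from each frame, track them between frames, and classify the resulting motion with a neural network) can be sketched as follows. This is a minimal illustration under assumed simplifications, not the paper's MATLAB implementation: POI are taken as the brightest pixels (a real system would use a corner or feature detector), tracking is greedy nearest-neighbour matching, and the classifier weights are random rather than trained. All function and class names here (`extract_poi`, `track`, `SignClassifier`) are hypothetical.

```python
import numpy as np

def extract_poi(frame, k=5):
    """Pick the k brightest pixels as Points of Interest (POI).
    A crude stand-in for the feature detector a real system would use."""
    idx = np.argsort(frame.ravel())[-k:]          # flat indices of the k largest values
    rows, cols = np.unravel_index(idx, frame.shape)
    return np.stack([rows, cols], axis=1).astype(float)

def track(poi_prev, poi_curr):
    """Match each previous POI to its nearest current POI and return the
    concatenated displacements -- the 'track points' fed to the classifier.
    Storing only these displacements, rather than whole frames, is what
    reduces the memory requirement."""
    disp = []
    for p in poi_prev:
        d = np.linalg.norm(poi_curr - p, axis=1)  # distance to every current POI
        disp.append(poi_curr[d.argmin()] - p)     # displacement to nearest match
    return np.concatenate(disp)

class SignClassifier:
    """One-hidden-layer network mapping a trajectory feature vector to a
    sign label. Weights are random here; training is omitted for brevity."""
    def __init__(self, n_in, n_hidden, n_signs, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.W2 = rng.standard_normal((n_hidden, n_signs)) * 0.1

    def predict(self, x):
        h = np.tanh(x @ self.W1)                  # hidden activations
        return int(np.argmax(h @ self.W2))        # index of the predicted sign
```

For example, with two consecutive 8x8 frames whose bright region shifts between them, `extract_poi` yields five (row, col) points per frame, `track` yields a 10-element displacement vector, and `SignClassifier(10, 8, n_signs).predict(...)` returns a sign index.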
Reference Paper: Real-time Sign Language Recognition based on Neural Network Architecture
Authors: Priyanka Mekala, Ying Gao, Jeffrey Fan, and Asad Davari
Source: IEEE

To request the source code for academic purposes, fill out the REQUEST FORM or contact +91 7904568456 via WhatsApp; a fee is applicable.

SIMULATION VIDEO DEMO