B. D. Jadhav, Nipun Munot, Madhura Hambarde and Jueli Ashtikar presented the project titled Hand Gesture Recognition to Speech Conversion in Regional Language in February 2015, developing a device that translates the hand gestures, or sign language, of a deaf-mute person into speech. A voice playback IC provides real-time speech output in the regional language, and an LCD module displays the corresponding text in English [9].
In 2015, Christelle Nasrany and Riwa Bou Abdou, together with Abdallah Kassem and Mustapha Hamad, designed a Sign to Letter and Voice Converter (S2LV) to remove the communication barrier between deaf or mute people and hearing people. The device detects hand gestures and translates the signs into letters and voice, using a smartphone to generate the voice output. Accelerometers, a microcontroller, a smartphone, an Android application, Bluetooth, an analog multiplexer and a digital processor are used in this project. The glove is used to translate the 26 letters of the alphabet: the microcontroller reads the hand gestures and analyses the sign language, and the Android application then generates the letters and voice. The disadvantage of this device is that it translates only the 26 letters of the alphabet, so a mute person cannot fully convey thoughts or feelings to the people around them [10].
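The gesture-to-letter step in a glove system like S2LV can be thought of as comparing the current sensor reading against calibrated templates, one per letter. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the template values and letters are invented for demonstration.

```python
import math

# Hypothetical calibration table: a representative accelerometer reading
# (x, y, z) recorded for each letter. Values are illustrative only.
TEMPLATES = {
    "A": (0.1, 0.9, 0.2),
    "B": (0.8, 0.1, 0.3),
    "C": (0.4, 0.5, 0.7),
}

def classify(reading, templates=TEMPLATES):
    """Return the letter whose template is nearest (Euclidean) to the reading."""
    def dist(template):
        return math.sqrt(sum((r - t) ** 2 for r, t in zip(reading, template)))
    return min(templates, key=lambda letter: dist(templates[letter]))

# A reading close to the "B" template is classified as "B".
print(classify((0.75, 0.15, 0.25)))  # → B
```

In the real device this classification runs on the microcontroller, which then sends the recognized letter over Bluetooth to the Android application for text and voice output.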
Voice for the Mute was designed by Amiya Kumar Tripathy, Dipti Jadhav, Steffi A. Barreto, Daphne Rasquinha and Sonia S. Mathew in 2015. The objective of this project is to help the hearing-impaired and speech-impaired communicate in their daily lives. Voice for the Mute aims to develop a system that takes real-time images and converts them to speech; the system is limited to a 2D view and considers only finger spelling. The device uses a webcam for input, and the signs are processed using Microsoft Visual Studio as the IDE together with OpenCV modules. Image compression, image matching, text-to-speech, human-computer interaction (HCI) and computer vision techniques are used to implement this project. The device is not wearable and cannot stand alone; the hand must also be held static so that an image can be captured for sign-language conversion [11].
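The image-matching stage described above can be illustrated by comparing a captured frame against stored sign templates and picking the lowest sum-of-absolute-differences (SAD) score. This is a toy sketch only: the 3x3 "images" and letters are made up, and the actual project performs this kind of matching with OpenCV on real webcam frames.

```python
# Hypothetical sign templates as tiny grayscale images (nested lists).
TEMPLATES = {
    "A": [[0, 0, 0], [0, 9, 0], [0, 0, 0]],
    "B": [[9, 9, 9], [0, 0, 0], [9, 9, 9]],
}

def sad(img_a, img_b):
    """Sum of absolute pixel differences between two equally sized images."""
    return sum(abs(pa - pb)
               for row_a, row_b in zip(img_a, img_b)
               for pa, pb in zip(row_a, row_b))

def match_sign(frame, templates=TEMPLATES):
    """Return the template letter with the lowest SAD score for the frame."""
    return min(templates, key=lambda letter: sad(frame, templates[letter]))

frame = [[0, 1, 0], [0, 8, 1], [0, 0, 0]]  # a noisy capture resembling "A"
print(match_sign(frame))  # → A
```

In the full pipeline the matched letter would then be passed to a text-to-speech module to produce the spoken output.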