AI Based Sign Language Interpreter For Disabled Persons
Disabled persons often face communication problems when interacting with the general public, and many rely on sign language to communicate with others. The main motivation behind this project is to design an artificial intelligence (AI) based robot that will work as an interpreter between disabled persons and the general public. It will use a camera to acquire visual signals, interpret the symbols of American Sign Language using machine learning, and convert them to English text. The text will then be synthesized to speech: the proposed design will play the synthesized speech on the speaker and display it in text form on the screen. Real-time input from the camera will be acquired and fed into a Raspberry Pi, where AI-based sign language recognition will convert the predicted signs to text and speech. The proposed interpreter will work in two modes. In the first mode, blind and deaf persons will use the prototype to interpret messages from others; in the second mode, disabled persons will use it to convey messages to the general public. The proposed project can be used in public places and academic institutions to facilitate disabled persons and minimize communication barriers.
Major objectives of the proposed project are as follows:
The proposed design consists of a robot that takes hand gestures as input from a camera device so that sign language prediction can be performed. The robot will detect signs within a specific range; once prediction is complete, the result will be displayed on the screen and the audio will be played on the speaker. The main blocks of the proposed design methodology are shown in the block diagram and described below:
Camera
The camera will be interfaced with the Raspberry Pi to acquire input frames for further processing. Each frame contains a hand gesture; the hand is segmented out by ignoring the background so that hand features can be extracted. After acquisition, the image is flipped before being passed to the embedded system, and then it is fed to the Raspberry Pi for pre-processing, segmentation, and execution of the artificial intelligence algorithm.
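As a rough illustration, the frame-acquisition step might look like the following OpenCV sketch. The library choice (`cv2.VideoCapture`) and the camera index are assumptions; the source does not name a specific capture API:

```python
import cv2

# Open the default camera (index 0). On a Raspberry Pi this may be a USB
# webcam or the CSI camera exposed through V4L2 (an assumption here).
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Camera not found")

while True:
    ok, frame = cap.read()           # grab one BGR frame
    if not ok:
        break
    frame = cv2.flip(frame, 1)       # flip horizontally, as described above
    cv2.imshow("input", frame)       # preview window for debugging
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```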
Embedded System
The embedded system is responsible for acquiring the visual input signals from the user, running the AI models, and making a prediction for the corresponding gesture. A Raspberry Pi will be used for processing the visual data and for AI-based prediction. The embedded system also interfaces with the display screen and the speaker so that each recognized gesture is displayed in the form of text and played back as speech for blind people.
Data Acquisition and Pre-Processing
Visual images of sign symbols are acquired using the camera, and preprocessing is performed on them. First, the acquired images are flipped so that both left-hand and right-hand symbols are handled. Next, segmentation is performed to isolate the user's hand: the hand gesture is separated from the rest of the image by subtracting the background from the frame. An area-size threshold also limits the usable distance from the camera so that the results stay precise.
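A minimal segmentation sketch, assuming OpenCV's MOG2 background subtractor and an illustrative area threshold (the source only states that the background is subtracted and an area-size threshold is applied; the subtractor choice and the `MIN_AREA` value are assumptions):

```python
import cv2

# Background model; MOG2 is one common choice for background subtraction
# (an assumption, not specified by the source).
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
MIN_AREA = 5000  # illustrative pixel-area threshold; tune for the camera range

def segment_hand(frame):
    """Return a binary hand mask, or None if the hand is too small or too far."""
    mask = bg.apply(frame)                        # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)                # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)     # largest blob = the hand
    if cv2.contourArea(hand) < MIN_AREA:          # enforce the distance limit
        return None
    clean = mask * 0                              # blank image, same size/type
    cv2.drawContours(clean, [hand], -1, 255, cv2.FILLED)
    return clean
```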
AI-Based Sign Recognition
After receiving the input from the camera, the hand is segmented out and compared against the sign language dataset. Classifier models are trained on the dataset to obtain better results at the output. Once a model is trained, it acquires the visual input, compares it with the dataset, and predicts the sign accordingly.
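Since the output stage below refers to a KNN model, a training sketch along these lines is plausible. The scikit-learn API is real, but the feature representation (flattened 64x64 hand masks) and the dataset file names are hypothetical:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pre-extracted dataset: one flattened 64x64 hand mask per row,
# labeled with the ASL letter it shows.
X = np.load("asl_features.npy")
y = np.load("asl_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)   # k=5 is an illustrative choice
knn.fit(X_train, y_train)
print("held-out accuracy:", knn.score(X_test, y_test))

def predict_sign(hand_mask):
    """Map one segmented 64x64 hand mask to an ASL letter."""
    return knn.predict(hand_mask.reshape(1, -1))[0]
```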
Speech Synthesis and Display
To assist disabled persons, once the prediction is completed the results are presented in two output modes, as required by the project prototype. For the deaf community, the output is shown as text on the display. The output of the KNN model is first checked by an auto-correct library to determine whether the predicted word needs correction; if it does, the library corrects the word, and the corrected word is then converted into speech.
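A sketch of this output stage, assuming the `autocorrect` package for spelling correction and `pyttsx3` for offline text-to-speech (the source mentions an auto-correct library but not a specific package, and the TTS engine is an assumption):

```python
from autocorrect import Speller
import pyttsx3

spell = Speller(lang="en")     # spelling corrector for the predicted word
engine = pyttsx3.init()        # offline TTS engine, usable on a Raspberry Pi

def present_prediction(raw_word):
    word = spell(raw_word)     # e.g. "helo" -> "hello"
    print(word)                # text output for the deaf community
    engine.say(word)           # speech output for blind users
    engine.runAndWait()

present_prediction("helo")
```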

Figure 1: Block Diagram of the Proposed Sign Language Interpreter
Major benefits of the project are:
The technical details and final deliverables of the project are given as follows:
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Raspberry Pi | Equipment | 1 | 25000 | 25000 |
| Raspberry Pi Camera | Equipment | 1 | 3000 | 3000 |
| Speaker | Equipment | 1 | 2000 | 2000 |
| Monitor | Equipment | 1 | 10000 | 10000 |
| Robotic Structure | Equipment | 1 | 15000 | 15000 |
| Keyboard | Equipment | 1 | 2000 | 2000 |
| Mouse | Equipment | 1 | 2000 | 2000 |
| Miscellaneous | Miscellaneous | 1 | 10000 | 10000 |
| Total (in Rs) | | | | 69000 |