Adil Khan 9 months ago
AdiKhanOfficial #FYP Ideas

AI Based Sign Language Interpreter For Disabled Persons


Project Title

AI Based Sign Language Interpreter For Disabled Persons

Project Area of Specialization

Artificial Intelligence

Project Summary

Disabled persons often face communication problems while interacting with the general public and therefore rely on sign language to communicate with others. The main motivation behind this project is to design an artificial intelligence (AI) based robot that can act as an interpreter between disabled persons and the general public. It will use a camera to acquire visual signals, interpret the symbols of American Sign Language using machine learning, and convert them to English text. The text will then be synthesized to speech. The proposed design will play the synthesized speech on a speaker and display it in text form on a screen. Real-time input from the camera will be acquired and fed into a Raspberry Pi, where AI-based sign language recognition will be performed to convert the predicted signs to text and speech. The proposed interpreter will work in two modes. In the first mode, blind and deaf persons will use the prototype to interpret messages from others; in the second mode, disabled persons can use it to convey messages to the general public. The proposed project can be used in public places and academic institutions to facilitate disabled persons and minimize communication barriers.

Project Objectives

The major objectives of the proposed project are the following:

  • A professional design that can help disabled persons in communication.
  • A cost-effective standalone sign language interpreter system.
  • Acquisition of visual signals and preprocessing to extract hand landmarks using the MediaPipe library.
  • Use of machine learning to accurately classify the symbols of American Sign Language.
  • Design of a functional prototype that can assist disabled persons in multiple modes of operation.

Project Implementation Method

The proposed design consists of a robot that takes hand gestures as input through a camera device so that sign language prediction can be performed. The robot will detect signs within a specific range; once prediction is complete, the result will be displayed on the screen and the corresponding audio will be played on the speaker. The main blocks of the proposed design methodology are shown in the block diagram and described below:

Camera

The camera will be interfaced with the Raspberry Pi to acquire the input for further processing. Input from the camera consists of frames containing hand gestures; the hand will be segmented out of each frame by ignoring the background to extract the hand features. After acquisition, each image is flipped before being fed to the Raspberry Pi for pre-processing, segmentation, and execution of the artificial intelligence algorithm.
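The horizontal flip applied to each acquired frame can be illustrated with a minimal, dependency-free sketch. The function name and the nested-list frame representation are illustrative assumptions; in the actual prototype this step would operate on camera frames (e.g. arrays of pixels).

```python
def flip_horizontal(frame):
    """Mirror each pixel row of a frame, matching the flip applied
    before prediction so left- and right-hand signs share one
    orientation. `frame` is a list of rows (illustrative stand-in
    for a real camera frame)."""
    return [row[::-1] for row in frame]

# Toy 2x3 "frame" of pixel values
frame = [[1, 2, 3],
         [4, 5, 6]]
print(flip_horizontal(frame))  # [[3, 2, 1], [6, 5, 4]]
```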

Embedded System

The embedded system is responsible for acquiring the visual input signals from the user, running the AI models, and making predictions for each gesture. A Raspberry Pi will be used for processing the visual data and for the AI-based prediction. The embedded system also interfaces with the display screen and speaker to show the recognized gesture as text and play it back as speech for blind people.

Data Acquisition and Pre-Processing

Visual images of sign symbols are acquired using the camera, and preprocessing is performed on them. First, the acquired images are flipped so that both left- and right-hand symbols can be handled. Next, segmentation is performed to isolate the user's hand: the hand gesture is separated from the rest of the image by subtracting the background from the frame. An area-size threshold also limits the distance from the camera to ensure precise results.
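The area-size threshold described above can be sketched as a simple ratio check on the segmented hand's bounding-box area relative to the full frame. The function name and the specific ratio band are illustrative assumptions; the proposal does not state the actual threshold values.

```python
def hand_in_range(bbox_area, frame_area, min_ratio=0.05, max_ratio=0.6):
    """Accept the segmented hand only when its bounding-box area falls
    within a ratio band of the full frame area, which indirectly limits
    how far the user can be from the camera. The band limits here are
    placeholder values for illustration."""
    ratio = bbox_area / frame_area
    return min_ratio <= ratio <= max_ratio

frame_area = 320 * 240  # e.g. a QVGA frame
print(hand_in_range(5000, frame_area))   # True: hand at a usable distance
print(hand_in_range(500, frame_area))    # False: hand too far away
```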

AI-Based Sign Recognition

After input is received from the camera, the hand is segmented out and compared against a sign language dataset. Classifier models are trained on this dataset to obtain better results at the output. Once the model is trained, it acquires the visual input, compares it with the dataset, and predicts the corresponding sign.
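Since the output stage later refers to a KNN model, the classification step can be sketched as a k-nearest-neighbours vote over landmark feature vectors. This is a minimal stand-in: the toy 2-D points, function name, and choice of k are illustrative assumptions, not the actual trained model, which would operate on full hand-landmark vectors.

```python
import math
from collections import Counter

def knn_predict(query, samples, labels, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples under Euclidean distance."""
    dists = sorted(
        (math.dist(query, s), lab) for s, lab in zip(samples, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D stand-ins for hand-landmark feature vectors of two signs
signs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
         (5.0, 5.0), (5.0, 6.0), (6.0, 5.0)]
classes = ["A", "A", "A", "B", "B", "B"]
print(knn_predict((0.2, 0.2), signs, classes))  # "A"
```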

Speech-Synthesis And Display

To assist disabled persons once prediction is complete, the results are presented in two output modes, as required by the project prototype. For the deaf community, the output is shown on the display. The output word of the KNN model is checked by an auto-correct library: if the word needs correction, the library corrects it, and the corrected word is then converted into speech.
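The auto-correct step can be sketched as choosing the closest vocabulary word by edit distance. This is a simplified stand-in for whatever auto-correct library the prototype uses; the function names, dynamic-programming implementation, and tiny vocabulary are illustrative assumptions.

```python
def edit_distance(a, b):
    """Levenshtein distance between two words, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def autocorrect(word, vocabulary):
    """Return the vocabulary word closest to `word` by edit distance."""
    return min(vocabulary, key=lambda w: edit_distance(word, w))

# Toy vocabulary; a real system would use a full English word list
print(autocorrect("HELOO", ["HELLO", "HELP", "WORLD"]))  # "HELLO"
```

The corrected word would then be passed to a text-to-speech engine for playback on the speaker.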

Figure 1: Block Diagram of the Proposed Sign Language Interpreter

Benefits of the Project

Major benefits of the project are:

  • An AI based smart interpreter that can be used to assist disabled persons.
  • Using speech synthesis and graphical display, it can be used both by blind and deaf.
  • A cost-effective standalone solution that can be used in market places, shopping malls and academic institutions.
  • It can also be used during training of disabled persons in schools for special people.

Technical Details of Final Deliverable

The technical details and final deliverable of the project are given as follows:

  • Acquisition of visual signals and preprocessing to get hand landmarks using Mediapipe library.
  • Artificial Intelligence based real-time prediction of American Sign Language with auto-correct capability.
  • Conversion of sign language to text and speech to assist the deaf and blind individuals.
  • An embedded system based standalone interpreter that can be used in the field and public places.

Final Deliverable of the Project

HW/SW integrated system

Core Industry

Health

Other Industries

IT

Core Technology

Artificial Intelligence(AI)

Other Technologies

Robotics

Sustainable Development Goals

Good Health and Well-Being for People

Required Resources

Item Name            Type           No. of Units   Per Unit Cost (in Rs)   Total (in Rs)
Raspberry Pi         Equipment      1              25000                   25000
Raspberry Pi Camera  Equipment      1              3000                    3000
Speaker              Equipment      1              2000                    2000
Monitor              Equipment      1              10000                   10000
Robotic Structure    Equipment      1              15000                   15000
Keyboard             Equipment      1              2000                    2000
Mouse                Equipment      1              2000                    2000
Miscellaneous        Miscellaneous  1              10000                   10000
Total (in Rs)                                                              69000
If you need this project, please contact me on contact@adikhanofficial.com