Sign Language Recognition using Myo Gesture Control Armband
Automatic sign language recognition is a significant step toward easing communication between the hearing and deaf communities. Communication barriers arise for members of the deaf community during daily interactions with people who cannot understand or use sign language. Existing techniques are either intrusive or vulnerable to ambient conditions and user variability. Moreover, most approaches are limited to isolated word recognition rather than sentence-level recognition. In this project, we present "Sign Language Recognition using Myo Gesture Control Armband", a deep learning-based framework that enables end-to-end recognition of sign language gestures at both the word and sentence levels. Our solution uses a wearable sensor-based approach built around the Myo gesture control armband, which streams electromyogram (EMG) data together with inertial signals from its accelerometer, gyroscope, and magnetometer. We plan to apply state-of-the-art machine learning techniques and to evaluate our solution on at least as many words and sentences as are required for basic daily-life communication.
This project aims to aid hearing-impaired people by making communication easier and quicker in public environments such as marketplaces, offices, and hospitals. Communication takes place via a mobile app connected to a Myo Armband worn by the hard-of-hearing user. The armband streams EMG and IMU signals and, using the trained model, the application translates the Pakistan Sign Language (PSL) gestures into their corresponding words.
The proposed implementation for this project is divided into the following stages.
1- Data Collection Stage
The subject first wears the Myo Armband on their right forearm and performs the steps shown on the screen to warm up the armband. Once the armband is warmed up, the subject is shown a gesture from Pakistan Sign Language's video library of gestures. The subject learns the gesture and then performs it 10 times. The signals are stored in two CSV files, one for EMG and the other for IMU signals. Two separate files are needed because EMG signals are sampled at 200 Hz and IMU signals at 50 Hz, so the two files contain different numbers of rows for the same recording.
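A minimal sketch of this two-file logging scheme is shown below. The `on_emg`/`on_imu` callbacks are hypothetical hooks standing in for the Myo SDK's event handlers, and the IMU column layout is an illustrative assumption based on the sensors listed above; the differing callback rates (~200 Hz vs. ~50 Hz) are what produce the unequal row counts.

```python
import csv
import time

emg_file = open("gesture_emg.csv", "w", newline="")
imu_file = open("gesture_imu.csv", "w", newline="")

emg_writer = csv.writer(emg_file)
imu_writer = csv.writer(imu_file)

# 8 EMG channels; IMU columns assumed: accelerometer, gyroscope, magnetometer
emg_writer.writerow(["timestamp"] + [f"emg{i}" for i in range(1, 9)])
imu_writer.writerow(["timestamp",
                     "acc_x", "acc_y", "acc_z",
                     "gyro_x", "gyro_y", "gyro_z",
                     "mag_x", "mag_y", "mag_z"])

def on_emg(emg_sample):          # hypothetical hook, fires ~200 times/second
    emg_writer.writerow([time.time()] + list(emg_sample))

def on_imu(acc, gyro, mag):      # hypothetical hook, fires ~50 times/second
    imu_writer.writerow([time.time()] + list(acc) + list(gyro) + list(mag))
```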
2- Data Pre-Processing Stage
The preprocessing stage takes the continuous data as input and transforms hundreds of continuous rows into a single long feature vector (116 features). This pre-processing method was chosen over more sophisticated techniques because it is more efficient and faster to compute than image-transformation approaches.
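The text states that the final vector has 116 features but does not spell out the recipe, so the per-channel statistics in the sketch below (mean, standard deviation, RMS, and so on) are illustrative assumptions rather than the project's exact pipeline; the point is only to show how a variable-length recording collapses into one fixed-length vector.

```python
import numpy as np

def channel_features(x: np.ndarray) -> list:
    """Summary statistics for one channel of a gesture recording."""
    return [
        x.mean(),
        x.std(),
        np.sqrt(np.mean(x ** 2)),      # root mean square
        x.min(),
        x.max(),
        np.abs(np.diff(x)).sum(),      # waveform length
    ]

def extract_features(emg: np.ndarray, imu: np.ndarray) -> np.ndarray:
    """Collapse a recording (rows = samples, columns = channels)
    into one flat vector, one block of statistics per channel."""
    feats = []
    for ch in range(emg.shape[1]):     # 8 EMG channels
        feats += channel_features(emg[:, ch])
    for ch in range(imu.shape[1]):     # IMU channels
        feats += channel_features(imu[:, ch])
    return np.asarray(feats)

# Example: ~2 s of data -> one flat feature vector
emg = np.random.randn(400, 8)          # 200 Hz * 2 s
imu = np.random.randn(100, 10)         # 50 Hz * 2 s
vector = extract_features(emg, imu)
print(vector.shape)                    # (108,) with these illustrative stats
```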
3- Label Prediction Stage
The transformed data is then passed through the prediction algorithm. At present, we use only classical machine learning algorithms such as SVM, Random Forest, and Extreme Gradient Boosting (XGBoost) for sign prediction. The predicted label is then passed to the application's display screen, where the user sees the message and can respond accordingly.
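A sketch of this prediction stage using the classifiers named above is given below. `X` and `y` are placeholders for the pre-processed feature vectors and gesture labels, and `xgboost` is assumed to be installed as a third-party dependency.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

X = np.random.randn(200, 116)              # placeholder feature vectors
y = np.random.randint(0, 10, size=200)     # placeholder gesture labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "XGBoost": XGBClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))
```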
Communication barriers experienced by hearing-impaired people can lead to low self-esteem, low socioeconomic status, and social isolation. Most such individuals rely on other mechanisms, such as written language and sign language, to convey their thoughts to others. Communicating through writing, however, can be tedious, inefficient, and impersonal. Sign language, by contrast, is a natural way of communicating, composed of combinations of gestures and body movements that correspond to specific meanings, or semantics. That is why, in this project, we present a Sign Language Recognition (SLR) system that can translate sign language into spoken language. We leverage a lightweight, off-the-shelf wearable device, the Myo armband, which connects to a smartphone or laptop via Bluetooth. As such, our system can non-intrusively perform SLR anytime, anywhere. The sensors in the Myo Armband can even distinguish subtle finger configurations (hand shapes) and muscle activity patterns.
The final deliverable of this project will be used by two entities:
1- Hearing-Impaired Person
2- Normal Person
1- HEARING-IMPAIRED PERSON INTERACTION
For the hearing-impaired person, the technical details are as follows.
1- The hearing-impaired user first connects the Myo Armband. If the connection succeeds, they perform the gesture with it; otherwise the Myo Armband reports an exception to the user.
2- If the armband is connected, the gesture data is sent to the Python script for further processing, but only if the input data shape matches the expected input shape; otherwise the script raises an exception (see the sketch after this list).
3- The normal person chooses either to listen to the audio or to read only the text, depending on the option he/she selects.
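A minimal sketch of the shape check from step 2 follows. The expected size (116, matching the pre-processing stage) and the `predict_gesture` helper are illustrative placeholders, not the project's actual script.

```python
import numpy as np

EXPECTED_FEATURES = 116  # assumed to match the pre-processing output

def predict_gesture(features: np.ndarray, model) -> str:
    """Validate the input shape before prediction, raising an
    exception on mismatch as described in step 2."""
    if features.ndim != 1 or features.shape[0] != EXPECTED_FEATURES:
        raise ValueError(
            f"Expected a flat vector of {EXPECTED_FEATURES} features, "
            f"got shape {features.shape}"
        )
    return model.predict(features.reshape(1, -1))[0]
```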
2- NORMAL PERSON INTERACTION
For the normal person, the technical details can be divided into three steps.
1- The normal user first switches the mobile application to Text-to-Gesture mode so that the system receives text as input and outputs the corresponding gesture to the hearing-impaired person.
2- The user then sends a message to the system, and the system searches the database for the gesture corresponding to the message. If the system finds the gesture, it passes both the gesture and the message on to the deaf person; otherwise the normal user sees a mapping error on their screen (see the lookup sketch after this list).
3- The deaf person can choose whether to see the gesture related to the message the normal user sent or the text itself.
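The following sketch illustrates the text-to-gesture lookup from step 2. A real deployment would query the application's gesture database; the dictionary below stands in for that lookup, and `MappingError` is a hypothetical exception representing the mapping error shown to the user.

```python
class MappingError(Exception):
    """Raised when no PSL gesture is mapped to a message."""

GESTURE_DB = {                      # message -> gesture video id (illustrative)
    "hello": "psl_hello.mp4",
    "thank you": "psl_thank_you.mp4",
}

def lookup_gesture(message: str) -> str:
    """Return the gesture mapped to a message, or raise MappingError."""
    key = message.strip().lower()
    if key not in GESTURE_DB:
        raise MappingError(f"No gesture mapped for message: {message!r}")
    return GESTURE_DB[key]
```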
In conclusion, the purpose of the project is to let a hearing-impaired individual communicate with a normal person through a Myo Armband device strapped to the hearing-impaired person's forearm and connected to the mobile application of both users (transmitter and receiver). The hearing-impaired person's mobile application transforms the generated signal data and sends it to the prediction algorithm, which in turn sends the corresponding output (the predicted gesture label) to the normal user. The normal user sends messages either via the microphone or simply by typing. The hearing-impaired person sees the normal user's message either in gesture format or in plain-text format.
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Myo Gesture Control Armband | Equipment | 1 | 39500 | 39500 |
| Total (in Rs) | | | | 39500 |