
2025-06-28 16:25:32 - Adil Khan

Project Title

B Able Kids version

Project Area of Specialization

Artificial Intelligence

Project Summary

Our project, B-Able Kids Version, is a portable device that will overcome many communication difficulties and provide convenience for mute/hearing-impaired and autistic children. The main aim of the project is to build a bi-directional portable system consisting of a camera, a screen, input buttons, and a microcontroller board (Raspberry Pi). The system will recognize real-time sign language given by the user as input and turn it into a meaningful sentence or voice note, depending on the user's requirement; conversely, text or speech given as input will be converted into the corresponding sign language. For autistic kids, the device will also serve as a learning tool covering the English alphabet, counting, rhymes, Urdu, and image recognition. The project will use state-of-the-art techniques in text/speech recognition and hand-gesture/sign-language recognition based on NLP and computer vision, and a Google API will help display the output in Urdu and English. The techniques examined are classified into stages: data acquisition, pre-processing, segmentation, feature extraction, and classification, with different algorithms elaborated at each stage and their merits measured. Our goal is to design an algorithm that minimizes the processing time from input to output.
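
The five stages named above can be wired as one sequential pipeline. A minimal sketch in Python, where every stage is a hypothetical stub standing in for the real algorithm (the stub names, the toy feature values, and the two-label decision rule are all invented for illustration):

```python
# Hypothetical stubs for the five stages: acquisition, pre-processing,
# segmentation, feature extraction, and classification.

def acquire():
    return {"frame": "raw camera frame"}       # camera would supply this

def preprocess(data):
    data["frame"] = "denoised frame"           # e.g. crop/resize/denoise
    return data

def segment(data):
    data["hand"] = "hand region"               # isolate the hand from background
    return data

def extract(data):
    data["features"] = [0.1, 0.7]              # toy feature vector
    return data

def classify(data):
    # Toy rule standing in for the real classifier.
    return "A" if data["features"][1] > 0.5 else "B"

STAGES = [preprocess, segment, extract]

def run_pipeline():
    """Push one acquired frame through every stage and return the label."""
    data = acquire()
    for stage in STAGES:
        data = stage(data)
    return classify(data)

print(run_pipeline())  # -> A
```

The point of the sketch is only the control flow: each stage consumes the previous stage's output, so individual algorithms can be swapped without touching the rest of the pipeline.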

Project Objectives

Our portable intelligent e-learning system will take input through hand gestures, so that deaf/mute users can communicate with hearing people, and through voice/text, so that hearing people can communicate with impaired persons and autistic children, supporting bi-directional communication. It will also provide image recognition and a learning environment for autistic children, including rhymes, the English alphabet (A-Z), the Urdu alphabet, and finger counting with calculation.

Project Implementation Method

1. Design phase

Compared with alternative solutions, our software will take input in sign language and generate both text-based and voice-based output in real time, which was not available in previous years' projects, so that even those who do not understand sign language will be able to understand the message the impaired person is conveying.

High-Level Design (HLD)

Brief description:
Understand the real-time sign language provided by the user as input and turn it into a meaningful sentence or voice note, depending on the user's requirement.

Convert the text or speech given as input by the user into the corresponding sign language.

Low-Level Design (LLD)

Functional logic of the modules

Using OpenCV for computer vision, a deep learning algorithm (CNN), and a machine
learning algorithm (KNN).

Dataset from Kaggle
Complete inputs and outputs for each module

Text/voice to sign language
Input: text/voice
Output: image/video

Sign language to text/voice
Input: image/video
Output: text/voice
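
As a concrete illustration of the KNN step named above (comparing an input feature vector against the labeled dataset), here is a minimal sketch with NumPy. The toy feature vectors and the two sign labels are invented for illustration; real features would come from the segmented hand region:

```python
import numpy as np

def knn_predict(features, train_X, train_y, k=3):
    """Classify one feature vector by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(train_X - features, axis=1)   # Euclidean distance to every sample
    nearest = np.argsort(dists)[:k]                      # indices of the k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                     # most common label wins

# Toy dataset: two clusters standing in for two different signs
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array(["A", "A", "B", "B"])

print(knn_predict(np.array([0.05, 0.1]), train_X, train_y))  # -> A
```

KNN needs no training step, which keeps the comparison against a Kaggle dataset simple, at the cost of computing a distance to every stored sample at query time.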

2. Implementation phase

We will use the Google Translate API and various machine learning and deep learning algorithms, including KNN, CNN, and NLP algorithms. In Python we will use ML libraries such as OpenCV (computer vision), Keras, TensorFlow, and Clearent. Keras will play an important role in data preprocessing, and KNN will be used for comparing the input data with the provided dataset. Using computer vision, input will be given to the model; in the next step the sign language will be converted to text; and finally text-to-speech will be applied to convert the generated text to speech, and vice versa.
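
Under the hood, the core operation a CNN applies to each camera frame is 2-D convolution. A from-scratch NumPy sketch of a single convolution step (the tiny image and the edge-detecting kernel are illustrative only; a trained Keras model would learn such kernels automatically):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and sum products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny image with a bright right half; the kernel responds to vertical edges.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

edges = conv2d(image, kernel)
print(edges)  # strong response only at the column where dark meets bright
```

Stacking many such learned kernels, with pooling and a final dense layer, is what lets a CNN map a hand image to a sign label.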

3. Testing phase

The model will be trained with different backgrounds, skin colors, and hand sizes through computer vision, and with different types of voices, to achieve maximum accuracy of the predicted data. Data preprocessing will be done using different classifiers and algorithms to reach the maximum accuracy level and to minimize processing time by removing outliers, normalizing, and cleaning the raw data. The final testing will be done manually and automatically using different tools.
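
The outlier-removal and normalization steps mentioned above might look like this in NumPy. The 1.5x IQR rule and min-max scaling are standard preprocessing choices, not ones mandated by the project, and the sample values are made up:

```python
import numpy as np

def remove_outliers(x):
    """Drop samples outside 1.5*IQR of the quartiles (a common outlier rule)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]

def min_max_normalize(x):
    """Rescale values into [0, 1] so features are comparable across samples."""
    return (x - x.min()) / (x.max() - x.min())

raw = np.array([0.2, 0.4, 0.5, 0.3, 50.0])  # 50.0 is an obvious outlier
clean = remove_outliers(raw)                 # the outlier is dropped
scaled = min_max_normalize(clean)            # remaining values now span [0, 1]
```

Cleaning before scaling matters: a single extreme value like the 50.0 above would otherwise compress every legitimate sample into a tiny corner of the [0, 1] range.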

4. Evaluation phase

In this phase, we will evaluate the data through data visualization using the Python library Matplotlib, which will give us a clear view of any residuals in our model, and we will then deploy it for further evaluation.
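
A minimal sketch of that residual check with Matplotlib, assuming per-sample prediction confidences against binary ground truth (the sample numbers are invented; the Agg backend renders off-screen so the plot can also be produced headlessly, e.g. on the Raspberry Pi):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # render off-screen; no display needed
import matplotlib.pyplot as plt

# Hypothetical per-sample model confidences vs. ground truth (1 = correct class)
y_true = np.array([1, 1, 0, 1, 0, 1])
y_pred = np.array([0.9, 0.7, 0.2, 0.4, 0.1, 0.95])
residuals = y_true - y_pred      # large residuals flag samples the model gets wrong

plt.scatter(range(len(residuals)), residuals)
plt.axhline(0, linestyle="--")   # a perfect model would sit on this line
plt.xlabel("sample")
plt.ylabel("residual (truth - prediction)")
plt.savefig("residuals.png")
```

Samples far from the zero line (like sample 3 above) are the ones worth re-examining before deployment.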

Benefits of the Project

As the focus of our project is to develop a portable device for deaf/mute children, it will serve as a personal translator and an e-learning platform for them. The project will work bi-directionally in Urdu and English. Its future scope is to bridge the language barrier between Pakistan and other countries and to help autistic children learn, since the device can be used anywhere in the world; deaf/mute students can carry it with them.

Technical Details of Final Deliverable

Programming Languages

Python

PyCharm/Jupyter IDE

Hardware

Processors: Intel Atom® processor or Intel® Core™ i3 processor

Disk space: 1 GB

Operating systems: Windows 7 or later

Python versions: 2.7.x, 3.6.x

Microcontroller board

Raspberry Pi

Operating Systems

Windows

For Windows: Intel Core i5 @ 2.5 GHz

Databases

MySQL, SQLite

CPU: Intel Core or Xeon 3 GHz (or dual-core 2 GHz) or equivalent AMD CPU


Final Deliverable of the Project: Hardware System
Core Industry: IT
Other Industries: Education, Health
Core Technology: Artificial Intelligence (AI)
Other Technologies:
Sustainable Development Goals: Good Health and Well-Being for People; Reduced Inequality

Required Resources

Elapsed time since project start | Milestone | Deliverable
Month 1 | Planning and requirement gathering | Gathered requirements
Month 2 | Requirement analysis | Analyzed requirements
Month 3 | Designing | High-Level Design (HLD), Low-Level Design (LLD)
Month 4 | Implementation | Datasets
Month 5 | Implementation | Feeding sign language data to the device
Month 6 | Implementation | Conversion of generated text to speech and vice versa
Month 7 | Testing | Testing different types of voices and hand gestures to achieve maximum accuracy
Month 8 | Testing | Final manual and automatic testing
Month 9 | Deployment & maintenance | Evaluation of the data through data visualization, then deployment for further evaluation
