2025-06-28 16:24:57 - Adil Khan

Project Title

(SIGN LANGUAGE RECOGNITION) mobile application

Project Area of Specialization

Artificial Intelligence

Project Summary

Inability to speak is considered a true disability. People with this disability use different modes to
communicate with others; several methods are available for their communication, and one common method
is sign language [2]. Pakistan Sign Language (PSL) is one of the sign languages of the world, used by the
Pakistani deaf community. The major problem we observed is the communication barrier between hearing
people and the deaf community. Hearing people in our society are largely unaware of the signs/gestures of
PSL [2]. This communication barrier denies deaf people the basic right of communicating with those
around them. Our contribution in this regard is to reduce the communication barrier between the hearing
community and the deaf community. One solution is to design an automated system that enables two-way
communication: a hearing person records signs and the system converts them into text or voice, while a
deaf person records the voice of a hearing person and the system converts it into signs. These ideas can be
turned into a real-world application by using deep learning algorithms and artificial intelligence to
recognize the signs, voice, and text. We are hopeful that this effort will help in developing systems that
reduce the gap within the deaf and mute community, between the deaf and mute community and the
hearing community, and between the Pakistani deaf and mute community and deaf and mute communities
elsewhere.

Project Objectives

The objective of our project is to reduce the communication barrier between the hearing community and
the deaf and mute community, and in doing so introduce users to a new culture and community. The
project moves sign language from a purely theoretical form to a digitalized approach: it recognizes sign
language through hand gestures, which is very helpful to the community, and the approach could be
beneficial for classroom learning in the deaf community. Sign language is a visual language and consists
of three major components:
fingerspelling, word-level sign vocabulary, and non-manual features.

Project Implementation Method

A camera/webcam is used by this software to identify PSL (Pakistan Sign Language). First, we
provide descriptions of the proposed solutions to the real-world problem. The basic functionality of
the software is also covered in the descriptive models, along with the application's integration
essentials, performance, attributes, and the deployment boundaries within which it will be used.
In the definition section, all functional and visual conditions of the process are described in depth.
Details of the data objects, their attributes, and the overall data model are also given. In the behavior
model section, the relationships among the user, functions, and features are modeled. Finally, our
group's presentation, work plan, and process model are brought together.

Benefits of the Project

The application described in this document performs live detection of Pakistani Sign Language via a
camera or webcam. It is designed for hearing people and for those who cannot speak or hear, to make
their lives easier, as well as for government offices that should serve all citizens equally, private
companies that want to reach and serve deaf and mute people, and partnerships that aim to help
people with speech and hearing problems. The signer (later called the "user") stands in front of the
camera or webcam. To identify specific hand key points with MediaPipe, the user's hand region is
located inside a rectangular box on the camera feed. After the user's hand gestures are tracked, the
camera is ready for use, and the application outputs the detected hand gestures as text in front of the
camera. Our project takes the output from the camera or webcam and matches it against the
pre-defined gestures in the database that we load into the system. Our project aims to teach Pakistani
Sign Language and to detect gestures live, translating them into English text. Initially, the system
will detect 26 previously defined gestures; once the system has proven efficient enough, it will be
possible to add further gestures. The program will work on PC, laptop, and mobile phone
environments in particular: by connecting a camera to a PC, laptop, or mobile phone, the user is able
to use it.
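The matching step described above, comparing the gestures detected from the camera against the pre-defined gestures loaded into the system, could be sketched as a nearest-neighbour lookup over normalised hand-landmark coordinates. This is a minimal sketch under stated assumptions: the gesture templates, landmark count, and distance threshold below are illustrative placeholders, not the project's actual PSL data.

```python
import math

# Each template: label -> flat list of (x, y) landmark coordinates,
# normalised to the hand's bounding box (values in [0, 1]).
# Real PSL templates would come from the application's database;
# these tiny two-point "gestures" are placeholders for illustration.
GESTURE_DB = {
    "A": [0.1, 0.2, 0.8, 0.9],
    "B": [0.9, 0.1, 0.2, 0.8],
}

def euclidean(a, b):
    """Euclidean distance between two flat coordinate vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_gesture(landmarks, db, threshold=0.5):
    """Return the label of the closest stored gesture, or None if no
    template lies within the distance threshold."""
    best_label, best_dist = None, float("inf")
    for label, template in db.items():
        d = euclidean(landmarks, template)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None
```

For example, `match_gesture([0.12, 0.18, 0.79, 0.91], GESTURE_DB)` returns `"A"`, while a landmark vector far from every template returns `None`, so unknown hand shapes produce no text output.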

Technical Details of Final Deliverable

As shown in the methodology figure, when the user opens our application on her/his mobile phone, a
welcome screen opens first. After that, the mobile application offers two main modules/screens: one is
"learn signs" and the other is "live talk". In the learn-signs feature we teach the user through sign
images loaded from the backend database, and the learn-signs module is itself divided into
sub-modules/screens:
1) Learn alphabet signs:
On this screen, we will put alphabet signs such as a, b, c...
2) Learn number signs:
On this screen, we will put number sign images such as 1, 2, 3, 4...
3) Learn word signs:
On this screen, we will put some simple word sign images such as happy, hi, hello...
4) Learn basic sentence signs:
On this screen, we will put some simple sentence sign images such as "How are you?"
5) Live talk:
Through this feature, deaf and mute people and hearing people can talk and practise with each other.
Live talk module:
When the user opens this feature, the live camera starts, detects the user's hand gestures through the
camera using machine-learning hand-tracking algorithms, recognizes each gesture, and gives the
output as text.
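The live-talk flow described above (camera frame, then hand landmarks, then a recognised gesture, then text output) could be structured roughly as follows. This is only a sketch: the frame source and landmark detector here are stand-in stubs, where the real application would use the device camera and a hand-tracking model such as MediaPipe Hands, and the classifier would match against the gesture database.

```python
def live_talk(frames, detect_landmarks, classify):
    """Run the live-talk loop over a stream of frames, yielding the
    text label for each frame where a known gesture is recognised."""
    for frame in frames:
        landmarks = detect_landmarks(frame)   # hand-tracking step
        if landmarks is None:
            continue                          # no hand found in this frame
        label = classify(landmarks)           # match against the gesture DB
        if label is not None:
            yield label                       # text shown to the user

# --- stubbed example; real app: camera frames + a hand tracker ---
def fake_detector(frame):
    # Pretend the frame already carries extracted landmarks.
    return frame.get("landmarks")

def fake_classifier(landmarks):
    # Pretend classifier: hand shape name -> output letter.
    return {"fist": "A", "flat": "B"}.get(landmarks)

frames = [{"landmarks": "fist"}, {}, {"landmarks": "flat"}]
print(list(live_talk(frames, fake_detector, fake_classifier)))  # ['A', 'B']
```

Separating the loop from the detector and classifier in this way means the same flow works whether the frames come from a phone camera, a webcam, or a recorded test clip.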

Final Deliverable of the Project: Software System
Core Industry: IT
Other Industries:
Core Technology: Artificial Intelligence (AI)
Other Technologies:
Sustainable Development Goals: Quality Education

Required Resources
Item Name: License software, Cloud Server, hosting services
Type: Application Programming Interface / Equipment
No. of Units: 1
Per Unit Cost (in Rs): 38000
Total (in Rs): 38000

Total in (Rs): 38000
