HAND GESTURE SYSTEM FOR DISABLED PEOPLE
Project Area of Specialization: Artificial Intelligence

Project Summary

In this project we capture gestures from the human hand, in the form of sign language, from a patient and deliver them to an attendant as a message. The message is then cast to an LCD screen and displayed to the patient. Speech and gestures are the expressions most commonly used in human communication, and our work focuses on the visual data from a camera. A Pi camera attached to a Raspberry Pi processing platform is used for recognition. The identified gestures are transferred to the cloud and sent on through a software-based application, allowing the assistant or attendant to interact with the patient easily. The goal of sign language translation is to convert typical signs or gestures into messages that can be passed on to an assistant, allowing easier contact with deaf and mute individuals. The proposed technology is being developed to improve the quality of life of deaf and mute individuals.

The basic approach used in this work is image processing, with the Raspberry Pi as the processing platform. Processing combines basic image-processing techniques, such as blurring and masking, with programmed logic. The basic input to the processing system is a continuous real-time stream of visual data recorded by the Pi camera. We recognize hand gestures from disabled persons or patients using Human-Computer Interaction. The gesture data is connected to the cloud, and a software-based application is linked to the cloud; the application forwards the gesture information to the assistant and displays the assistant's answer on the screen.

In short, the goal of this project is to create a helpful tool that uses gesture recognition to bridge the communication gap between the disabled and the general public. With the developed system, hand gestures can be converted into text, and after contact with the helper the text can be shown to the patient. The proposed system's concept leaves room for future growth.
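The end-to-end flow described above (capture on the Raspberry Pi, recognition, cloud, application, LCD) can be summarized as one processing loop. The following is a minimal sketch in Python with OpenCV; `recognize_gesture`, `publish_to_cloud`, and `show_on_lcd` are hypothetical placeholders for the components described in this proposal, not existing code.

```python
# Minimal sketch of the capture -> recognize -> cloud -> display loop.
# recognize_gesture / publish_to_cloud / show_on_lcd are hypothetical
# placeholders for the components described in this proposal.
import cv2

def recognize_gesture(frame):
    """Placeholder: return a gesture label (or None) for one frame."""
    return None

def publish_to_cloud(label):
    """Placeholder: forward the recognized gesture to the cloud service."""
    print(f"published gesture: {label}")

def show_on_lcd(text):
    """Placeholder: display the attendant's reply on the patient's LCD."""
    print(f"LCD: {text}")

def main():
    cap = cv2.VideoCapture(0)                    # Pi camera / Kinect RGB stream
    try:
        while True:
            ok, frame = cap.read()               # continuous real-time stream
            if not ok:
                break
            label = recognize_gesture(frame)
            if label is not None:
                publish_to_cloud(label)          # gesture -> cloud -> application
                show_on_lcd(f"Sent: {label}")    # feedback for the patient
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```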
The objectives of the project are as follows.
• To reduce the communication gap between deaf patients and attendants using the advanced technology of artificial intelligence.
• To assign a preset, finite number of gesture classes to the provided hand gesture data, represented by specific attributes (a minimal classification sketch follows this list).
• To recognize static hand gesture images (i.e. frames) based on the shapes and orientations of the hand, extracted from the input image under simple background conditions.
• Hand gesture detection and linking to the cloud.
• Linking the cloud to a software-based application.
• The initial step in the implementation of the project will be the input of images, which will then be fragmented into frames.
• After the initial step, image processing will be done through image filtering.
• The next step will be gesture recognition through the Kinect sensor, with results saved to the cloud.
• After that, the corresponding image will be converted to text.
• Meanwhile, a notification will arrive in the mobile application and be shown on the screen.
• To train the model and improve its accuracy.
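As an illustration of the classification objective above (mapping a static hand image to one of a preset, finite set of gesture classes), the sketch below counts extended fingers from a binary hand mask using OpenCV contours and convexity defects and maps the count to a label. The class names, the angle threshold, and the assumption that a segmented mask is already available are illustrative only, not specifications of this project.

```python
# Illustrative sketch: classify a static hand gesture by counting extended
# fingers from a binary mask (white hand on black background).
# Class names and the angle threshold are assumed example values.
import math
import cv2
import numpy as np

GESTURE_CLASSES = {0: "fist", 1: "one", 2: "two", 3: "three", 4: "four", 5: "open palm"}

def classify_hand(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)          # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return GESTURE_CLASSES[0]
    try:
        defects = cv2.convexityDefects(hand, hull)
    except cv2.error:                                  # noisy contour, no clean hull
        return None
    if defects is None:
        return GESTURE_CLASSES[0]
    fingers = 0
    for s, e, f, _ in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(far - end)
        angle = math.acos((b**2 + c**2 - a**2) / (2 * b * c + 1e-6))
        if angle < math.pi / 2:                        # narrow defect = gap between fingers
            fingers += 1
    if fingers > 0:
        fingers += 1                                   # n gaps => n + 1 extended fingers
    return GESTURE_CLASSES.get(min(fingers, 5))
```

Given a segmented hand mask from the filtering step, `classify_hand(mask)` would return one of the preset labels, which can then be forwarded to the cloud and the Android application.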
Time between patients and attendants is managed effectively, whether the number of patients and attendants increases as needed or remains fixed, so that care is delivered within the available time frame.
A better response time from the attendant is attained. For instance, when multiple patients lying in their beds are connected to a single attendant, the attendant can allocate time according to each patient's need, as communicated through the application.
Because communication is based on sign language, this hand gesture recognition can be used in many settings. For instance, a tourist from abroad (e.g. China) who does not know English can use sign language to communicate. Sign language is primarily designed for disabled people who cannot hear or speak. On the software side, the project uses an Android application and Azure; on the hardware side, it uses a Kinect sensor, a Raspberry Pi, and any Android device.
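Since Azure is named as the cloud component, one possible way to push the recognized gesture text from the Raspberry Pi is the `azure-iot-device` Python SDK. The sketch below assumes that SDK and an IoT Hub device connection string; both the connection string and the message format are placeholders, not details fixed by this proposal.

```python
# Minimal sketch, assuming the azure-iot-device SDK is used to send the
# recognized gesture text from the Raspberry Pi to Azure IoT Hub.
# The connection string is a placeholder supplied by the deployment.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>;DeviceId=<pi>;SharedAccessKey=<key>"  # placeholder

def send_gesture(label: str) -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    try:
        client.connect()
        msg = Message(json.dumps({"gesture": label}))   # e.g. {"gesture": "open palm"}
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message(msg)                        # the mobile app reads this from the hub
    finally:
        client.shutdown()

if __name__ == "__main__":
    send_gesture("open palm")
```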
Multitasking is supported on the attendant's side: the attendant can attend to several patients at a time rather than sitting with one patient and continuously examining them.
• The gesture from the human hand is recognized by means of the Kinect sensor. The various hand gestures are recorded and captured through OpenCV using the Kinect sensor; at the end of the detection process, around 7 to 12 gesture patterns have been recorded.
• After capture, fragmentation of frames is performed on the converted Kinect stream: once a gesture is collected, it is split into multiple frames for processing.
• Image filtering is done by adjusting the colors of pixels in an image to change its appearance. Filters can be used to increase contrast and apply a range of special effects to the collected gestures; filtering here consists of smoothing, sharpening, and edge enhancement (a minimal sketch follows this list).
• The recognized gesture is cast to the Android application, and the message from the Android application is cast back to the patient by means of the LCD screen.
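A minimal sketch of the filtering stage described above, assuming OpenCV on the Raspberry Pi: smoothing with a Gaussian blur, a rough HSV skin-color mask, and a sharpening kernel for edge enhancement. The HSV bounds and kernel values are common illustrative choices, not parameters taken from this project.

```python
# Illustrative pre-processing for one captured frame: smoothing, a simple
# HSV skin-color mask, and sharpening / edge enhancement.
# The HSV bounds and kernel are assumed example values, not project constants.
import cv2
import numpy as np

def preprocess(frame):
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)                   # smoothing / noise reduction
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([0, 30, 60]), np.array([20, 150, 255])
    mask = cv2.inRange(hsv, lower, upper)                          # rough skin-color mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    sharpened = cv2.filter2D(blurred, -1, sharpen_kernel)          # edge enhancement
    return mask, sharpened
```

The returned `mask` is the binary hand image that a classifier such as the finger-counting sketch earlier in this document would consume.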
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Kinect Sensor | Equipment | 1 | 22000 | 22000 |
| Raspberry Pi 4 (4 GB) + micro-HDMI to HDMI converter | Equipment | 1 | 31000 | 31000 |
| 17-inch LCD + adapter power supply + USB converter for Kinect sensor + SanDisk Extreme 128 GB microSD card (100 MB/s) | Equipment | 1 | 17000 | 17000 |
| Miscellaneous | Miscellaneous | 1 | 5000 | 5000 |
| Total (in Rs) | | | | 75000 |