Adil Khan 9 months ago
AdiKhanOfficial #FYP Ideas


Project Title

PPT control using hand gestures and a vision-based mouse

Project Area of Specialization

Artificial Intelligence

Project Summary

Hand gesture recognition has become an important component of efficient human-machine interaction, and implementations based on it promise wide-ranging applications in the technology industry. MediaPipe, a machine-learning framework, plays an effective role in developing this hand gesture recognition application, with results showing an accuracy of 95%. We would like to extend the system further to collaborate with other devices and other human body parts, and to experiment with both static and dynamic hand gesture recognition.

This project describes a system that controls computer applications with hand gestures. The proposed method creates a hand gesture recognition system that recognizes which gesture the user performs and accurately executes the functionality associated with it. At present, the webcam, microphone, and mouse are integral parts of the computer system; our product, which uses only a webcam, would eliminate the mouse entirely. This would also lead to a new era of Human-Computer Interaction (HCI) in which no physical contact with the device is required.

Project Objectives

The basic objective is to develop a virtual mouse using hand gesture recognition and image processing that moves the mouse pointer according to hand gestures defined for the user's convenience, reducing hardware cost and freeing the user from a physical keyboard and mouse. The system should also work without an internet connection. The goal of static hand gesture recognition is to classify given hand gesture data, represented by some features, into a predefined finite number of gesture classes.
A further objective is to explore the utility of two feature extraction methods, namely hand contour and complex moments, for the hand gesture recognition problem, identifying the primary advantages and disadvantages of each method.
The proposed system presents a recognition algorithm for a set of seven specific static hand gestures, namely: Open, Close, Cut, Paste, Maximize, Minimize, and Edit. Each hand gesture image passes through three stages: pre-processing, feature extraction, and classification.
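As an illustration of the feature-extraction stage, the sketch below (a simplified, hypothetical example, not the project's actual code) computes raw image moments and the centroid of a small binary hand mask; complex-moment features build on the same quantities.

```python
def raw_moment(img, p, q):
    """Raw moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    return sum(
        (x ** p) * (y ** q) * val
        for y, row in enumerate(img)
        for x, val in enumerate(row)
    )

def centroid(img):
    """Centroid (x_bar, y_bar) of a binary mask from first-order moments."""
    m00 = raw_moment(img, 0, 0)
    if m00 == 0:
        return None
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# A tiny 3x3 binary "hand mask" with a single filled column
mask = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]
print(centroid(mask))  # centroid sits on the middle column: (1.0, 1.0)
```

Higher-order and complex moments are computed the same way, with different powers of the pixel coordinates.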

Project Implementation Method

To build this application, we have formulated a research methodology that explains the steps of this work from start to finish. The output of this work is an application that can detect the presenter's hands and recognize their patterns. These patterns determine what action the application performs during the presentation: for example, a hand shape showing the number two triggers the "next slide" action, and a hand shape showing the number three triggers the "previous slide" action.
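The mapping from a recognized finger count to a presentation action can be sketched as a simple dispatch table. The action names below are illustrative placeholders; a real implementation would send the corresponding keystrokes, e.g. via a library such as pyautogui.

```python
# Hypothetical mapping from detected finger count to a presentation action.
GESTURE_ACTIONS = {
    2: "next_slide",      # hand showing two fingers
    3: "previous_slide",  # hand showing three fingers
}

def handle_gesture(finger_count):
    """Return the action for a finger count, or None if the gesture is unmapped."""
    return GESTURE_ACTIONS.get(finger_count)

print(handle_gesture(2))  # next_slide
```

Keeping the mapping in one table makes it easy to let users redefine gestures for their own convenience, as the objectives describe.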
Phase-I
Capture
At this stage, the application captures the projection screen as the background of the object. The process at this stage simply shows the display captured by the web camera.
Phase-II
Transformation
In this stage, the projection screen captured by the web camera is transformed using the file that stores the coordinates taken at the calibration stage. Calibration establishes the coordinates of the projection screen as captured by the web camera, which are used as the boundary of the region that will be projected on the screen.
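Assuming the calibrated region is an axis-aligned rectangle (a simplification; a full implementation would use a perspective transform such as OpenCV's cv2.getPerspectiveTransform), the coordinate mapping from camera space to screen space can be sketched as:

```python
def camera_to_screen(point, cam_rect, screen_size):
    """Map a point from the calibrated camera rectangle to screen coordinates.

    cam_rect: (left, top, right, bottom) of the projection screen as seen
              by the web camera, taken from the calibration file.
    screen_size: (width, height) of the actual screen in pixels.
    """
    x, y = point
    left, top, right, bottom = cam_rect
    width, height = screen_size
    # Normalize into [0, 1] relative to the calibrated rectangle ...
    nx = (x - left) / (right - left)
    ny = (y - top) / (bottom - top)
    # ... then scale to screen pixels, clamping to stay on screen.
    sx = min(max(nx, 0.0), 1.0) * width
    sy = min(max(ny, 0.0), 1.0) * height
    return sx, sy

# A point at the center of the calibrated region maps to the screen center.
print(camera_to_screen((320, 240), (100, 60, 540, 420), (1920, 1080)))  # (960.0, 540.0)
```

The clamp keeps hand positions detected slightly outside the calibrated region from moving the pointer off-screen.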
Phase-III
Image Processing
At this stage, the display captured by the web camera is processed through several image-processing steps: cropping, grayscale conversion, saturation adjustment, and thresholding.
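The grayscale and threshold steps (in practice done with OpenCV routines such as cv2.cvtColor and cv2.threshold) can be illustrated on a tiny RGB pixel grid:

```python
def to_grayscale(img):
    """Convert an RGB pixel grid to grayscale using luminosity weights."""
    return [
        [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
        for row in img
    ]

def threshold(gray, t):
    """Binarize: pixels brighter than t become 1, the rest 0."""
    return [[1 if v > t else 0 for v in row] for row in gray]

rgb = [
    [(255, 255, 255), (0, 0, 0)],
    [(10, 10, 10), (200, 200, 200)],
]
binary = threshold(to_grayscale(rgb), 127)
print(binary)  # [[1, 0], [0, 1]]
```

The resulting binary mask is what the later hand detection and convex-hull stages operate on.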
Phase-IV
Hand Detection
Hand detection is the most important stage. At this stage, the hand captured by the web camera is recognized as the hand that will be processed further and used as input to operate the computer. This phase uses the Haar training algorithm, which requires positive and negative sample data for the training process in order to detect objects well.
Phase-V
Hand Recognition
After the hand detection stage, the next stage recognizes the pattern or shape of the hand. Hand pattern recognition is determined by the number of detected fingers, which is then used as input to the application. This phase uses the Convex Hull algorithm.
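In the project this stage would use OpenCV's cv2.convexHull together with convexity defects to count fingers; as a self-contained illustration of the underlying idea, a minimal convex hull (Andrew's monotone chain) looks like this:

```python
def convex_hull(points):
    """Andrew's monotone chain: return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Interior point (1, 1) is dropped; only the square's corners remain.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
```

Fingertips lie on the hand contour's hull, while the valleys between fingers show up as convexity defects; counting those defects gives the finger count used as input.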
Phase-VI
Event Handler
This is the final step in building the application. All of the prior processes are combined to produce the input used to operate the computer.

Benefits of the Project

Humans can communicate with and control the computer through body language. Institutes, universities, and offices can use this project for presentations. It does not need expensive or fragile hardware such as a full computer, mouse, or keyboard: the application runs on a Raspberry Pi and uses a web camera to capture gesture images.

Technical Details of Final Deliverable

MediaPipe Hands is a high-fidelity hand and finger tracking solution. It employs machine learning (ML) to infer 21 3D landmarks of a hand from just a single frame. Whereas current state-of-the-art approaches rely primarily on powerful
desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development
community will result in an emergence of creative use cases, stimulating new applications and new research avenues.

For computational processing we use a laptop or a Raspberry Pi (1 GB RAM, 500 MB minimum) with a web camera for hand recognition. On the software side we use MediaPipe, because it is the simplest way for researchers and developers to build world-class ML solutions and applications for mobile, edge, cloud, and the web.

For face recognition, we first perform face detection, extract face embeddings from each face using deep learning, train a face recognition model on the embeddings, and finally recognize faces in both images and video streams with OpenCV. Different hand poses can be identified using the 21 key points of the hand landmarks: first, determine whether each finger of the hand is open or closed.
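The per-finger open/closed test can be sketched from MediaPipe-style landmarks (21 points, index 0 at the wrist; tip/PIP index pairs 8/6, 12/10, 16/14, 20/18 for the four non-thumb fingers). This is a simplified assumption: it uses image coordinates where y grows downward, ignores the thumb, and assumes an upright hand.

```python
# MediaPipe hand landmark indices: (fingertip, PIP joint) for each finger.
FINGERS = {
    "index": (8, 6),
    "middle": (12, 10),
    "ring": (16, 14),
    "pinky": (20, 18),
}

def open_fingers(landmarks):
    """Given 21 (x, y) landmarks, return the names of fingers judged open.

    In image coordinates y grows downward, so an extended (open) finger's
    tip has a smaller y than its PIP joint.
    """
    return [
        name for name, (tip, pip) in FINGERS.items()
        if landmarks[tip][1] < landmarks[pip][1]
    ]

# Synthetic landmarks: all points at y = 0.5, then raise two fingertips.
pts = [(0.5, 0.5)] * 21
pts[8] = (0.5, 0.2)   # index tip raised  -> open
pts[12] = (0.5, 0.2)  # middle tip raised -> open
print(open_fingers(pts))  # ['index', 'middle']
```

The length of the returned list gives the finger count that drives the gesture-to-action mapping.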

PACKAGES USED
Google MediaPipe Hands
Python OpenCV
Python Tkinter

Final Deliverable of the Project

Software System

Core Industry

Education

Other Industries

Education , IT , Others

Core Technology

Artificial Intelligence(AI)

Other Technologies

Augmented & Virtual Reality

Sustainable Development Goals

Quality Education

Required Resources

Item Name                                                     | Type          | No. of Units | Per Unit Cost (in Rs) | Total (in Rs)
Thesis Report Books                                           | Miscellaneous | 3            | 2415                  | 7245
Project Prop, Presentation, SRS, SDS, other docs (Prints)     | Miscellaneous | 3            | 500                   | 1500
CD for software                                               | Miscellaneous | 2            | 100                   | 200
Total (in Rs)                                                 |               |              |                       | 8945
If you need this project, please contact me on contact@adikhanofficial.com