Hand Gesture Controlled Drone Using Machine Learning
Drones are conventionally controlled using joysticks, remote controllers, mobile applications, and embedded computers. A significant issue with these approaches is that drone control is limited by the range of electromagnetic radiation and is susceptible to interference noise. In this proposed project, hand gestures will be used as the method to control the drone.
Computer vision-based methods rely on the ability of a drone’s camera to capture surrounding images and use pattern recognition to translate images to meaningful and actionable information.

The proposed framework involves a few key parts leading to an ultimate action: segregating still images from the front camera's video stream, performing robust and reliable image recognition on the segregated images, and finally converting the classified gestures into actionable drone movements, such as take-off, landing, hovering, and so forth.


VIDEO STREAM IMAGE SEGREGATOR:
The video stream image segregator constantly records through the on-board camera of the drone and segments the stream into a sequence of still images.
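The segregation step can be sketched as chunking a raw stream into fixed-size still frames. This is a minimal pure-Python illustration; an actual implementation on the drone would more likely read frames with OpenCV's `cv2.VideoCapture`, and the frame size here is an arbitrary assumption.

```python
from typing import Iterator


def segregate_frames(stream: Iterator[bytes], frame_size: int) -> Iterator[bytes]:
    """Chunk a raw video byte stream into fixed-size still frames.

    Illustrative sketch only: a real segregator would decode actual
    video frames from the drone's on-board camera feed.
    """
    buffer = b""
    for chunk in stream:
        buffer += chunk
        # Emit a frame every time enough bytes have accumulated.
        while len(buffer) >= frame_size:
            yield buffer[:frame_size]
            buffer = buffer[frame_size:]


# Toy usage: chunks of unequal size are regrouped into 5-byte "frames".
frames = list(segregate_frames(iter([b"abcd", b"efgh", b"ij"]), 5))
```

Buffering inside the generator decouples the camera's chunk boundaries from the frame boundaries, which is the essential job of the segregator.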
HAND GESTURE RECOGNITION:
Each image is then analyzed through the hand gesture recognition process, which includes three main steps:
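The recognition stage can be sketched as a toy pipeline, assuming three stages of hand segmentation, feature extraction, and classification; the stage names, the threshold values, and the gesture labels below are all illustrative assumptions, not part of the proposal.

```python
# Assumed gesture "models": a single toy feature value per gesture.
THRESHOLDS = {"fist": 0.25, "open_palm": 0.6}


def segment_hand(image):
    # Keep only candidate hand pixels (here: brightness above 128).
    return [[1 if px > 128 else 0 for px in row] for row in image]


def extract_features(mask):
    # One toy feature: fraction of hand pixels in the frame.
    flat = [px for row in mask for px in row]
    return sum(flat) / len(flat)


def classify(feature):
    # Nearest-value lookup standing in for a trained ML classifier.
    return min(THRESHOLDS, key=lambda g: abs(THRESHOLDS[g] - feature))


# Toy 2x2 grayscale frame run through all three stages.
image = [[200, 50], [190, 30]]
gesture = classify(extract_features(segment_hand(image)))
```

A trained model (e.g. an SVM or a small CNN) would replace the nearest-value lookup, but the segment-extract-classify flow stays the same.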
COMMAND MAPPER:
A command mapper transforms the detected gesture into a command, such as a take-off, land, or back off.
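A minimal sketch of the command mapper is a lookup table from gesture label to drone command; the particular gesture names and command strings below are assumptions for illustration.

```python
# Hypothetical gesture-to-command table (labels are assumptions).
GESTURE_TO_COMMAND = {
    "open_palm": "TAKE_OFF",
    "fist": "LAND",
    "point_left": "MOVE_LEFT",
    "point_right": "MOVE_RIGHT",
    "thumbs_down": "MOVE_BACK",
}


def map_command(gesture: str) -> str:
    # Fall back to a safe HOVER command for unrecognized gestures,
    # so a misclassification never triggers an unintended maneuver.
    return GESTURE_TO_COMMAND.get(gesture, "HOVER")
```

The safe default matters in practice: recognition will occasionally fail, and hovering in place is the least harmful response.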
ACTION PLANNER:
An action planner takes the command as its input and computes the corresponding course of primitive actions to satisfy the command.
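The planner's expansion of a command into primitive actions can be sketched as below; the primitive names and their ordering are illustrative assumptions, as the actual primitives depend on the quadcopter's flight controller.

```python
def plan_actions(command: str) -> list:
    """Expand a high-level command into an ordered list of primitive
    actions (primitives here are hypothetical)."""
    plans = {
        "TAKE_OFF": ["arm_motors", "throttle_up", "stabilize", "hover"],
        "LAND": ["descend", "throttle_down", "disarm_motors"],
        "MOVE_BACK": ["pitch_backward", "stabilize", "hover"],
    }
    # Unknown commands default to hovering, mirroring the mapper's
    # safe fallback.
    return plans.get(command, ["hover"])
```

Keeping the plans declarative (a dict of sequences) makes it easy to retarget the same mapper output to a different flight controller.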
The goal of this proposed project is to achieve the highest possible accuracy for the hand-gesture-based control mechanism even on modest drones, whose lower camera resolution would otherwise leave them easily outperformed by state-of-the-art drones.
A set of experiments will be conducted to measure gesture recognition accuracies considering the major scene variability, illumination, background, and distance.
Classification accuracies of different machine learning classifiers will be analyzed for gesture recognition under well-lit conditions, against a clear background, and within 3 ft.
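The planned comparison can be sketched as scoring several candidate classifiers on the same labelled gesture set; the two classifiers and the tiny dataset below are trivial stand-ins (real candidates might be SVM, k-NN, or random forest models, e.g. from scikit-learn).

```python
def accuracy(classifier, samples):
    """Fraction of labelled samples the classifier gets right."""
    correct = sum(1 for x, label in samples if classifier(x) == label)
    return correct / len(samples)


# Hypothetical labelled feature values for two gestures.
samples = [(0.2, "fist"), (0.7, "open_palm"), (0.3, "fist"), (0.8, "open_palm")]

# Two toy classifiers to compare on the same data.
classifiers = {
    "threshold@0.5": lambda x: "open_palm" if x > 0.5 else "fist",
    "always_fist": lambda x: "fist",
}

scores = {name: accuracy(clf, samples) for name, clf in classifiers.items()}
```

Evaluating every classifier on one shared labelled set is what makes the accuracy numbers directly comparable across models and scene conditions.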
Limitations of framework and feasible solutions for better gesture recognition will be studied.
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Quadcopter (Opensource UAV) | Equipment | 1 | 35000 | 35000 |
| Raspberry Pi Module | Equipment | 3 | 6500 | 19500 |
| Raspberry Pi Casing | Equipment | 3 | 500 | 1500 |
| Raspberry Pi Power Adapter | Equipment | 3 | 600 | 1800 |
| OverHead Expenditure | Miscellaneous | 1 | 5000 | 5000 |
| Stationery | Miscellaneous | 1 | 3000 | 3000 |
| Raspberry Pi Compatible LCD | Equipment | 1 | 6000 | 6000 |
| Sensors Modules | Equipment | 3 | 1000 | 3000 |
| Raspberry Pi Compatible Camera | Equipment | 1 | 3000 | 3000 |
| **Total** | | | **(in Rs)** | **77800** |