Drone Location Finder
2025-06-28 16:26:52 - Adil Khan
Project Area of Specialization: Artificial Intelligence

Project Summary

Problem Statement:
Natural disasters now occur four times as often as they did in 1970, and estimates suggest such events could grow in frequency and ferocity with the effects of climate change. After a catastrophe, finding victims can be a difficult and risky process. In affected areas, internet and other coverage services are unreliable, and some places are too dangerous or inaccessible because of road blockages and numerous other obstacles. These conditions make it very difficult, and a strain on time and resources, for rescue teams to find the injured and provide them with aid and essential supplies. In such situations, human lives are at risk.
Solution:
Thanks to recent advances in technology, this race against time and fate can move expeditiously and save thousands of lives as a result; our vision is the right and proper use of that technology. We tackle this problem by developing an autonomous drone for search and rescue operations. An unmanned aerial vehicle (UAV) will fly over the affected area without GPS or internet services and use its camera to capture the area, decide which parts are most affected and need help first, create a map of the area, and mark the points of interest on that map. Once a target is found, a flight plan will be created for the drone to deliver essential supplies to the victims, which can save their lives, and their locations will be reported to rescue teams so they can be reached without wasting critical time.
Such help was desperately needed in the 2005 Kashmir earthquake and the 2010 floods. Given proper resources and channels, we can develop this drone within a year, and the project could evolve into a product offered to rescue teams as a tool for this race. It can provide a precise search and rescue solution for small to very large organizations.
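The survey step described above, flying over the affected area so the camera covers all of it, can be sketched as a simple boustrophedon ("lawnmower") waypoint generator. The rectangular area and camera swath width below are illustrative assumptions, not values from the project:

```python
# Sketch of a "lawnmower" survey pattern for covering a rectangular
# disaster area. Assumptions (not from the proposal): the area is given
# as width x height in metres, and the camera covers a swath_m-wide
# strip on each pass.

def survey_waypoints(width_m, height_m, swath_m):
    """Return a boustrophedon list of (x, y) waypoints covering the area."""
    waypoints = []
    y = swath_m / 2.0          # centre the first pass inside the area
    leg = 0
    while y <= height_m:
        if leg % 2 == 0:       # fly left-to-right
            waypoints += [(0.0, y), (width_m, y)]
        else:                  # fly right-to-left on the return leg
            waypoints += [(width_m, y), (0.0, y)]
        y += swath_m
        leg += 1
    return waypoints

if __name__ == "__main__":
    for wp in survey_waypoints(100.0, 40.0, 20.0):
        print(wp)
```

A 100 m x 40 m area with a 20 m swath yields two passes at y = 10 m and y = 30 m, flown in alternating directions so no time is wasted repositioning.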
Project Objectives

- This project aims to help rescue teams in disaster scenarios quickly assess the situation, environment, and human condition and take action accordingly, which will save lives.
- A map of the area will be created by flying a drone over it. This map will help analyze the situation and make decisions accordingly. Only a camera will be used to map the area and mark the targets' locations on it, saving time in identifying targets and providing essential supplies or first aid until the rescue team arrives.
- The overall objective is to provide a solution that lets rescue organizations perform search and rescue operations smartly and in a timely manner.
The first step is acquiring appropriate data sets to train our models. Image data sets are gathered from Microsoft COCO and ImageNet for our image-training model. For the environment, we gathered data from the internet: videos and images of previous catastrophes. We also plan to train our model on real-time video feeds of different environments and on simulations built with the Unity engine, which helps us create visual environments for training.
Once the data set is collected, the next step is to develop the modules and AI models that perform localization and mapping, object detection, and flight control. To create a map, the drone must know its position within the area at every instant. Photogrammetry and neural radiance field (NeRF) techniques are very useful here, both for building the map and for generating environments in which to train our models. In photogrammetry and NeRF, images are taken from different positions and camera angles and then processed into a virtual model of the location.
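The core geometric operation behind photogrammetry, recovering a 3-D point from its appearance in two images with known camera poses, can be sketched as a ray intersection. The camera positions, viewing directions, and the target point below are illustrative assumptions:

```python
# Minimal sketch of two-view triangulation via the ray-midpoint method:
# the same point seen from two known camera positions defines two viewing
# rays, and the point closest to both rays is its reconstructed location.
# All numeric values here are illustrative, not project data.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest-approach midpoint of rays c1 + s*d1 and c2 + t*d2."""
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    den = a * c - b * b          # approaches 0 when the rays are parallel
    s = (b * e - c * d) / den
    t = (a * e - b * d) / den
    p1 = [ci + s * di for ci, di in zip(c1, d1)]
    p2 = [ci + t * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

if __name__ == "__main__":
    # Point (1, 2, 10) seen from cameras at (0,0,0) and (1,0,0):
    X = triangulate_midpoint((0, 0, 0), (0.1, 0.2, 1.0),
                             (1, 0, 0), (0.0, 0.2, 1.0))
    print(X)   # close to [1.0, 2.0, 10.0]
```

A full photogrammetry pipeline repeats this over thousands of matched image features; libraries such as OpenCV provide production implementations.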
After successfully mapping the area, the next step is to localize the drone on the map. This is normally done with GPS positioning; however, in hilly areas GPS is unstable or unavailable, so computer-vision algorithms such as ORB-SLAM2, ORB-SLAM3, PySlam2, and DSO visual odometry will be used with the Robot Operating System (ROS). These algorithms provide localization together with autonomous flight control.
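How odometry output becomes a position on the map can be shown with a minimal dead-reckoning sketch: visual odometry and SLAM front-ends emit relative motion estimates per frame, and composing them yields the drone's pose. This 2-D toy model with made-up step values only illustrates the pose-composition idea, not any of the named algorithms:

```python
import math

# Dead-reckoning sketch: compose per-step relative motions
# (forward distance, heading change) into an absolute 2-D pose track.
# Step values are illustrative; real SLAM/VO also corrects drift
# with loop closures, which this toy model omits.

def integrate(start_pose, steps):
    """start_pose = (x, y, theta); steps = [(forward_m, dtheta_rad), ...]."""
    x, y, theta = start_pose
    track = [(x, y, theta)]
    for forward, dtheta in steps:
        theta += dtheta                 # apply the turn first
        x += forward * math.cos(theta)  # then move along the new heading
        y += forward * math.sin(theta)
        track.append((x, y, theta))
    return track

if __name__ == "__main__":
    # Move 1 m forward, then turn 90 degrees left and move 1 m again:
    for pose in integrate((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)]):
        print(pose)
```

The error of such open-loop integration grows with distance travelled, which is exactly why full SLAM systems such as ORB-SLAM add map-based corrections.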
The final task is to identify the targets using image-processing techniques, maintain a log of these detections, and send reports and images of the situation to the control room or a ground-station computer. For object detection we use the YOLO (You Only Look Once) deep-learning algorithm, which applies a single neural network to the full image: the network divides the image into regions and predicts bounding boxes and probabilities for each region, which is highly effective and accurate. We use transfer learning to adapt a pre-trained YOLO model to our specific needs.
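One concrete piece of the YOLO pipeline is its post-processing: the network emits many overlapping candidate boxes per target, and non-maximum suppression (NMS) keeps only the highest-scoring box in each cluster. A minimal pure-Python sketch, with an illustrative box format and threshold:

```python
# Greedy non-maximum suppression (NMS), the standard post-processing
# step for YOLO-style detectors. Boxes are (x1, y1, x2, y2, score);
# the 0.5 IoU threshold is a common but illustrative default.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, iou_threshold=0.5):
    """Keep the best-scoring box in each overlapping cluster."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box[:4], k[:4]) <= iou_threshold for k in kept):
            kept.append(box)
    return kept

if __name__ == "__main__":
    detections = [(0, 0, 10, 10, 0.9),    # person, high confidence
                  (1, 1, 10, 10, 0.8),    # duplicate of the same person
                  (20, 20, 30, 30, 0.7)]  # a second, separate person
    print(nms(detections))  # duplicate box is suppressed
```

After NMS, each surviving box corresponds to one detected victim and can be logged and marked on the map as a point of interest.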
Benefits of the Project

- This project aims to help rescue teams quickly assess the situation and take timely action, which can contribute to saving lives.
- It will help create a map of the area beforehand, which can assist rescue teams in taking precise action.
- It will help detect people in chaotic environments faster and more accurately.
- Drones are usually operated remotely, which still requires humans to reach within a certain perimeter of the tragedy. These drones will be fully autonomous, requiring no human interaction, and will not depend on other services such as the internet or GPS to perform.
- This will save time and resources.
Tools and Technologies

- Drone platform: a drone with programmable embedded chips, running ROS autopilot firmware or controlled through the default Software Development Kit (SDK). Python will be used to implement our programs, access the drone's hardware, control its flight, and deploy our machine-learning models.
- Simulation: the Unity game engine will be used to create a virtual environment and to train our models.
- Programming: Python, Java, C, and C++; image-processing libraries such as OpenCV; drone kits; MATLAB and Simulink; and simple augmented reality (AR)/image-processing techniques to calculate the search radius.
- Mapping: offline Google Maps, photogrammetry, NeRF, etc.
- Simultaneous localization and mapping (SLAM): ORB-SLAM2, ORB-SLAM3, PySlam2, DSO odometry, etc., chosen because they are open source and support ROS.
- Flight controller: Python scripting.
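The planned Python flight-control scripting might look like the sketch below. The `Drone` class is a hypothetical stand-in, not a real SDK or ROS interface; its method names (`takeoff`, `goto`, `land`) are assumptions that would be replaced by whichever drone kit the project settles on:

```python
# Hedged sketch of waypoint-based flight-control scripting.
# The Drone class is a hypothetical mock of a vendor SDK / ROS
# interface; only the scripting pattern is the point here.

class Drone:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)   # (x, y, altitude) in metres
        self.log = []                     # mission log for the report step

    def takeoff(self, altitude_m):
        self.position = (self.position[0], self.position[1], altitude_m)
        self.log.append(("takeoff", altitude_m))

    def goto(self, x, y):
        self.position = (x, y, self.position[2])
        self.log.append(("goto", x, y))

    def land(self):
        self.position = (self.position[0], self.position[1], 0.0)
        self.log.append(("land",))

def fly_plan(drone, waypoints, altitude_m=20.0):
    """Execute a simple plan: take off, visit each waypoint, land."""
    drone.takeoff(altitude_m)
    for x, y in waypoints:
        drone.goto(x, y)
    drone.land()
    return drone.log

if __name__ == "__main__":
    mission = fly_plan(Drone(), [(0.0, 10.0), (100.0, 10.0)])
    print(mission)
```

In the real system the waypoint list would come from the survey planner and the points of interest marked on the map, and the log would feed the reports sent to the control room.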
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| DJI Mavic Mini (UAV) | Equipment | 1 | 70,000 | 70,000 |
| Stationery and printing | Miscellaneous | 1 | 10,000 | 10,000 |
| **Total (in Rs)** | | | | **80,000** |