Drone Location Finder


2025-06-28 16:26:52 - Adil Khan

Project Title

Drone Location Finder

Project Area of Specialization

Artificial Intelligence

Project Summary

Problem Statement:

Natural disasters now occur roughly four times as often as they did in 1970, and estimates suggest such events could grow further in frequency and severity with the effects of climate change. After a catastrophe, finding victims can be a difficult and risky process. In affected areas, internet and other communication services are often unreliable, and some places are too dangerous or simply inaccessible because of blocked roads and other hazards. These conditions make it slow and resource-intensive for rescue teams to locate the injured and deliver aid and essential supplies, and human lives are at risk as a result.

Solution:

Thanks to recent advances in technology, this race against time can move far more quickly and save thousands of lives; our vision is the right and proper use of that technology. We tackle the problem by developing an autonomous drone for search and rescue operations. An unmanned aerial vehicle (UAV) will fly over the affected area without GPS or internet service, use its camera to capture imagery, decide which parts are most affected and need help first, build a map of the area, and mark the points of interest on that map. Once a target is found, a flight plan is generated so the drone can deliver essential supplies to the victims, and their locations are reported to rescue teams so they can be reached without wasting critical time.

Such help was desperately needed in the 2005 Kashmir earthquake and the 2010 floods. Given proper resources and channels, we can develop this drone within a year, and the project would evolve into a product offered to rescue teams as a tool for this race. The solution can be tailored for organizations of any size, from small teams to very large agencies, performing search and rescue operations.

Project Objectives

Project Implementation Method

The first step is acquiring appropriate data sets to train our models. Image data is gathered from "Microsoft COCO" and "ImageNet" for our image-recognition model; for the environment, we gathered videos and images of previous catastrophes from the internet. We also plan to train the model on real-time video feeds of different environments and on simulation, using a Unity simulator to create visual environments for training.

Once the data set is collected, the next step is to develop the modules and AI models that perform localization and mapping, object detection, and flight control. To build a map, the drone must know its position within the area at every instant. Photogrammetry and NeRF (neural radiance field) techniques are well suited to this goal and to training our models in these environments: images are taken from different positions and camera angles, then processed to produce a virtual model of the location.
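At the core of photogrammetry is triangulation: the same point observed from two camera positions pins down its 3D location. A minimal sketch of linear (DLT) two-view triangulation with NumPy, using made-up camera matrices and a synthetic point purely for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D pixel observations of the same point in each view
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point is the right singular vector for the
    # smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Demo: two cameras with identity intrinsics, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 3))  # recovers [0.5 0.2 4.0]
```

In a real pipeline, tools such as COLMAP run this idea at scale, estimating the camera poses themselves from feature matches across many images.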

After the area is successfully mapped, the next step is to localize the drone on the map. This is normally done through GPS, but in hilly areas GPS is unstable or unavailable, so computer vision algorithms such as ORB-SLAM2, ORB-SLAM3, PySlam2, and DSO (Direct Sparse Odometry) will be used with the Robot Operating System (ROS). These algorithms provide localization together with autonomous flight control.
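Visual odometry systems like the ones named above estimate frame-to-frame relative motion; the drone's world pose is then the running product of those relative transforms. A toy sketch of that pose chaining with 4x4 homogeneous matrices (the path and angles are invented for illustration):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z (yaw) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Relative motions as a VO front end would report them:
# fly 2 m forward while yawing 90 degrees, then 2 m forward again
# in the new heading -- an L-shaped path.
relative = [
    se3(rot_z(np.pi / 2), [2.0, 0.0, 0.0]),
    se3(np.eye(3), [2.0, 0.0, 0.0]),
]

pose = np.eye(4)  # world pose of the drone, starting at the origin
for T in relative:
    pose = pose @ T  # chain each relative estimate onto the running pose

print(np.round(pose[:3, 3], 3))  # ends at [2. 2. 0.]
```

Drift accumulates in exactly this product, which is why full SLAM systems add loop closure on top of raw odometry.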

The final task is to identify targets using image processing techniques, maintain a log of these detections, and send reports and images of the situation to the control room or an operator's computer. For object detection, we use the YOLO (You Only Look Once) deep-learning algorithm, which applies a single neural network to the full image: the network divides the image into regions and predicts bounding boxes and class probabilities for each region, which is highly effective and accurate. We use transfer learning to adapt YOLO to our specific needs.

Benefits of the Project

Technical Details of Final Deliverable

Final Deliverable of the Project: HW/SW integrated system
Core Industry: IT
Other Industries:
Core Technology: Artificial Intelligence (AI)
Other Technologies:
Sustainable Development Goals: Industry, Innovation and Infrastructure

Required Resources

Item Name                 Type           No. of Units   Per Unit Cost (Rs)   Total (Rs)
DJI Mavic Mini (UAV)      Equipment      1              70,000               70,000
Stationery and printing   Miscellaneous  1              10,000               10,000
Total (Rs)                                                                   80,000
