Autonomous 2D Mapping of Indoor Environment using Multiple LIDARs & ROS
With the rapid development of artificial intelligence and pattern recognition technology, intelligent robots have entered many areas of industrial automation and everyday life. One of the most important capabilities of an intelligent robot is autonomous movement, which depends on precise localization and reliable navigation. The standard technique for achieving this is Simultaneous Localization and Mapping (SLAM). SLAM comprises two parts, localization and mapping, where localization is an essential prerequisite for mapping. The SLAM problem can be solved by maintaining a probability density function over the robot state (pose and map), or its moments. Depending on whether the environment is given in advance, localization divides into localization in a known environment and localization in an unknown environment. The former is mainly concerned with positioning accuracy; the latter typically requires external sensors whose measurements, after processing, are used to estimate the robot's position.
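The idea of maintaining a probability density over the robot state can be illustrated with a minimal one-dimensional discrete Bayes filter. This is a toy sketch for intuition only, not the SLAM algorithm used in the project; the corridor layout, motion model, and sensor reliabilities below are illustrative assumptions.

```python
# Minimal 1D discrete Bayes filter: the robot lives on a 5-cell corridor.
# belief[i] = probability that the robot is in cell i.

def normalize(belief):
    s = sum(belief)
    return [b / s for b in belief]

def predict(belief, p_move=0.8):
    # Motion model: the robot tries to move one cell to the right;
    # it succeeds with probability p_move, otherwise it stays put.
    n = len(belief)
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[min(i + 1, n - 1)] += p_move * b
        new[i] += (1 - p_move) * b
    return new

def update(belief, world, z, p_hit=0.9):
    # Sensor model: z is the observed cell label ('door' or 'wall');
    # the sensor reads the true label with probability p_hit.
    new = [b * (p_hit if world[i] == z else 1 - p_hit)
           for i, b in enumerate(belief)]
    return normalize(new)

world = ['door', 'wall', 'door', 'wall', 'wall']
belief = [0.2] * 5                               # uniform prior
belief = update(belief, world, 'door')           # sense a door
belief = update(predict(belief), world, 'wall')  # move right, sense a wall
```

After one motion and two measurements the belief concentrates on the cells consistent with the door/wall sequence, which is exactly the "localization as density estimation" view described above.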
The approach we have adopted is both simple and novel. We use multiple autonomous SLAM robots to map the indoor environment, so as to cover the maximum area. Because the area to be mapped is entirely unknown, the robots are self-driven. To collect the mapping data, a LIDAR is mounted on each SLAM robot, and each robot connects to a common server over a wireless link within a ROS environment. Each robot sends its map data to the main server separately; once the server has received the data and built a map of each robot's area, ROS running on the server merges the individual maps and produces the final, complete map of the selected indoor area.
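Once the two robots' maps are expressed in a common frame, the merge step on the server can be sketched cell by cell. The sketch below assumes the ROS `nav_msgs/OccupancyGrid` cell convention (-1 unknown, 0 free, 100 occupied) and a simple rule: a known value overrides unknown, and an obstacle seen by either robot is kept. The actual ROS map-merging node may use a more sophisticated policy.

```python
# Sketch of merging two occupancy grids that share a common frame.
# Cell values follow the ROS nav_msgs/OccupancyGrid convention:
#   -1 = unknown, 0 = free, 100 = occupied.

def merge_cell(a, b):
    if a == -1:
        return b          # only robot B knows anything about this cell
    if b == -1:
        return a          # only robot A knows anything about this cell
    return max(a, b)      # if either robot saw an obstacle, keep it

def merge_maps(map_a, map_b):
    return [merge_cell(a, b) for a, b in zip(map_a, map_b)]

robot1 = [-1, -1, 0, 100, 0]
robot2 = [0, 100, 0, -1, -1]
merged = merge_maps(robot1, robot2)
```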
To make the vehicle autonomous, we use the ultrasonic sensors, or the range data coming from the LIDAR, to detect obstacles that must be avoided while moving; from this data we develop the control logic that moves the robot accordingly and autonomously.
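A minimal version of that control logic can be sketched as a threshold rule on front, left, and right range readings. The threshold value and the command names here are illustrative assumptions, not the project's final parameters.

```python
# Minimal obstacle-avoidance decision from range readings (centimetres).
# STOP_DISTANCE_CM and the command strings are illustrative assumptions.

STOP_DISTANCE_CM = 30

def decide(front_cm, left_cm, right_cm):
    if front_cm > STOP_DISTANCE_CM:
        return "forward"
    # Path blocked: turn toward the side with more clearance.
    return "turn_left" if left_cm >= right_cm else "turn_right"
```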
The main objectives of our project include:
To build each SLAM robot we use a mobile vehicle equipped with a Raspberry Pi 3B (a single-board computer) and a second controller that operates the motor driver and the robot's other supporting hardware. ROS is the main tool and plays a vital role in the completion of this project, since all of the LIDAR algorithms are executed in ROS, which is installed on each Raspberry Pi. The controller mounted to drive the motors is also connected to the Raspberry Pi. On the other side we have a PC on which ROS is also installed; this PC acts as the ROS master. All of the robots and the PC must be connected to a common network so that they can be linked.
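Linking the machines to the one ROS master is done by pointing every machine's ROS environment at the PC. A typical setup, using the standard `ROS_MASTER_URI` and `ROS_IP` variables (the IP addresses below are placeholders, not the project's actual addresses), looks like:

```shell
# On the PC running roscore (the ROS master); IPs are examples only.
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10

# On each Raspberry Pi: point at the PC's master, advertise own IP.
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.11   # the second robot would use e.g. 192.168.1.12
```

With these set, nodes on the Pis register with the master on the PC, and topics such as the LIDAR scan and map feed flow across the Wi-Fi network transparently.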
Once communication is established between them (discussed below), the PC sends control commands; the Raspberry Pi reads and processes them and issues the appropriate commands, through the mounted controller, to the motor driver (discussed below) that moves the vehicle. The Pi also takes range readings from the LIDAR, processes them, and sends the resulting map feed and live localization information to the PC. The map, and the vehicle's location within it, are visualized on the PC.
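On the Pi, the step from a received velocity command to motor-driver commands can be sketched with standard differential-drive kinematics: a (linear, angular) command splits into left and right wheel speeds. The wheel-separation value below is a placeholder, not a measured parameter of the vehicle.

```python
# Sketch: convert a (linear, angular) velocity command from the PC into
# left/right wheel speeds for a differential-drive base.
# WHEEL_SEPARATION_M is an assumed placeholder value.

WHEEL_SEPARATION_M = 0.20

def cmd_vel_to_wheels(linear, angular):
    # Standard differential-drive kinematics: the angular term speeds up
    # one wheel and slows down the other by half the track width.
    left = linear - angular * WHEEL_SEPARATION_M / 2
    right = linear + angular * WHEEL_SEPARATION_M / 2
    return left, right
```

The resulting wheel speeds would then be scaled to PWM duty cycles and forwarded to the motor-driver controller.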

The block diagram above shows the flow of operation of the whole project; note that all components are connected to a single network through a Wi-Fi connection.

Block Diagram of the Vehicle, showing the SLAM algorithm on each robot
Block Diagram of the complete project, showing the overall flow
To obtain ranges from the real environment, i.e. to determine where obstacles are present, we use a LIDAR sensor. It works on the time-of-flight principle: light pulses are transmitted and received in all directions, and the round-trip time of each pulse gives the range to the obstacle, which can then be calibrated and used. For motor driving, an Arduino is connected to the Raspberry Pi, and the ROS node for that Arduino runs on the Pi.
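The time-of-flight relationship itself is simple: the pulse travels to the obstacle and back, so the one-way range is half the round trip times the speed of light. A minimal sketch:

```python
# Time-of-flight ranging: a light pulse travels to the obstacle and back,
# so the one-way distance is half the round trip.

SPEED_OF_LIGHT_M_S = 299_792_458

def tof_range_m(round_trip_s):
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2

# A 20 ns round trip corresponds to roughly 3 m of range, which is why
# LIDAR electronics must resolve timing at the nanosecond scale.
```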
To drive the motors of the vehicle, an L298N H-bridge module is used, connected to the Arduino; the mounted ultrasonic sensors are also connected to the same Arduino.
To make the vehicle autonomous we use the LIDAR data, which tells us where obstacles are present; from this, the algorithm decides where the vehicle has to move in order to complete the desired task.
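One simple way such a decision can be made from a full 360° scan is to head toward the bearing with the most clearance. This is a sketch of the idea only; in practice the exploration behaviour would be handled by a ROS navigation node rather than this bare heuristic.

```python
# Pick a heading from a 360-degree LIDAR scan by choosing the bearing
# with the largest free range beyond a minimum clearance threshold.
# The 1-reading-per-degree layout and threshold are assumptions.

def best_heading(ranges_m, min_clearance_m=0.5):
    # ranges_m[i] is the measured range (metres) at bearing i degrees.
    best = max(range(len(ranges_m)), key=lambda i: ranges_m[i])
    if ranges_m[best] < min_clearance_m:
        return None        # boxed in: no safe direction to move
    return best            # chosen bearing, in degrees
```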