Simultaneous Localization and Mapping using Neural Network
In this technological era, automation and robotics are at the forefront of assisting human beings. Self-driving cars, also known as autonomous vehicles, are an emerging field: such a vehicle senses its environment and moves safely with little or no human effort. Many algorithms have been designed and implemented on autonomous vehicles to estimate the environment from a suite of sensors for movement and path planning.
Simultaneous Localization and Mapping (SLAM) is one of the algorithms used in this domain of automation. It navigates a vehicle through an unknown environment by building a map of that environment while simultaneously estimating the pose of the vehicle within the generated map. In short, SLAM performs navigation and robot mapping at the same time. SLAM is combined with other path-planning algorithms to give the vehicle fully autonomous movement toward its target location.
In this project, we design a 4WD omnidirectional wheeled robot that moves autonomously in an unknown environment using the SLAM algorithm while simultaneously producing a map of that environment and the estimated pose of the robot within it. The robot also handles moving objects in real time using the Vector Field Histogram (VFH) algorithm. Once the robot's Neural Network model is trained, the NN-based autonomous 4WD omnidirectional robot provides fully autonomous driving in previously unseen environments.
A driverless car is a car that operates autonomously whether or not a human is present. At the top level, this SLAM project has three major components: (1) mapping the environment and localizing the vehicle, together with universal obstacle avoidance; (2) omnidirectional control; and (3) training the neural network. Numerous algorithms exist for universal obstacle avoidance, e.g., the Vector Field Histogram (VFH) and the Obstacle-Dependent Gaussian Potential Field, and different SLAM algorithms, e.g., FastSLAM and DP-SLAM, are used for robot mapping and navigation. The real challenge is to integrate both algorithms, train the neural network, and provide continuous driving of the omnidirectional vehicle.
Environment Mapping & Localization of Robot
Environment mapping can be done manually or autonomously using different techniques and sensors. Manual mapping is relatively easy, as a user drives the vehicle to cover the unknown regions of the environment and build its map; in autonomous mapping, however, covering the unknown regions and detecting loop closures to build an accurate map for navigation is more difficult. A vehicle can be localized using the Global Positioning System (GPS) together with an Extended Kalman Filter (EKF), but in SLAM we estimate the pose of the vehicle without GPS.
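To make the EKF's role concrete, the following is a minimal sketch of its predict and update steps for a planar robot pose [x, y, θ], not our full SLAM pipeline. The unicycle motion model, the range-bearing measurement to a known landmark, and all noise matrices are illustrative assumptions.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, u, dt, Q):
    """Predict step: propagate pose x=[x, y, theta] with odometry u=(v, w)."""
    v, w = u
    th = x[2]
    # Unicycle motion model, forward-integrated over dt
    x_pred = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    x_pred[2] = wrap(x_pred[2])
    # Jacobian of the motion model w.r.t. the state
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Update step: fuse a range-bearing measurement z to a known landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - x[2])])
    # Jacobian of the range-bearing measurement model
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [ dy / q,          -dx / q,         -1]])
    y = z - z_hat
    y[1] = wrap(y[1])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P
```

In the full system, the same predict/update cycle runs with encoder odometry driving the prediction and laser measurements driving the correction.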
Robot Control using Omnidirectional Drive
There are two major kinds of movement: non-holonomic and holonomic. Non-holonomic movement, also called differential-drive movement, needs a turning or steering angle to rotate. If the unknown environment has very little free space, we need a movement scheme that requires zero turning radius to overcome this constraint. The next major problem is the continuous driving of the omnidirectional base: because we implement real-time obstacle avoidance, decision-making for the direction of movement may introduce delay, making it difficult to keep the robot moving continuously.
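A holonomic base can translate in any direction without first turning. As an illustration, here is an inverse-kinematics sketch for a four-wheel omnidirectional base in a mecanum-style X layout; the wheel radius and chassis dimensions are hypothetical placeholders, not measurements of our robot.

```python
import numpy as np

# Hypothetical geometry: wheel radius R, half-length L, half-width W (metres)
R, L, W = 0.05, 0.20, 0.15

def wheel_speeds(vx, vy, wz):
    """Map a body velocity (vx forward, vy left, wz yaw rate) to the four
    wheel angular speeds [rad/s], ordered FL, FR, RL, RR."""
    k = L + W
    return np.array([
        vx - vy - k * wz,  # front-left
        vx + vy + k * wz,  # front-right
        vx + vy - k * wz,  # rear-left
        vx - vy + k * wz,  # rear-right
    ]) / R
```

Commanding a pure sideways velocity (vx = 0, vy ≠ 0) yields opposite-signed wheel speeds on each side, which is exactly the zero-turning motion a differential drive cannot produce.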
Training of Neural Network
Training a Neural Network (NN) is not an easy task. There are two major types of NN: the Recurrent Neural Network (RNN) and the Feedforward Neural Network. The main difference is the feedback connections in the hidden layers of an RNN, which give it the ability to memorize previous inputs and use them along with the current input. We train our NN model using the back-propagation method, which requires a large amount of data to train the model well. We can acquire this data from the sensors in different environments and from online datasets as well.
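The back-propagation method itself can be shown in miniature. The sketch below trains a one-hidden-layer feedforward network on the XOR problem with hand-written gradients; the layer size, learning rate, and iteration count are illustrative choices, not the robot's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# One hidden layer of 8 sigmoid units (sizes chosen for illustration)
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight update
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
```

After training, the outputs approach the XOR targets [0, 1, 1, 0]; our real model follows the same loop, only with sensor-derived inputs and a larger architecture.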
In SLAM, a robot placed in an a priori unknown environment tries to build a map of that environment while simultaneously localizing its pose relative to this generated map, without needing GPS. The robot navigates the free space and handles any obstacle on its path using the Vector Field Histogram (VFH) algorithm. The robot performs view-based SLAM, using an Extended Kalman Filter to fuse data from wheel encoders and a laser range finder. Combining SLAM with a neural network optimizes the mapping process during localization. This project provides navigation for unmanned vehicles, which is useful for surveillance with unmanned aerial vehicles, autonomous vacuum cleaning, autonomous car parking, autonomous aerial mapping in dynamic environments, reef monitoring, path-finding in space, and underground exploration of mines.
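The core idea of VFH can be sketched compactly: reduce a laser scan to a polar obstacle-density histogram, then steer toward the open sector closest to the goal bearing. The sector count, maximum range, and density threshold below are hypothetical tuning values, and this omits the smoothing and valley-width checks of the full algorithm.

```python
import numpy as np

N_SECTORS, MAX_RANGE, THRESH = 36, 4.0, 0.5   # 10-degree sectors (illustrative)

def vfh_direction(angles, ranges, goal_angle):
    """angles/ranges: lidar scan (rad, m); goal_angle: bearing to target (rad).
    Returns the steering angle of the free sector nearest the goal,
    or None if every sector is blocked."""
    hist = np.zeros(N_SECTORS)
    for a, r in zip(angles, ranges):
        if r < MAX_RANGE:
            sector = int(((a + np.pi) / (2 * np.pi)) * N_SECTORS) % N_SECTORS
            hist[sector] += (MAX_RANGE - r) / MAX_RANGE   # nearer obstacles weigh more
    centers = -np.pi + (np.arange(N_SECTORS) + 0.5) * (2 * np.pi / N_SECTORS)
    free = hist < THRESH
    if not free.any():
        return None   # fully blocked: the caller should stop or replan
    # Pick the free sector whose centre is angularly closest to the goal
    diffs = np.abs(np.angle(np.exp(1j * (centers - goal_angle))))
    diffs[~free] = np.inf
    return centers[int(np.argmin(diffs))]
```

With an obstacle directly ahead, the returned direction swings to the nearest open sector; with a clear scan, it stays near the goal bearing.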
The Simultaneous Localization and Mapping (SLAM) algorithm is designed with a Neural Network (NN) to map any unknown environment while simultaneously providing the mobile robot's location in the generated map using different sensors. The project also provides universal obstacle avoidance using the Vector Field Histogram (VFH). We integrated these two algorithms, along with the Extended Kalman Filter (EKF) and the NN, to provide navigation and path planning in an unknown environment; the integration also improves navigation efficiency. The GUI for robot mapping and localization is provided by the Robot Operating System (ROS): the robot transfers its data over ROS nodes to a PC, where the map is generated and the robot's location is updated on that map. This map will be used for path planning of the unknown environment in future work. The back-propagation method is used to train the NN model, using virtual datasets from the internet and some real data from testing the Light Detection and Ranging (LIDAR) sensor.
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| LIDAR Sensor | Equipment | 1 | 13890 | 13890 |
| Motor Driver | Equipment | 1 | 8915 | 8915 |
| Raspberry Pi 4 | Equipment | 1 | 12323 | 12323 |
| Omni-Wheels 4 Inch | Equipment | 4 | 4875 | 19500 |
| LiPo Battery | Equipment | 1 | 10000 | 10000 |
| Motors | Equipment | 4 | 1300 | 5200 |
| Tax | Miscellaneous | 1 | 1475 | 1475 |
| Chassis Structure | Miscellaneous | 1 | 5000 | 5000 |
| Voltage & Current Regulators | Miscellaneous | 2 | 800 | 1600 |
| Wires | Miscellaneous | 1 | 300 | 300 |
| Battery Shipping | Miscellaneous | 1 | 720 | 720 |
| Chassis Welding | Miscellaneous | 1 | 900 | 900 |
| Total (in Rs) | | | | 79823 |