
2025-06-28 16:28:38 - Adil Khan

Project Title

Multiple Object Detection and Lane Following for Autonomous Vehicle

Project Area of Specialization

Electrical/Electronic Engineering

Project Summary

The world is changing due to the high influx of technology into everyday life; transportation is shifting into the world of artificial intelligence, where cars will be able to drive themselves. This project focuses on utilizing existing techniques and altering them to the needs of the proposed vehicle. The foundation of a self-driving car rests upon detecting its surroundings through object and lane detection. The idea is to use sensors along with a primary camera that feeds data about the surroundings to the microcontroller. The integrated hardware is programmed in Python using a convolutional neural network (CNN). The sensors gather data about the surroundings, including a live feed from the camera through which the track the car runs on is visualized. Image processing techniques convert these images and the live feed into data the controller can use, and the CNN is trained to the point where the car can decide at run time whether it should turn according to the situation.

The system consists of two parts. The first part detects road signs in real time. The second part classifies the German Traffic Sign Recognition Benchmark (GTSRB) dataset and makes predictions using the road signs detected in the first part to test the effectiveness. The convolutional neural network is based on the LeNet model, with some modifications in the classification part. The system obtains an accuracy of 99% in the detection part and 96.23% in the classification part. The prototype is able to detect lane lines and drive the car in a particular lane while following traffic rules and ensuring passenger safety. It also includes IR sensors to avoid sudden collisions, further increasing road safety. The self-driving capability is developed using computer vision and deep learning techniques.

Project Objectives

The idea for this project originates from witnessing accidents caused by the carelessness of drivers, which can be extremely harmful. In this project, the prototype self-driving car can detect multiple objects on the road, such as humans, animals, traffic signs, and traffic lights. It is also capable of determining road lanes and driving the car in a particular lane with minimal human input. The self-driving capability is developed using computer vision and deep learning techniques. Moreover, it will increase the safety of passengers: thousands of people die each year in road accidents caused by the carelessness of human drivers.

Project Implementation Method

Software Part

First, add the required libraries, set the paths of the training and testing datasets, and initialize the tunable parameters at the top so the model can easily be retrained with different values. Import all the classes present in the dataset, split the data into training, validation, and testing sets, and read the CSV file to assign a label and an ID to each class. Displaying some random images from the dataset is a useful sanity check.

Preprocessing then converts the images to grayscale and normalizes them so that pixel values lie between 0 and 1. The image data should be normalized so that the input has zero mean and equal variance; here a simple technique (image = image / 255) scales every pixel into the range 0 to 1. After normalization the contrast is clearer: black becomes blacker and white becomes whiter, and the model can limit its search to values between 0 and 1 instead of 0 to 255.

Data augmentation increases the size of the dataset by flipping, rotating, shifting, and shearing the images, producing multiple transformed copies of each one; in general, the larger the dataset, the higher the accuracy.

Layers are then added to the CNN model: the convolutional layer uses 60 filters of size 5x5, and the pooling layers are 2x2. ReLU and softmax activation functions are used; ReLU keeps only the positive activations, while softmax at the end of the network converts the outputs into class probabilities. The model is compiled with the Adam optimizer and saved in pickle (.p) format. Finally, loss and accuracy are plotted, and a confusion matrix compares the predicted classes against the actual ones.
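The layer configuration described above can be sketched in Keras. The 60 filters of size 5x5, the 2x2 pooling, the ReLU/softmax activations, and the Adam optimizer come from the text; the 32x32 grayscale input size, the widths of the deeper layers, and the 43-class output (the number of GTSRB classes) are illustrative assumptions:

```python
# Sketch of the modified LeNet-style classifier described above.
# Assumptions not specified in the text: 32x32 grayscale inputs,
# the deeper layer widths, and 43 output classes (GTSRB).
from tensorflow.keras import layers, models

def build_model(num_classes=43, input_shape=(32, 32, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # 60 filters of size 5x5; ReLU keeps only positive activations
        layers.Conv2D(60, (5, 5), activation="relu"),
        layers.Conv2D(60, (5, 5), activation="relu"),
        layers.MaxPooling2D((2, 2)),          # 2x2 pooling
        layers.Conv2D(30, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(500, activation="relu"),
        layers.Dropout(0.5),
        # softmax converts the final outputs into class probabilities
        layers.Dense(num_classes, activation="softmax"),
    ])
    # compiled with the Adam optimizer, as in the text
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The trained model can then be serialized (the text uses pickle's .p format) and reloaded at run time on the car.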
Color increases the complexity of the model, so the input images are converted to grayscale for training: a grayscale image is a single layer of values from 0 to 255, whereas an RGB image has three such layers. This is why grayscale images are preferred over RGB for training the model.
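The preprocessing described above (grayscale conversion followed by normalization) might look like the following NumPy sketch; the luminance weights are a common convention, not something specified in the project:

```python
import numpy as np

def preprocess(image):
    """Convert one RGB image (H, W, 3, uint8) to normalized grayscale."""
    # collapse the three RGB layers into one using standard luminance weights
    gray = image @ np.array([0.299, 0.587, 0.114])
    # simple normalization: every pixel value now lies between 0 and 1
    return gray / 255.0
```

Applying this to every training, validation, and test image keeps the network input at a single channel with values in [0, 1].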

Hardware Part

The structure is one of the main building blocks of the prototype self-driving car; an acrylic sheet is used to make its base. The motors are interfaced with a Raspberry Pi, and their speed is controlled through PWM. The car has to detect obstacles, both static and moving, in the path of its motion, so a technique is needed that can handle every type of obstacle in the car's surroundings. An infrared (IR) sensor, a digital sensor, is used to detect obstacles in front of the car and prevent collisions.
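On the car itself these signals go out through the Raspberry Pi's GPIO PWM pins to the H-bridge; stripped of the hardware calls, the decision logic might look like this hardware-free sketch (the function name and the 0-100 duty-cycle range are illustrative assumptions):

```python
def drive_command(speed_fraction, ir_obstacle):
    """Turn a requested speed (0.0-1.0) and a digital IR reading into
    a PWM duty cycle (0-100) for the H-bridge.

    ir_obstacle is True when the IR sensor reports an obstacle ahead.
    """
    if ir_obstacle:
        return 0.0  # stop immediately to prevent a collision
    # clamp the request, then scale it to a percentage duty cycle
    return max(0.0, min(1.0, speed_fraction)) * 100.0
```

The returned duty cycle would be fed to the PWM channel driving the motors; a raised IR line always overrides the requested speed.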

Benefits of the Project

Automation can help reduce the number of crashes on our roads. Government data identifies driver behavior or error as a factor in 94 percent of crashes, and self-driving vehicles can help reduce driver error. Higher levels of autonomy have the potential to reduce risky and dangerous driver behaviors.

• Reduction in traffic deaths
• Drop in harmful emissions
• Improvement in fuel economy
• Increase in lane capacity
• Reduction in travel time

Autonomous cars are also expected to solve the 'last mile problem' (whereby people struggle to travel the final mile between their home and the public transport drop-off point).

Technical Details of Final Deliverable

In this project, a simple base has been used to make the prototype self-driving car. The idea behind a self-driving car is to sense its surroundings and move accordingly. The project hardware was programmed in Python using a CNN based on LeNet: the data is classified into different classes, the input data is converted into a format suitable for the deep learning model, and LeNet is used to train it. DC encoder motors drive the car and are controlled through an H-bridge module, which receives PWM signals from the Raspberry Pi board. The Raspberry Pi in turn gets its signals from a computer running the computer vision algorithm that navigates the car. Driverless cars are the next step in transportation technology.

The algorithm used for sign detection detects with 99 percent accuracy; HSV color filtering is used in the detection part to find the road signs captured by the camera. The motor assembly receives control signals from the Raspberry Pi, which also controls a servo motor. The Raspberry Pi takes input data from the IR sensor and the webcam, processes it according to the required specifications, and generates the corresponding output signals. The infrared (IR) sensor is a digital sensor used to detect obstacles in front of the car and prevent collisions. The webcam gives a real-time feed to the Raspberry Pi; after preprocessing, features are extracted from the feed, the object detection and lane detection algorithms are applied, and the Raspberry Pi signals the motors to drive the car accordingly. The LeNet CNN model is used for object and traffic sign detection, and Canny edge detection is used to find the lane lines. While moving towards its destination, the prototype car also performs obstacle avoidance. The final prototype can detect multiple objects on the road, recognize traffic signs, and drive on the road using lane detection.
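Once Canny edge detection (typically followed by a Hough transform) yields the left and right lane-line positions at the bottom of the frame, the lane-following decision reduces to keeping the car centered between them. A minimal sketch, assuming the lane x-positions are given in pixels (the function and the normalization to [-1, 1] are illustrative, not taken from the project):

```python
def steering_offset(left_x, right_x, image_width):
    """Signed offset of the lane center from the image center,
    normalized to [-1, 1]; positive means the car should steer right."""
    lane_center = (left_x + right_x) / 2.0
    half_width = image_width / 2.0
    return (lane_center - half_width) / half_width
```

A proportional controller can then map this offset onto the left/right motor duty cycles, steering harder the further the car drifts from the lane center.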
With further analysis and hardware development, the project could become practical in real-life scenarios. Further improvements can be made by using a GPU-based processing system with high speed and a high-resolution HD camera. In addition, computer vision methods such as camera calibration and structure from motion can have a very significant impact on navigation precision.

Final Deliverable of the Project: HW/SW integrated system
Core Industry: Education
Core Technology: Artificial Intelligence (AI)
Sustainable Development Goals: Decent Work and Economic Growth; Industry, Innovation and Infrastructure; Sustainable Cities and Communities

Required Resources
Item Name                          | Type          | No. of Units | Per Unit Cost (in Rs) | Total (in Rs)
Raspberry Pi 4                     | Equipment     | 1            | 30000                 | 30000
DC Encoder Motor                   | Equipment     | 4            | 3000                  | 12000
H-bridge                           | Equipment     | 4            | 500                   | 2000
Hardware Structure                 | Equipment     | 1            | 2000                  | 2000
Camera                             | Equipment     | 1            | 2000                  | 2000
Male to Female Connectors pack     | Equipment     | 2            | 300                   | 600
Male to Male Connectors pack       | Equipment     | 2            | 300                   | 600
LiPo Battery                       | Equipment     | 1            | 8000                  | 8000
Cooling Fan                        | Equipment     | 1            | 1000                  | 1000
Miscellaneous Hardware components  | Miscellaneous | 1            | 10000                 | 10000

Total (in Rs): 68200
