TRAFFIC LIGHT AND SIGNS DETECTION AND CONTROLLING VEHICLES USING DEEP LEARNING
2025-06-28 16:36:26 - Adil Khan
Project Area of Specialization: Artificial Intelligence

Project Summary

In autonomous driving, traffic lights and signs provide essential information to the vehicle, such as directions and alerts. "Traffic light plays an important role in regulating traffic behavior, ensuring the safety of the road and guiding the smooth passage of vehicles and pedestrians". Traffic light detection is based primarily on the colors and shapes of the signal. Traffic signs define the rules and regulations of driving, and they have standard appearances (shapes, colors, and patterns) specified by strict regulations.
We use convolutional neural network (CNN) based object detection to detect signals and signs. A CNN can accurately predict the different 2D poses of signs, which are triangular, square, or circular. Traffic signals are commonly rectangular, oriented vertically or horizontally, which a CNN can detect, and their colors indicate the status of the signal. The algorithm will efficiently detect the signal, differentiate its colors in real time, process that information, and tell the vehicle what to do. Techniques that depend on prior maps are limited in regions without mapping data. Learning-based models can overcome many of the limitations of image processing-based models, since they can be trained on a broad set of traffic lights, including arrows and lights in various conditions. Learning-based models have also been shown to outperform image processing-based models in traffic light detection. In prior work, YOLO is used to identify traffic lights and a separate convolutional neural network (CNN) classifies the traffic light states, and YOLOv2 has been applied to the LISA dataset.
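As a minimal illustration of the state-classification step, the sketch below reads a detected light's color from mean channel intensities. `classify_light` is a hypothetical helper written only for this sketch; the project's actual state classifier will be a CNN, as described above.

```python
import numpy as np

def classify_light(crop):
    """Guess a traffic light state from a detected RGB crop.

    A crude channel-intensity heuristic, standing in for the CNN
    state classifier described in the text.
    """
    r = crop[..., 0].mean()
    g = crop[..., 1].mean()
    b = crop[..., 2].mean()
    # Yellow lights up both the red and green channels.
    if r > 2 * b and g > 2 * b and min(r, g) > 0.6 * max(r, g):
        return "yellow"
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    return "unknown"

# Demo on a synthetic all-red crop.
red_crop = np.zeros((10, 10, 3))
red_crop[..., 0] = 255
print(classify_light(red_crop))  # red
```

A learned classifier replaces exactly this function: given a cropped bounding box from the detector, it returns one of the three signal states.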
Project Objectives
- To locate traffic lights/signals accurately
- To locate traffic signs accurately
- To classify the three states (red, yellow, and green) presented by a traffic signal
- To classify traffic signs.
Hardware Implementation
The vehicle used in this research is built on a dummy car. First, a camera mounted on the dummy car continuously captures images of traffic signs and signals from the real world. These images are sent to the Raspberry Pi, which performs the car's control actions. The camera is attached to the Raspberry Pi and held steady. To view the detected signs, a monitor is connected to the system.
Why use a Raspberry Pi in a car? People today face many accidents in road transportation, losing lives and valuable property. To help avoid these problems, the system is designed around a Raspberry Pi. Digital image processing plays a significant role in sign capture and detection; image processing algorithms resize the captured signs. The Raspberry Pi camera port is used to capture road signs, with image-enhancement methods applied. The small embedded computing platform analyzes the characteristics of speed signs, performing shape analysis in daylight to recognize signs using edge detection algorithms. The objective of the proposed work is to implement the available traffic-sign method with the help of a Raspberry Pi 3 board.
The Raspberry Pi is a small single-board computer. Several models are available on the market, for example the Raspberry Pi 1 Model B, Raspberry Pi 1 Model B+, Raspberry Pi 2, and Raspberry Pi 3 Model B. They differ in memory capacity and hardware features; for instance, the Raspberry Pi 3 has built-in Bluetooth and Wi-Fi modules, which earlier versions lacked. It has a 1.2 GHz 64-bit quad-core ARMv8 CPU with 1 GB of RAM.
In the car, road sign recognition is one of the main parameters to be measured. We use OpenCV with the Raspberry Pi to recognize road signs such as the stop sign and the speed limit sign to make the car autonomous. The system searches for a sign; whenever it recognizes one, it stops searching, localizes the road sign, and displays the sign's message.
Software Implementation
Traffic Light Detectors: Traffic light detection algorithms fall into two fundamental categories: image processing-based models and learning-based models.
Traffic Sign Detectors: Traditional traffic sign detection strategies depend on color and shape information. Our methodology is the first to combine traffic sign and traffic light detection without a significant loss of performance in either of the global class categorizations. A few traffic signs we can include in this project are: the stop sign, narrow road sign, do-not-enter sign, speed limit sign, and hump sign.
Benefits of the Project

One of the key challenges for autonomous driving in urban areas is that road and highway traffic is tightly controlled by traffic signals and signs. To overcome this challenge, we must build a system for autonomous driving that can detect these traffic signals and signs and act on them, so that our autonomous car keeps its lane with the traffic in urban areas. A further problem is that the pictures taken for detection are relatively small and contain a degree of background clutter.
Technical Details of Final Deliverable

The methodology of our project will be executed in the following steps: collecting images from the camera, pre-processing the collected images, loading and classifying the images into the dataset, training the CNN to detect and recognize traffic light and sign images from the dataset, and finally validating the model on a validation dataset.
- Collecting Images: We will use a small camera mounted on the dummy car to take images of traffic signs and signals. The camera will have 16 megapixels and support at least 1920 x 1080 image resolution at a frame rate of 30 fps.
- Pre-Processing the Images: A CNN model cannot be trained on images of different dimensions. Training images must share the same dimensions, and our image database may contain various sizes. For this purpose, we will resize our images to a fixed dimension of 600 x 800 pixels. This transformation to a fixed size is done by interpolating the images, which we can perform using the OpenCV package in our Python code.
- Loading and Classifying: Now we will load the images into our dataset by classifying them into different classes based on the traffic signals and signs.
Our dataset will have at least 8 classes: three of them will relate to the traffic signal (i.e. green, yellow, and red) and five will relate to different traffic signboards. Each class will have a unique class id representing that class (e.g. 1 for red, 2 for yellow, and likewise for the others, as shown in the table below).
Now we will divide the dataset into three subsets for different purposes: training, testing, and validation. We will split them in a 60:20:20 ratio for the training, testing, and validation datasets respectively.
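The 60:20:20 split can be sketched as a shuffled index partition (a minimal NumPy version; real code might use `sklearn.model_selection.train_test_split` instead):

```python
import numpy as np

def split_indices(n_samples, seed=0):
    """Shuffle sample indices and split them 60:20:20."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_test = int(0.2 * n_samples)
    train = idx[:n_train]
    test = idx[n_train:n_train + n_test]
    val = idx[n_train + n_test:]
    return train, test, val

train, test, val = split_indices(1000)
print(len(train), len(test), len(val))  # 600 200 200
```

Fixing the seed makes the split reproducible across training runs.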
- Training the CNN: For training on the dataset, we will use CNN layers. With a CNN, our model will be computationally inexpensive and faster compared to ordinary neural networks, because a CNN assumes that its input will always be images.
A CNN has a sequence of layers in its architecture. Each layer takes an input volume and produces an output volume, with each layer performing a different function. The CNN architecture consists of convolutional layers, pooling layers, and fully connected layers.
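To see how these layers transform the 600 x 800 input, the helper below computes the standard output-size formula for convolution and pooling layers (the layer sizes here are illustrative, not the project's final architecture):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output length of a conv layer: (size - kernel + 2*pad) // stride + 1."""
    return (size - kernel + 2 * pad) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output length of a max-pooling layer."""
    return (size - kernel) // stride + 1

h, w = 600, 800
h, w = conv_out(h, 3, pad=1), conv_out(w, 3, pad=1)  # 3x3 conv, padding 1
print(h, w)  # 600 800 (padding 1 preserves the size)
h, w = pool_out(h), pool_out(w)                      # 2x2 max pool, stride 2
print(h, w)  # 300 400
```

Each pooling stage halves the spatial size, so the fully connected layers at the end see a much smaller volume than the raw 600 x 800 image.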
Final Deliverable of the Project: HW/SW integrated system
Core Industry: IT
Other Industries: Transportation
Core Technology: Artificial Intelligence (AI)
Other Technologies:
Sustainable Development Goals: Industry, Innovation and Infrastructure

Required Resources

| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Pixy Cam 2.0 | Equipment | 2 | 14000 | 28000 |
| Arduino UNO | Equipment | 2 | 1500 | 3000 |
| Motor Driver | Equipment | 2 | 400 | 800 |
| Vehicle | Equipment | 2 | 7000 | 14000 |
| Ultrasonic sensors | Equipment | 6 | 200 | 1200 |
| Battery | Equipment | 2 | 4500 | 9000 |
| Wires | Equipment | 80 | 10 | 800 |
| Motors | Equipment | 3 | 1000 | 3000 |
| Printing | Miscellaneous | 50 | 100 | 5000 |
| Panaflex | Miscellaneous | 3 | 1000 | 3000 |
| Stationery items | Equipment | 1 | 2000 | 2000 |
| Total (in Rs) | | | | 69800 |