Design & Development of RoboCar with AI based Traffic Sign Board Interpretation
2025-06-28 16:26:06 - Adil Khan
Project Area of Specialization: Artificial Intelligence

Project Summary
Artificial Intelligence can greatly increase the efficiency of the existing economy. But it may have an even larger impact by serving as a new general-purpose method of invention that can reshape the nature of the innovation process and the organization of R&D. We distinguish between automation-oriented applications and the potential for recent developments in "deep learning" to serve as a general-purpose method of invention. Deep learning has stimulated the development and deployment of autonomous vehicles (AVs) in the transportation industry. Fueled by big data from various sensing devices and advanced computing resources, AI has become an essential component of AVs for perceiving the surrounding environment and making appropriate decisions in motion. To achieve the goal of full automation (i.e., self-driving), it is important to understand how AI works in AV systems, and that is what we have worked toward.
The concept of this project is to design and develop a smart automobile built around AI-based traffic signboard interpretation. In our project, traffic sign recognition algorithms are used to detect traffic signs, warn a distracted driver, and prevent actions that could lead to an accident. Real-time automatic speed-sign detection and recognition can help the driver, significantly increasing his or her safety. So, to address concerns over road and transportation safety, an automatic traffic sign detection and recognition system is introduced here. It can detect and recognize traffic signs within images captured by the camera. In adverse traffic conditions, the driver may not notice traffic signs, which may cause accidents; in such scenarios, this system comes into rapid action. Thus, this research aims to develop an efficient system that can detect and classify traffic signs into different classes in a real-time environment.
To provide fast results, this project aims to demonstrate the use of a lightweight object detection model, the Single Shot MultiBox Detector (SSD) with a MobileNet backbone, on a single-board computer such as the Raspberry Pi. For the detection process, a dataset containing numerous images of each target sign is collected.
The system will be built on an acrylic sheet, and the speed-control motor package includes gear motors that provide acceleration and allow reduction from an initial high speed to a lower one without negatively affecting the mechanism. A microcontroller will interface between hardware and software, along with a motor driver, sensors, and a camera. The inspiration for the project is to build an auto-navigational car that can travel through known or pre-programmed coordinates independently, without any human control, offering societal benefits in terms of comfort and security while drawing on recent mechanical advances and the latest trend in vehicle automation.
Project Objectives
We understand the significance of driverless cars in today's world. Traffic sign detection and recognition have gained importance with advances in image processing because of the benefits such a system can provide. Recent developments and interest in self-driving cars have further increased interest in this field. An automated traffic sign detection and recognition system enables smart cars and smart driving. Even with a driver behind the wheel, the system can provide vital information, reducing the human errors that cause accidents. With such a system integrated into vehicles, the number of car accidents is expected to fall greatly, saving human lives and the monetary costs associated with crashes.
Keeping this in mind, we have come up with the idea of Designing and Developing a robotic car that can detect and identify traffic signs using Artificial Intelligence algorithms and respond accordingly.
Our core objective is to create a new generation of autonomous vehicles capable of assisting mankind in all activities that necessitate the ability to interact actively and safely with their environment and to develop a ground Robo-vehicle whose core technology can be adapted to assist passengers with minimal effort.
For the design and development, we must collect a good dataset; data preparation is a key step in any Artificial Intelligence project. We chose the Single Shot MultiBox Detector (SSD) as our object detection algorithm, within the TensorFlow framework, as it best fits our project. We have collected our own unique dataset to ensure the best accuracy.
Our current target is to train our model on the collected dataset by the end of March. After that, we will start interfacing our hardware (the RoboCar) with our AI model.
The hardware implementation involves a cost-effective design of the robotic car, the motor connections, and the choice of a suitable microcontroller, which acts as a portable computer and is the most significant part of interfacing the machine learning model with the hardware.
Our goal is an end product: a driverless robotic car that can detect traffic signs and act on them accordingly.
Project Implementation Method
For the implementation of the traffic sign recognition system, we worked with TensorFlow and the Single Shot MultiBox Detector (SSD). TensorFlow is a widely used platform with a number of tools that make it simple to build machine learning models, and in terms of training time, accuracy, loss, and computation time, SSD models outperform many other convolutional neural network (CNN) detectors.
In the training process, TensorFlow is used to train SSD MobileNet on a custom dataset of five different traffic signs. The dataset we collected contains the following traffic signs:
- Stop
- Turn Left
- Turn Right
- Speed limit
- Pedestrian crossing
The dataset will then be annotated manually using the image annotation tool LabelImg. This tool draws a bounding box (rectangle) around every annotated and labeled object in an image. The bounding boxes, together with their associated label names, are stored in an XML file (Pascal VOC format). This XML file also records the name and location of the image file as well as the coordinates and label names of every item in that image. This dataset will then be used to train SSD MobileNet.
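As an illustration of the annotation format, the Python sketch below parses a LabelImg-style (Pascal VOC) XML annotation and extracts the bounding boxes; the file name and box coordinates in the sample are hypothetical.

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC annotation, as produced by LabelImg.
# The file name and box coordinates are hypothetical examples.
SAMPLE_XML = """
<annotation>
    <filename>stop_001.jpg</filename>
    <size><width>300</width><height>300</height><depth>3</depth></size>
    <object>
        <name>Stop</name>
        <bndbox>
            <xmin>48</xmin><ymin>52</ymin><xmax>212</xmax><ymax>230</ymax>
        </bndbox>
    </object>
</annotation>
"""

def parse_voc(xml_text):
    """Return (filename, [(label, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label,) + coords)
    return filename, boxes

filename, boxes = parse_voc(SAMPLE_XML)
print(filename, boxes)  # stop_001.jpg [('Stop', 48, 52, 212, 230)]
```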
For our project, the generated training pipeline configuration file will define five object classes, a fixed input image size (for example, 300x300 pixels per image), and a batch size tuned to our hardware and dataset.
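A fragment of such a pipeline configuration might look as follows; the structure follows the TensorFlow Object Detection API's pipeline.config format, and the batch size shown is an illustrative value, not our final setting.

```
model {
  ssd {
    num_classes: 5
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
  }
}
train_config {
  batch_size: 24
}
```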
After training is complete, the trained model will be converted to a lite version using the TensorFlow Lite converter and can be used as a quantized model. This lite version will then be used to implement the traffic sign recognition system on the microcontroller. The TensorFlow Lite API will interact with our model by feeding it resized image frames received from the real-time camera; the model passes each image through the trained CNN and returns the set of recognized objects. After detection on one frame completes, the TensorFlow Lite API notifies the program that it is ready for the next frame. Each processed frame is discarded while the prediction runs on the next one, keeping the pipeline in step with the real-time camera feed. Once we are sure that the model is making correct predictions, we will use this information to guide our RoboCar to perform the corresponding task.
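The final step of turning a recognition into a driving command can be sketched as a simple lookup over the detector's output, shown below in plain Python. The action names and the 0.6 confidence threshold are illustrative assumptions, not the actual control code.

```python
# Map each recognized traffic-sign class to a RoboCar action.
# Action names and the 0.6 confidence threshold are illustrative assumptions.
SIGN_ACTIONS = {
    "Stop": "brake",
    "Turn Left": "steer_left",
    "Turn Right": "steer_right",
    "Speed limit": "limit_speed",
    "Pedestrian crossing": "slow_down",
}

def decide_action(detections, threshold=0.6):
    """Pick the action for the highest-confidence detection above threshold.

    `detections` is a list of (label, score) pairs as returned by the detector.
    """
    best = max(
        (d for d in detections if d[1] >= threshold),
        key=lambda d: d[1],
        default=None,
    )
    if best is None:
        return "continue"  # no confident detection: keep driving
    return SIGN_ACTIONS.get(best[0], "continue")

print(decide_action([("Stop", 0.91), ("Turn Left", 0.40)]))  # brake
print(decide_action([("Speed limit", 0.30)]))                # continue
```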
Simultaneously, our hardware will comprise a car model fitted with a camera, sensors, a microcontroller, and miscellaneous parts.
Benefits of the Project
As the world moves steadily toward a transportation system powered by self-driving cars, our smart vehicle may provide several benefits to society.
For monitoring the surroundings, the vehicle is equipped with sensors and a camera. The most common way of developing such a system is to combine a camera with computer vision technology, because a camera provides a great deal of information at low cost compared with other sensors. Among the most essential pieces of information from the camera, traffic road signs carry a wealth of information necessary for navigation.
Our smart vehicle has the potential to reduce automobile collision deaths and injuries in the future, particularly those caused by driver distraction. Because the automobile is driven by software, it can also be configured to cut emissions to the greatest extent feasible. First, the ability to continuously monitor surrounding traffic signboards and respond with finely tuned braking and acceleration changes should allow safe travel at higher speeds and with less headway between vehicles, averting more accidents. The technology can also enhance fuel efficiency, increasing it by an estimated 4-10 percent, by accelerating and decelerating more smoothly than a human driver. Additional benefits might be obtained by shortening the space between vehicles and boosting roadway capacity. Cars and trucks might become significantly lighter as the incidence of collisions decreases over time, which would further improve fuel economy. Many cities struggle to provide adequate infrastructure, such as suitable traffic signboards, a deficit that our AI car might help to address in part: the system can recognize poor-quality, illegible traffic signs and maneuver the car accordingly, functioning as a driver's assistant. Finally, many seniors and persons with impairments are currently unable to drive; these people can benefit from this technology.
In a nutshell, this project aims at numerous benefits. Some are listed below:
- Fewer accidents
- Less car theft as software-based vehicles can be tracked
- Usable by more people (e.g., the elderly, people with eyesight problems, those with disabilities)
- Reduced fuel and manufacturing cost
- Less environmental pollution
- Minimized human error
On your journey, you will be able to accomplish other things, such as work or play with your children, enhancing your productivity and enjoyment.
Technical Details of Final Deliverable
The concept of this project is to design and develop a smart automobile built around AI-based signboard interpretation. For the real-time implementation, the hardware will be built on an acrylic sheet with dimensions of 30 x 18 cm. The control motor package includes gear motors (7.2 V) that provide acceleration and allow reduction from an initial high speed to a lower one. The motors will be controlled through a microcontroller, with a buck converter between the microcontroller and the motors to step the supply voltage down to 7.2 V.
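As a rough sanity check on the 7.2 V setting, the adjustable LM2596's output is set by its feedback divider, Vout = Vref x (1 + R2/R1) with a nominal 1.23 V reference per the datasheet; the small Python sketch below computes the required divider ratio (the resistor values used are illustrative, not measured from our module).

```python
# LM2596 adjustable output: Vout = Vref * (1 + R2/R1), Vref ~ 1.23 V
# per the datasheet; resistor values below are illustrative.
V_REF = 1.23

def lm2596_vout(r1_ohms, r2_ohms):
    """Output voltage for a given feedback divider R1 (lower) / R2 (upper)."""
    return V_REF * (1 + r2_ohms / r1_ohms)

def ratio_for_vout(vout):
    """Required R2/R1 ratio to obtain a target output voltage."""
    return vout / V_REF - 1

print(round(ratio_for_vout(7.2), 2))  # ~4.85
```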
SPECIFICATIONS OF OUR DC MOTORS:
Product Name: Gear Motor
Rated Voltage: DC 7.2 V
Motor Body Size: 56 x 37 mm / 2.2" x 1.4" (L x D)
Weight: 206 g

SPECIFICATIONS OF BUCK CONVERTER USED:
Module: LM2596
Input voltage: 4.75-35 V
Output voltage: 1.25-26 V (adjustable)
Furthermore, a microcontroller will be used to control this hardware architecture. The Raspberry Pi, combined with a camera, motor driver, and sensors, will take input from the program and drive the motors, causing the car to move. A wheel encoder will provide a speed reading by counting the revolutions the slotted disk makes, using the reflection of infrared light emitted by the sensor.
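The encoder's speed reading reduces to counting slot pulses over a time window; a minimal sketch follows, where the wheel diameter and slots-per-revolution are assumed example values, not our final hardware figures.

```python
import math

# Assumed example values: a 65 mm wheel and a 20-slot encoder disk.
WHEEL_DIAMETER_M = 0.065
SLOTS_PER_REV = 20

def speed_from_pulses(pulse_count, interval_s):
    """Linear speed in m/s from encoder pulses counted over interval_s seconds."""
    revolutions = pulse_count / SLOTS_PER_REV
    distance_m = revolutions * math.pi * WHEEL_DIAMETER_M
    return distance_m / interval_s

# e.g. 100 pulses in 1 s -> 5 revolutions -> ~1.02 m/s
print(round(speed_from_pulses(100, 1.0), 2))
```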
Moving to software details, our project is based on artificial intelligence. AI is about teaching machines to mimic human behavior, such as the human brain and its decision-making abilities. Machine learning, in turn, provides statistical methods and algorithms that allow computers to learn automatically from prior experience and data, adjusting the program's behavior accordingly. We are employing the Single Shot Detector machine learning algorithm.
The framework we chose for our project is TensorFlow, an open-source end-to-end framework for building machine learning applications that provides excellent functionality and services compared with other popular deep learning frameworks. The model we are training on our dataset is the Single Shot Detector (SSD), with MobileNet as the base architecture, which is the optimum choice for our project. In essence, the MobileNet base network acts as a feature extractor for the SSD layers, which then classify our objects, i.e., traffic signs. SSD on MobileNet has the highest mAP among models targeted for real-time processing.
Final Deliverable of the Project: HW/SW integrated system
Core Industry: Transportation
Other Industries:
Core Technology: Artificial Intelligence (AI)
Other Technologies:
Sustainable Development Goals: Good Health and Well-Being for People; Industry, Innovation and Infrastructure; Sustainable Cities and Communities

Required Resources

| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Total (in Rs) | | | | 69110 |
| Raspberry Pi 4 (Microcontroller) | Equipment | 1 | 32000 | 32000 |
| L298N Motor Drivers | Equipment | 2 | 320 | 640 |
| 5mm Acrylic Sheet (30x18cm) | Equipment | 5 | 1500 | 7500 |
| Gear Motor with tire | Equipment | 4 | 1700 | 6800 |
| Raspberry Pi Camera Module | Equipment | 1 | 4500 | 4500 |
| LM2596S Buck Converter | Equipment | 1 | 520 | 520 |
| Lithium Ion Battery | Equipment | 4 | 250 | 1000 |
| Micro SD Memory Card (16 GB) | Equipment | 1 | 1000 | 1000 |
| 3.5in LCD for Raspberry Pi | Equipment | 1 | 2500 | 2500 |
| VL53L0X Time-of-Flight Sensor | Equipment | 1 | 1100 | 1100 |
| Resistor Sheet | Equipment | 1 | 400 | 400 |
| Wooden Stand | Equipment | 5 | 200 | 1000 |
| Report Printing | Miscellaneous | 6 | 1000 | 6000 |
| File | Miscellaneous | 2 | 50 | 100 |
| Prints | Miscellaneous | 1 | 600 | 600 |
| PCB Fabrication | Equipment | 1 | 1000 | 1000 |
| LCD 16x2 | Equipment | 1 | 250 | 250 |
| Nuts and Screws Pack | Equipment | 4 | 400 | 1600 |
| LED Strip | Equipment | 4 | 50 | 200 |
| Jumper wires | Equipment | 2 | 200 | 400 |