
Speed Bump Detection for Enhanced Driving Safety using Deep Learning
IEEE BASE PAPER TITLE:
Speed Bump Detection Model for Advanced Driver Assistance System
IEEE BASE PAPER ABSTRACT:
Since the invention of the automobile, whether powered by an internal combustion (IC) engine or an electric motor, driving on the road without causing accidents has long been a challenge for mankind. For autonomous vehicles, identifying obstacles and manoeuvring through traffic is a further challenge. Speed bumps are provided to reduce vehicle speed wherever essential. This paper discusses the detection of speed bumps using the YOLOv8 object detection framework as part of an Advanced Driver Assistance System (ADAS). The trained model achieved 77.03% precision, 74.92% recall, and an F1-score of 75.96. The trained model was deployed on a Raspberry Pi 4 Model B, and speed bump detection was verified.
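As a quick sanity check, the F1-score quoted in the abstract is the harmonic mean of precision and recall; a minimal computation reproduces the stated figure from the base paper's numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

# Base paper figures: 77.03% precision, 74.92% recall
print(round(f1_score(77.03, 74.92), 2))  # -> 75.96
```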
ALGORITHM / MODEL USED:
YOLOv11 Architecture.
OUR PROPOSED PROJECT ABSTRACT:
Road irregularities such as speed bumps play a crucial role in traffic calming, yet their inadequate visibility, especially under poor lighting, adverse weather, or on unfamiliar roads, can lead to vehicle damage and road accidents. With the growing adoption of intelligent transportation systems and driver-assistance technologies, there is a strong need for automated and reliable detection of speed bumps to enhance driving safety. This project addresses this need by proposing a deep learning-based speed bump detection system that leverages real-time computer vision techniques for accurate and timely identification.
The “Speed Bump Detection for Enhanced Driving Safety using Deep Learning” system is implemented using Python as the core programming language, with HTML, CSS, and JavaScript for the front-end interface and Flask as the web framework to integrate the detection engine with the user interaction modules. The proposed approach employs the YOLOv11 architecture for efficient object detection, enabling fast and precise localization of speed bumps in road scenes. The model is trained and evaluated on a custom dataset consisting of 1,927 training images, 217 validation images, and 96 testing images, ensuring robust learning and reliable generalization.
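The training and detection workflow described above can be sketched with the Ultralytics Python API. This is a minimal illustration, not the project's actual code: "speedbump.yaml" is a hypothetical dataset config pointing at the 1,927/217/96 train/val/test splits, and the hyperparameters are illustrative.

```python
# Sketch of the YOLOv11 workflow, assuming the Ultralytics package
# ("pip install ultralytics"); "speedbump.yaml" and the hyperparameters
# are assumptions for illustration, not the project's actual settings.

def train_speed_bump_model(data_cfg: str = "speedbump.yaml"):
    from ultralytics import YOLO      # deferred so the helper below stays standalone
    model = YOLO("yolo11n.pt")        # pretrained YOLOv11 nano weights
    model.train(data=data_cfg, epochs=100, imgsz=640)
    return model.val()                # reports precision, recall, mAP@0.5, mAP@0.5:0.95

def boxes_to_dicts(xyxy, confs):
    """Flatten detected boxes ([x1, y1, x2, y2]) and confidences into
    plain dicts that a web front-end can render."""
    return [
        {"x1": b[0], "y1": b[1], "x2": b[2], "y2": b[3], "conf": round(c, 2)}
        for b, c in zip(xyxy, confs)
    ]
```

A helper like `boxes_to_dicts` (a hypothetical name) is one simple way to hand detection results from the model to the Flask front-end as JSON.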
The system supports two operational modes: Image Mode and Live Webcam Mode. In Image Mode, users can upload road images, and the system detects and highlights the presence of speed bumps. In Live Webcam Mode, real-time video streams from a camera are processed continuously, allowing dynamic detection and on-screen visualization of speed bumps as the vehicle moves. This dual-mode functionality enhances usability across both offline analysis and real-time driving scenarios.
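The Live Webcam Mode described above amounts to a capture-detect-display loop. The sketch below assumes OpenCV (cv2) is installed; `process_frame` and `should_alert` are hypothetical adapters around the trained detector, not the project's actual code.

```python
def should_alert(confidences, threshold: float = 0.5) -> bool:
    """Raise an on-screen warning when any detection clears the threshold."""
    return any(c >= threshold for c in confidences)

def run_webcam_mode(process_frame, camera_index: int = 0):
    """Continuously read frames, annotate detections, and display them."""
    import cv2                        # deferred so should_alert stays standalone
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            annotated = process_frame(frame)          # draws speed-bump boxes
            cv2.imshow("Speed Bump Detection", annotated)
            if cv2.waitKey(1) & 0xFF == ord("q"):     # press 'q' to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```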
Experimental results demonstrate the effectiveness of the proposed system, achieving a precision of 81.52%, recall of 79.18%, mAP@0.5 of 85.96%, and mAP@0.5:0.95 of 50.22%, with an overall detection accuracy of approximately 85.9%. Comprehensive performance analysis is presented using precision–recall metrics and visualization graphs, confirming the model’s strong detection capability and suitability for enhanced driving safety applications.
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
- System : Intel Core i3 processor (or equivalent).
- Hard Disk : 20 GB.
- Monitor : 15" LED.
- Input Devices : Keyboard, Mouse.
- RAM : 8 GB.
- Camera : Web camera.
SOFTWARE REQUIREMENTS:
- Operating System : Windows 10 / 11.
- Coding Language : Python 3.12.0.
- Web Framework : Flask.
- Frontend : HTML, CSS, JavaScript.
REFERENCE:
K. Gopala Sharma and Nirmala Paramanandham, “Speed Bump Detection Model for Advanced Driver Assistance System,” IEEE Conference, 2025.
Frequently Asked Questions (FAQ’s) and Answers
Q1. What is the objective of this project?
The objective of this project is to automatically detect speed bumps from road images and live video streams using deep learning, thereby enhancing driving safety by providing early and accurate identification of speed bumps.

Q2. Why is speed bump detection important for driving safety?
Speed bumps are often poorly visible due to low lighting, weather conditions, or unfamiliar roads. Failure to detect them in advance can lead to vehicle damage or accidents. Automated detection helps drivers take timely action and improves road safety.

Q3. What technology is used for speed bump detection?
The system uses a deep learning-based object detection approach implemented with the YOLOv11 model, which detects and localizes speed bumps using bounding boxes in images and video frames.

Q4. What programming languages and tools are used in this project?
The backend is developed using Python, the frontend is built using HTML, CSS, and JavaScript, and the system is integrated using the Flask web framework. The deep learning model is trained and evaluated using standard Python libraries.

Q5. What dataset is used for training and testing?
The dataset consists of annotated road images containing speed bump and non-speed-bump scenarios. It includes 1,927 training images, 217 validation images, and 96 testing images.

Q6. What are the operational modes supported by the system?
The system supports two modes:
- Image Mode: Users upload an image, and the system detects speed bumps in the image.
- Live Webcam Mode: The system processes real-time video from a webcam and detects speed bumps continuously.

Q7. How does the system display detection results?
Detected speed bumps are highlighted using bounding boxes on the image or video frame. The results are displayed directly on the web interface along with confidence information.

Q8. What performance metrics are used to evaluate the model?
The system is evaluated using standard object detection metrics such as precision, recall, mean Average Precision at IoU 0.5 (mAP@0.5), and mean Average Precision at IoU 0.5-0.95 (mAP@0.5:0.95). Performance graphs are also provided for analysis.

Q9. What level of accuracy does the system achieve?
The system achieves high detection performance with strong precision and recall values, along with an overall detection accuracy of approximately 85.9%, indicating reliable speed bump detection.

Q10. Can this system work in real time?
Yes, the use of a single-stage detection model allows the system to perform real-time detection in live webcam mode, making it suitable for dynamic driving scenarios.

Q11. Is the system user-friendly?
Yes, the system provides a web-based interface that is easy to use. Users can upload images, activate live detection, and view results and performance metrics without requiring technical expertise.

Q12. Does the system require special hardware?
No special hardware is required. The system can run on a standard computer with a webcam for live detection, making it cost-effective and easy to deploy.



