Multi-Sensor Data Simulation and Object Detection: Integrating Cameras, LiDAR, Radar, and Depth Estimation for Enhanced 3D Analysis
Abstract
The integration of data from cameras, Light Detection and Ranging (LiDAR) sensors, and radars provides a highly robust mechanism for detecting and tracking objects in autonomous systems. Each sensor offers unique advantages: cameras provide rich visual data, LiDAR delivers accurate depth information, and radar remains effective under adverse weather conditions. This project combines these data sources through multi-sensor fusion to achieve superior object detection and distance estimation. Using a YOLO-based object detection model alongside stereo vision for depth estimation, the system simulates multi-sensor data and offers real-time 3D visualization. The approach significantly improves detection accuracy and spatial interpretation over single-sensor methods, paving the way for safer and more efficient autonomous vehicles and robotic systems.
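To make the core idea concrete, the following is a minimal Python sketch of the detection-plus-depth pipeline the abstract describes: a YOLO model supplies 2D bounding boxes while a stereo disparity map supplies per-object distance via the pinhole relation Z = f·B/d. This is not the paper's implementation; the model file (yolov8n.pt), focal length, baseline, and image paths are illustrative assumptions.

```python
# Sketch: combine YOLO 2D detections with stereo depth estimation.
# Assumed inputs: a rectified stereo pair and calibrated camera parameters.
import cv2
import numpy as np
from ultralytics import YOLO  # assumes the ultralytics package is installed

FOCAL_PX = 700.0   # assumed focal length in pixels (from calibration)
BASELINE_M = 0.12  # assumed stereo baseline in meters

left = cv2.imread("left.png")    # hypothetical rectified stereo pair
right = cv2.imread("right.png")

# Dense disparity via OpenCV's semi-global block matching.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
disp = stereo.compute(
    cv2.cvtColor(left, cv2.COLOR_BGR2GRAY),
    cv2.cvtColor(right, cv2.COLOR_BGR2GRAY),
).astype(np.float32) / 16.0  # SGBM returns fixed-point disparity (scaled by 16)

# 2D detections from a pretrained YOLO model (yolov8n.pt is an assumption).
detections = YOLO("yolov8n.pt")(left)[0]

for box in detections.boxes.xyxy.cpu().numpy():
    x1, y1, x2, y2 = box.astype(int)
    patch = disp[y1:y2, x1:x2]
    valid = patch[patch > 0]
    if valid.size == 0:
        continue
    # Median disparity inside the box is robust to background pixels.
    d = float(np.median(valid))
    depth_m = FOCAL_PX * BASELINE_M / d  # pinhole stereo: Z = f * B / d
    print(f"object at ({x1},{y1},{x2},{y2}) ~ {depth_m:.2f} m away")
```

Taking the median disparity inside each box, rather than the value at the box center, keeps a single occluded or background pixel from corrupting the distance estimate.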
 