From RGB to Reliable Road Maps: Pseudo-LiDAR Enhanced Domain Adaptive Detection
Abstract
Camera-LiDAR fusion has shown great promise in advancing road detection for autonomous driving, combining the semantic richness of RGB imagery with the depth accuracy of LiDAR. However, its practical deployment faces two main challenges: (1) the high cost and limited availability of LiDAR sensors hinder large-scale adoption, and (2) the scarcity of labeled data in unseen target domains limits the generalization of supervised methods. To overcome these limitations, we propose SPADA-Road, an unsupervised framework that integrates Superpixel-guided segmentation with a novel Pseudo-LiDAR (PL) generation module. The PL module synthesizes depth cues from monocular RGB inputs, augmenting training data without requiring a physical LiDAR sensor. To enable robust cross-domain generalization, we adopt an adversarial domain adaptation strategy that aligns feature distributions between the labeled source domain and the unlabeled target domain. We train SPADA-Road on the KITTI Road dataset and validate it on two LiDAR-free benchmarks, Cityscapes and CamVid. Extensive experiments show that our method outperforms several state-of-the-art baselines, highlighting its effectiveness in LiDAR-free and label-scarce scenarios.
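The abstract does not spell out how the PL module is implemented; a common construction in the Pseudo-LiDAR literature is to run a monocular depth estimator and back-project each pixel into a 3D point cloud via the pinhole camera model. The sketch below shows that back-projection step only; the function name and the assumption that a depth map is already available are ours, not the paper's.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, in metres) into an
    N x 3 pseudo-LiDAR point cloud using the pinhole camera model.
    (fx, fy) are focal lengths and (cx, cy) the principal point,
    all in pixels. The depth map would come from a monocular depth
    network; this sketch only covers the geometric lifting step."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    z = depth
    x = (u - cx) * z / fx  # camera-frame X (rightward)
    y = (v - cy) * z / fy  # camera-frame Y (downward)
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid zero-depth pixels
```

The resulting point cloud can then be fed to the same fusion pipeline that a physical LiDAR would serve, which is what lets the method train without LiDAR hardware.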
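Likewise, the abstract names adversarial feature alignment but not its exact form. One standard realization (DANN-style gradient reversal, which we assume here purely for illustration) trains a domain discriminator on pooled features while reversing its gradient into the feature extractor. A minimal PyTorch sketch, with all class names hypothetical:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales the gradient by -lambda
    on the backward pass, pushing the feature extractor toward
    domain-invariant features while the discriminator tries to
    tell source from target."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient for lam

class DomainDiscriminator(nn.Module):
    """Small MLP predicting the domain (source vs. target) from
    pooled segmentation features; a stand-in for whatever
    discriminator SPADA-Road actually uses."""
    def __init__(self, in_dim, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, feats):
        feats = GradReverse.apply(feats, self.lam)
        return self.net(feats)  # logits for nn.BCEWithLogitsLoss
```

Training then alternates source batches (with road labels) and unlabeled target batches, adding the discriminator's binary cross-entropy loss so that aligned features transfer to Cityscapes and CamVid without target annotations.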