Abstract
Autonomous driving (AD) perception integrates images from cameras mounted at various positions to understand the surrounding environment. To perceive these surroundings accurately, both the precise pose of each camera and the exact alignment between cameras must be known. Traditional online calibration methods are inadequate for AD perception because they either overlook the alignment between cameras with different fields of view (FoVs) or consider alignment only among cameras that share the same FoV. This article introduces a spatiotemporal calibration method that analyzes both the spatial and temporal information of the cameras to estimate the poses of all cameras and their interrelationships, without any restriction on camera mounting poses or FoVs. Temporal and spatial data are used separately to estimate camera poses, and the outcomes are merged to determine optimized camera poses for seamless multicamera fusion (MCF). To assess the effectiveness of the proposed method, we compared it with an existing method that uses a specialized calibration facility and found that our results closely match those of the facility. Moreover, real-world driving tests show that our method surpasses existing methods that rely on such a facility.
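The abstract describes the fusion step only at a high level: pose estimates obtained independently from temporal data and from spatial data are merged into one optimized extrinsic per camera pair. Below is a minimal sketch of what such a merge could look like, assuming a simple weighted average on SE(3); the function name `fuse_relative_pose`, the equal default weights, and the synthetic poses are illustrative assumptions, not details taken from the paper.

```python
"""Illustrative sketch (not the authors' implementation) of fusing two
independent estimates of a camera-to-camera extrinsic: one from a
temporal pipeline (e.g., per-camera odometry) and one from a spatial
pipeline (e.g., feature matches in overlapping FoVs)."""
import numpy as np
from scipy.spatial.transform import Rotation as R


def fuse_relative_pose(R_temporal, t_temporal, R_spatial, t_spatial,
                       w_temporal=0.5, w_spatial=0.5):
    """Fuse two SE(3) estimates of the same extrinsic.

    Rotations are averaged on the rotation manifold via a weighted
    mean; translations are averaged in Euclidean space. The weights
    are placeholders for whatever confidence measure each pipeline
    provides.
    """
    rotations = R.from_matrix(np.stack([R_temporal, R_spatial]))
    R_fused = rotations.mean(weights=[w_temporal, w_spatial]).as_matrix()
    t_fused = (w_temporal * t_temporal + w_spatial * t_spatial) / (
        w_temporal + w_spatial)
    return R_fused, t_fused


if __name__ == "__main__":
    # Two noisy estimates of the same extrinsic (synthetic values).
    R_t = R.from_euler("xyz", [0.0, 10.0, 0.0], degrees=True).as_matrix()
    R_s = R.from_euler("xyz", [0.5, 10.5, -0.3], degrees=True).as_matrix()
    t_t = np.array([1.00, 0.00, 0.20])
    t_s = np.array([1.02, -0.01, 0.18])

    R_f, t_f = fuse_relative_pose(R_t, t_t, R_s, t_s)
    print("fused rotation (deg):",
          R.from_matrix(R_f).as_euler("xyz", degrees=True))
    print("fused translation:", t_f)
```

Averaging rotations with SciPy's `Rotation.mean` keeps the result on the rotation manifold; averaging rotation matrices element-wise would generally not yield a valid rotation. A full pose-graph formulation, as the paper's keywords suggest, would instead treat each estimate as an edge constraint and solve for all camera poses jointly.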
| Original language | English |
| --- | --- |
| Pages (from-to) | 7227-7241 |
| Number of pages | 15 |
| Journal | IEEE Sensors Journal |
| Volume | 25 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Autonomous driving (AD)
- multicamera calibration
- pose estimation
- pose graph
- spatiotemporal calibration