Spatiotemporal Calibration for Autonomous Driving Multicamera Perception

Jung Hyun Lee, Taek Hyun Ko, Dong Wook Lee

Research output: Contribution to journal › Article › peer-review

Abstract

Autonomous driving (AD) perception technology integrates images from variously positioned cameras to comprehend the surrounding environment. To accurately perceive these surroundings, it is essential to know both the precise pose of each camera and their exact alignment. Traditional online calibration methods are inadequate for AD perception because they either overlook the alignment between cameras with different fields of view (FoVs) or only consider alignment among cameras with the same FoV. This article introduces a spatiotemporal calibration method that analyzes both spatial and temporal information of cameras to estimate the poses of all cameras and their interrelationships without any restrictions on the camera mounting poses and FoVs. Temporal and spatial data are used separately to estimate camera poses, and the outcomes are merged to determine the optimized camera positions for seamless multicamera fusion (MCF). To assess the effectiveness of our proposed method, we compared it with an existing method using a specialized calibration facility and found that our results closely match those of the facility. Moreover, real-world driving tests show that our method surpasses existing methods that rely on a specialized calibration facility.
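The abstract describes the approach only at a high level: per-camera poses are estimated separately from temporal cues (e.g., each camera's motion over time) and spatial cues (e.g., relations between cameras), and the two outcomes are then merged. As a rough, illustrative sketch only (not the authors' actual optimization, whose details are not given here), the snippet below shows one way such a merge step could look for a single camera: a weighted fusion of two SE(3) estimates, averaging quaternions for rotation and taking a weighted mean for translation. All function and variable names are hypothetical.

    # Illustrative sketch only: the paper's actual method is not specified here.
    # Hypothetical fusion of two independent pose estimates of one camera --
    # one from temporal cues, one from spatial cues -- into a single pose.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def fuse_poses(R_a, t_a, R_b, t_b, w_a=0.5, w_b=0.5):
        """Blend two rotation/translation estimates of the same camera pose.

        R_a, R_b : 3x3 rotation matrices; t_a, t_b : 3-vectors.
        w_a, w_b : scalar weights (e.g., derived from estimate uncertainty).
        """
        w = np.array([w_a, w_b], dtype=float)
        w /= w.sum()

        # Rotation fusion: weighted quaternion average (chordal-style mean).
        quats = R.from_matrix([R_a, R_b]).as_quat()        # shape (2, 4), xyzw
        if np.dot(quats[0], quats[1]) < 0:                 # keep same hemisphere
            quats[1] = -quats[1]
        q_mean = (w[:, None] * quats).sum(axis=0)
        q_mean /= np.linalg.norm(q_mean)

        # Translation fusion: weighted arithmetic mean.
        t_mean = w[0] * np.asarray(t_a) + w[1] * np.asarray(t_b)
        return R.from_quat(q_mean).as_matrix(), t_mean

    if __name__ == "__main__":
        # Hypothetical inputs: a "temporal" and a slightly different "spatial"
        # estimate of the same front-camera extrinsics.
        R_temporal = R.from_euler("z", 1.0, degrees=True).as_matrix()
        t_temporal = np.array([1.50, 0.00, 1.20])
        R_spatial  = R.from_euler("z", 1.6, degrees=True).as_matrix()
        t_spatial  = np.array([1.52, 0.01, 1.19])

        R_fused, t_fused = fuse_poses(R_temporal, t_temporal, R_spatial, t_spatial)
        print(R_fused, t_fused)

In practice, a pose-graph formulation (one of the paper's keywords) would jointly optimize all camera poses rather than fusing them pairwise as in this simplified example.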

Original language: English
Pages (from-to): 7227-7241
Number of pages: 15
Journal: IEEE Sensors Journal
Volume: 25
Issue number: 4
DOIs
State: Published - 2025

Keywords

  • Autonomous driving (AD)
  • multicamera calibration
  • pose estimation
  • pose graph
  • spatiotemporal calibration

