TY - GEN
T1 - 3D Reconstruction using a Sparse Laser Scanner and a Single Camera for Outdoor Autonomous Vehicle
AU - Lee, Honggu
AU - Song, Soohwan
AU - Jo, Sungho
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/12/22
Y1 - 2016/12/22
N2 - This paper presents a 3D scene reconstruction method for autonomous vehicles driving in a wide range of outdoor environments. Autonomous vehicles, most of which currently employ laser and image sensors, require systems for object detection, obstacle avoidance, navigation, etc. One of the most important pieces of information for these systems is an accurate, dense 3D depth map. However, range data is much sparser than image data, so the challenge is to reconstruct a dense depth map from sparse range data and image data. Here we propose a novel approach that fuses these different types of sensor data to reconstruct 3D scenes while maintaining the shapes of local objects. Our method consists of two main phases: a local range modeling phase and a 3D depth map reconstruction phase. In the local range modeling phase, we interpolate the 3D points from the laser scanner using Gaussian process regression, which estimates 3D measurements across the outdoor environment and compensates for defective sensor information. In the reconstruction phase, we fuse an image and the interpolated points to build a 3D depth map and optimize it based on a Markov random field, yielding a depth value for every image pixel. Qualitative and time-complexity results show that our approach is robust and fast enough to run in real time on an autonomous vehicle in complex urban scenes.
AB - This paper presents a 3D scene reconstruction method for autonomous vehicles driving in a wide range of outdoor environments. Autonomous vehicles, most of which currently employ laser and image sensors, require systems for object detection, obstacle avoidance, navigation, etc. One of the most important pieces of information for these systems is an accurate, dense 3D depth map. However, range data is much sparser than image data, so the challenge is to reconstruct a dense depth map from sparse range data and image data. Here we propose a novel approach that fuses these different types of sensor data to reconstruct 3D scenes while maintaining the shapes of local objects. Our method consists of two main phases: a local range modeling phase and a 3D depth map reconstruction phase. In the local range modeling phase, we interpolate the 3D points from the laser scanner using Gaussian process regression, which estimates 3D measurements across the outdoor environment and compensates for defective sensor information. In the reconstruction phase, we fuse an image and the interpolated points to build a 3D depth map and optimize it based on a Markov random field, yielding a depth value for every image pixel. Qualitative and time-complexity results show that our approach is robust and fast enough to run in real time on an autonomous vehicle in complex urban scenes.
UR - http://www.scopus.com/inward/record.url?scp=85010040235&partnerID=8YFLogxK
U2 - 10.1109/ITSC.2016.7795619
DO - 10.1109/ITSC.2016.7795619
M3 - Conference contribution
AN - SCOPUS:85010040235
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 629
EP - 634
BT - 2016 IEEE 19th International Conference on Intelligent Transportation Systems, ITSC 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th IEEE International Conference on Intelligent Transportation Systems, ITSC 2016
Y2 - 1 November 2016 through 4 November 2016
ER -