Real-time terrain reconstruction using 3D flag map for point clouds

Wei Song, Kyungeun Cho

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

Mobile robot operators need to make quick decisions based on information about the robot’s surrounding environment. This study proposes a graphics processing unit (GPU)-based terrain modeling system for large-scale LiDAR (Light Detection and Ranging) dataset visualization using a voxel map and a textured mesh. A 3D flag map is proposed for incrementally registering large-scale point clouds into a terrain model in real time. The sensed 3D point clouds are quantized into regular 3D grids allocated in GPU memory to remove spatially and temporally redundant points. The sensed vertices are then segmented into ground and non-ground classes. The ground indices are rendered as a textured mesh to represent the ground surface, while the non-ground indices are rendered as a colored voxel map using a particle rendering method. The proposed approach was tested on a mobile robot equipped with a LiDAR sensor, a video camera, a GPS receiver, and a gyroscope. The system was evaluated through a test in an outdoor environment containing trees and buildings, demonstrating the real-time visualization performance of the proposed method in a large-scale environment.
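A minimal sketch of the flag-map registration idea described in the abstract, not the paper's GPU implementation: each sensed point is quantized to a regular 3D grid cell, and a per-cell occupancy flag discards points whose cell is already registered. The function name, the 0.1 m cell size, and the Python `set` standing in for a GPU-resident flag grid are all illustrative assumptions.

```python
import math

def register_points(points, flag_map, cell_size=0.1):
    """Keep only points whose quantized 3D grid cell is not yet flagged.

    points:   iterable of (x, y, z) coordinates
    flag_map: set of occupied cell indices (stand-in for a GPU flag grid)
    """
    kept = []
    for x, y, z in points:
        # Quantize the point to a regular grid index.
        cell = (math.floor(x / cell_size),
                math.floor(y / cell_size),
                math.floor(z / cell_size))
        if cell not in flag_map:   # cell seen for the first time
            flag_map.add(cell)     # flag it as occupied
            kept.append((x, y, z))
    return kept

flags = set()
pts = [(0.01, 0.02, 0.0),
       (0.03, 0.04, 0.0),   # falls in the same 0.1 m cell as the first point
       (1.50, 2.30, 0.2)]
unique = register_points(pts, flags)
print(len(unique))  # 2: the redundant point is discarded
```

On a GPU the flag grid would be a dense 3D array updated in parallel, but the per-cell first-hit logic is the same.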

Original language: English
Pages (from-to): 3459-3475
Number of pages: 17
Journal: Multimedia Tools and Applications
Volume: 74
Issue number: 10
DOIs
State: Published - 16 May 2015

Keywords

  • GPU programming
  • Large-scale point cloud
  • Mobile robot
  • Real-time visualization
  • Terrain reconstruction
