Omnidirectional 3D point clouds using dual Kinect sensors

Seokmin Yun, Jaewon Choi, Chee Sun Won

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

This paper proposes a registration method for two sets of point clouds obtained from dual Kinect V2 sensors facing each other, capturing omnidirectional 3D data of objects located between the two sensors. Our approach aims at a handy registration without calibration-assisting devices such as a checkerboard, making it suitable for portable camera setups with frequent relocations. The basic idea of the proposed registration method is to exploit the skeleton information of the human body provided by the two Kinect V2 sensors: a set of correspondence pairs among the skeleton joints detected by the two sensors is used to determine the initial calibration matrices, and the Iterative Closest Point (ICP) algorithm is then applied to fine-tune the calibration parameters. The performance of the proposed method is evaluated by constructing 3D point clouds of human bodies and by making geometric measurements of cylindrical test objects.
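The skeleton-based initial alignment described in the abstract amounts to estimating a rigid transform (rotation and translation) from paired 3D joint positions seen by the two sensors. A minimal sketch of that step using the standard Kabsch/SVD method is shown below; the joint coordinates and the exact estimation procedure are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src -> dst
    from paired 3D points via the Kabsch (SVD) method."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src = src.mean(axis=0)                  # centroids of each point set
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical joint positions (meters) in sensor A's coordinate frame
joints_a = np.array([[0.0, 0.0, 2.0],
                     [0.2, 0.5, 2.1],
                     [-0.2, 0.5, 2.0],
                     [0.0, 1.0, 2.05]])

# Simulate sensor B facing sensor A: rotate ~180 deg about the vertical axis
theta = np.pi
R_true = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
joints_b = joints_a @ R_true.T + np.array([0.1, 0.0, 4.0])

# Recover the transform that maps sensor B's joints back into A's frame
R, t = rigid_transform(joints_b, joints_a)
err = np.abs(joints_b @ R.T + t - joints_a).max()
print(f"max alignment error: {err:.2e}")
```

The transform recovered this way would serve as the coarse calibration; ICP then refines it using the dense point clouds rather than the handful of joint correspondences.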

Original language: English
Article number: 6295956
Journal: Journal of Sensors
Volume: 2019
DOIs
State: Published - 2019

