Abstract
With the commercialization of three-dimensional (3D) scanners in recent years, demand has grown for automated techniques that can extract anthropometric data accurately and swiftly from 3D human body scans. With advances in computer vision and machine learning, researchers have increasingly focused on developing automated anthropometric data extraction techniques. In this paper, we propose a deep learning method for automatic anthropometric landmark extraction from 3D human scans. We adopt a coarse-to-fine approach, consisting of a global detection stage and a local refinement stage, to fully utilize the original geometric information of the input scan. Moreover, we introduce a novel geodesic heatmap that effectively captures the point distribution of 3D shapes, even in the presence of variations in scanning pose. As a result, our method achieves the lowest average detection error on the SHREC'14 dataset across the six anthropometric landmarks, with a maximum error reduction of 76.14%. Additionally, we created a dataset of human scans in various poses to demonstrate the robustness of our method. On this new dataset, our end-to-end strategy proved effective across diverse human postures without any predefined features or templates.
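The abstract's geodesic heatmap can be illustrated with a minimal sketch. The paper's exact formulation is not given here; this assumes geodesics are approximated by shortest paths on a k-nearest-neighbor graph over the scan's point cloud, with a Gaussian falloff around a landmark point (the function name, `k`, and `sigma` are illustrative choices, not the authors' parameters).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def geodesic_heatmap(points, landmark_idx, k=8, sigma=0.05):
    """Hypothetical geodesic heatmap over an (n, 3) point cloud.

    Approximates surface geodesic distances from the landmark via
    Dijkstra on a k-NN graph, then applies a Gaussian falloff so the
    heatmap peaks at 1.0 on the landmark and decays with distance.
    """
    n = len(points)
    # Query k+1 neighbors because each point's nearest neighbor is itself.
    dists, idx = cKDTree(points).query(points, k=k + 1)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    vals = dists[:, 1:].ravel()
    # Sparse weighted adjacency graph; treated as undirected below.
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    geo = dijkstra(graph, directed=False, indices=landmark_idx)
    # Unreachable points get geo = inf, which maps to a heat value of 0.
    return np.exp(-geo**2 / (2 * sigma**2))
```

Unlike a Euclidean heatmap, distances here follow the k-NN graph, which is why such a representation can stay stable when limbs move between scanning poses.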
Original language | English |
---|---|
Pages (from-to) | 197035-197047 |
Number of pages | 13 |
Journal | IEEE Access |
Volume | 12 |
DOIs | |
State | Published - 2024 |
Keywords
- 3D point cloud
- Anthropometry
- deep learning
- landmark detection