TY - JOUR
T1 - Development of a Non-Contact Sensor System for Converting 2D Images into 3D Body Data
T2 - A Deep Learning Approach to Monitor Obesity and Body Shape in Individuals in Their 20s and 30s
AU - Lee, Ji Yong
AU - Kwon, Kihyeon
AU - Kim, Changgyun
AU - Youm, Sekyoung
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/1
Y1 - 2024/1
N2 - This study demonstrates how to generate a three-dimensional (3D) body model from a small number of images and derive body measurements close to the actual values using the generated 3D body data. A 3D body model suitable for body type diagnosis was developed from two full-body photographs, front and side, taken with a mobile phone. For training, 400 3D body datasets (male: 200, female: 200) provided by Size Korea were used, and four models were analyzed and compared: the 3D recurrent reconstruction neural network, the point cloud generative adversarial network, the skinned multi-person linear model, and the pixel-aligned implicit function for high-resolution 3D human digitization. A total of 10 men and women were analyzed, and the corresponding 3D models were verified by comparing the 3D body data derived from the 2D image inputs with those obtained using an actual body scanner. Unlike the other 3D generation models, which could not be used to derive body values in this study, the proposed model successfully derived various body values, indicating that it can be used to identify various body types and monitor obesity in the future.
AB - This study demonstrates how to generate a three-dimensional (3D) body model from a small number of images and derive body measurements close to the actual values using the generated 3D body data. A 3D body model suitable for body type diagnosis was developed from two full-body photographs, front and side, taken with a mobile phone. For training, 400 3D body datasets (male: 200, female: 200) provided by Size Korea were used, and four models were analyzed and compared: the 3D recurrent reconstruction neural network, the point cloud generative adversarial network, the skinned multi-person linear model, and the pixel-aligned implicit function for high-resolution 3D human digitization. A total of 10 men and women were analyzed, and the corresponding 3D models were verified by comparing the 3D body data derived from the 2D image inputs with those obtained using an actual body scanner. Unlike the other 3D generation models, which could not be used to derive body values in this study, the proposed model successfully derived various body values, indicating that it can be used to identify various body types and monitor obesity in the future.
KW - body generation confidence
KW - generative adversarial network
KW - human shape estimation
KW - obesity
KW - synthetic dataset
UR - https://www.scopus.com/pages/publications/85181917116
U2 - 10.3390/s24010270
DO - 10.3390/s24010270
M3 - Article
C2 - 38203129
AN - SCOPUS:85181917116
SN - 1424-3210
VL - 24
JO - Sensors
JF - Sensors
IS - 1
M1 - 270
ER -