Predicting Post-Liposuction Body Shape Using RGB Image-to-Image Translation

Research output: Contribution to journal › Article › peer-review

Abstract

The growing interest in weight management has elevated the popularity of liposuction. Individuals deciding whether to undergo liposuction must rely on a doctor’s subjective projections, or on other patients’ surgical outcomes, to gauge how their own body shape will change; such predictions may not be accurate. Although deep learning has recently achieved breakthroughs in analyzing medical images and rendering diagnoses, predicting surgical outcomes from images taken outside clinical settings remains challenging. Hence, this study aimed to develop a method for predicting body shape changes after liposuction using only images of the subject’s own body. To achieve this, we employ data augmentation based on a conditional continuous Generative Adversarial Network (CcGAN), which generates realistic synthetic data conditioned on continuous variables. Additionally, we modify the loss function of Pix2Pix—a supervised image-to-image translation technique based on Generative Adversarial Networks (GANs)—to enhance prediction quality. Our approach demonstrates, both quantitatively and qualitatively, that accurate, intuitive predictions before liposuction are possible.
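For context on the loss being modified: the abstract states that the authors alter the Pix2Pix objective but does not specify how, so the sketch below shows only the standard Pix2Pix generator loss (adversarial term plus a λ-weighted L1 reconstruction term) that serves as the baseline. The function name and the numpy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Baseline Pix2Pix generator objective (Isola et al., 2017):
    non-saturating adversarial loss + lambda * L1 reconstruction.
    This is the loss the paper modifies; the modification itself is
    not described in the abstract, so it is not reproduced here.

    d_fake : discriminator outputs on generated images, values in (0, 1)
    fake   : generated (translated) images as an array
    target : ground-truth post-surgery images as an array
    """
    eps = 1e-12  # numerical guard for the log
    # Adversarial term: the generator wants D(fake) -> 1
    adv = -np.mean(np.log(d_fake + eps))
    # L1 term: keeps the translation pixel-wise close to the target
    l1 = np.mean(np.abs(fake - target))
    return adv + lam * l1
```

With a perfect reconstruction (L1 = 0), the loss reduces to the adversarial term alone, which is how the λ weight trades sharpness from the discriminator against fidelity to the ground truth.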

Original language: English
Article number: 4787
Journal: Applied Sciences (Switzerland)
Volume: 15
Issue number: 9
DOIs
State: Published - May 2025

Keywords

  • deep learning
  • GAN
  • image-to-image translation
  • Pix2Pix
  • prediction of liposuction outcome

