Blending Face Details: Synthesizing a Face Using Multiscale Face Models

Seung Hyun Yoon, John Lewis, Taehyun Rhee

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Creating realistic 3D face models is a challenging problem in computer graphics because humans are so sensitive to facial abnormalities. The authors propose a method to synthesize a 3D face model using weighted blending of multiscale details from different face models. Using multiscale continuous displacement maps (CDMs), they achieve full correspondences across multiple scales in the parameter space. Their results demonstrate detail transfer across faces with highly different proportions, such as between humans and nonhuman creatures. An artist evaluation also indicated the proposed approach is intuitive and easy to use.
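The abstract summarizes the core idea: per-scale displacement details from several source faces are combined with user-chosen weights over a shared parameterization. Below is a minimal sketch of weighted per-scale blending, assuming each face's details have already been resampled as aligned 2D displacement maps in a shared UV space; the function names, array layout, and weight normalization are illustrative assumptions, not the authors' CDM implementation.

```python
import numpy as np

def blend_multiscale_displacements(detail_maps, weights):
    """Blend per-scale displacement maps from several source faces.

    detail_maps: list over source faces; each entry is a list over scales of
                 2D arrays defined on a shared UV parameterization (full
                 correspondence across faces is assumed here).
    weights:     array of shape (num_faces, num_scales) giving per-face,
                 per-scale blending weights; normalized per scale below.
    """
    weights = np.asarray(weights, dtype=float)
    num_faces, num_scales = weights.shape
    # Normalize weights at each scale so the blended detail stays in range.
    weights = weights / weights.sum(axis=0, keepdims=True)

    blended = []
    for s in range(num_scales):
        # Weighted sum of the scale-s displacement maps over all source faces.
        acc = np.zeros_like(detail_maps[0][s], dtype=float)
        for f in range(num_faces):
            acc += weights[f, s] * detail_maps[f][s]
        blended.append(acc)
    return blended

def reconstruct_surface(base_surface, blended_maps):
    """Add the blended multiscale details back onto a base displacement field."""
    out = base_surface.astype(float).copy()
    for disp in blended_maps:
        out += disp
    return out
```

In this sketch, choosing different weights per scale lets one face contribute coarse proportions while another contributes fine wrinkles, which mirrors the paper's stated goal of transferring details across faces with very different proportions.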

Original language: English
Article number: 8103313
Pages (from-to): 65-75
Number of pages: 11
Journal: IEEE Computer Graphics and Applications
Volume: 37
Issue number: 6
DOIs
State: Published - 1 Nov 2017

Keywords

  • blendshapes
  • computer graphics
  • continuous displacement maps
  • face modeling
  • multilevel B-spline
  • multiscale face model
  • parameterization
