DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances

Jiho Park, Kwangryeol Park, Dongho Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Constructing diverse and complex multi-modal datasets is crucial for advancing human action analysis research: such datasets provide ground-truth annotations for training deep learning networks and enable the development of models that remain robust across real-world scenarios. Generating natural and contextually appropriate nonverbal gestures is essential for immersive and effective human–computer interaction in applications such as video games, embodied virtual assistants, and conversations within a metaverse. However, existing speech-related human datasets focus on style transfer and are therefore unsuitable for 3D human action analysis tasks such as human action recognition and generation. We therefore introduce DGU-HAU, a novel multi-modal dataset of 3D human actions on utterances that commonly occur in daily life. We validate the dataset with Action2Motion (A2M), a state-of-the-art 3D human action generation model.
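
The actual DGU-HAU file schema and the Action2Motion input pipeline are defined in the paper, not on this page, so the following Python sketch is only illustrative. It assumes hypothetical field names and shapes to show how a multi-modal sample pairing an utterance with a 3D motion clip might be represented and length-normalized before being passed to a generation model.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class UtteranceActionSample:
        """Hypothetical multi-modal sample: a 3D motion clip paired with its utterance.
        Field names and shapes are illustrative assumptions, not the actual DGU-HAU schema."""
        utterance: str       # spoken sentence associated with the action
        action_label: str    # daily-life action category
        joints: np.ndarray   # (num_frames, num_joints, 3) joint positions
        fps: float = 30.0    # capture frame rate (assumed)

    def pad_or_crop(sample: UtteranceActionSample, target_frames: int) -> np.ndarray:
        """Bring a variable-length motion clip to a fixed length for batching,
        as a fixed-length generation model would typically require."""
        motion = sample.joints
        if motion.shape[0] >= target_frames:
            return motion[:target_frames]
        # Pad by repeating the last frame until the target length is reached.
        pad = np.repeat(motion[-1:], target_frames - motion.shape[0], axis=0)
        return np.concatenate([motion, pad], axis=0)

    # Example usage with random placeholder data.
    sample = UtteranceActionSample(
        utterance="Could you pass me the salt?",
        action_label="pointing",
        joints=np.random.randn(95, 22, 3),
    )
    batch_ready = pad_or_crop(sample, target_frames=120)
    print(batch_ready.shape)  # (120, 22, 3)
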

Original language: English
Article number: 4793
Journal: Electronics (Switzerland)
Volume: 12
Issue number: 23
DOIs
State: Published - Dec 2023

Keywords

  • 3D human action analysis
  • human activity understanding
  • motion capture
  • multi-modal dataset
  • utterance dataset
