Facial action units for training convolutional neural networks

Trinh Thi Doan Pham, Chee Sun Won

Research output: Contribution to journal › Article › peer-review


Abstract

This paper deals with the problem of training convolutional neural networks (CNNs) with facial action units (AUs). In particular, we focus on the imbalance problem of training datasets for facial emotion classification. Since training a CNN with an imbalanced dataset tends to yield a learning bias toward the major classes and eventually degrades classification accuracy, the number of training images for the minority classes must be increased so that training images are evenly distributed over all classes. However, it is difficult to find images with a similar facial emotion for oversampling. In this paper, we propose to use AU features to retrieve images with a similar emotion. The query selection from the minority class and the AU-based retrieval process are repeated until the numbers of training data are balanced over all classes. Also, to improve classification accuracy, the AU features are fused with the CNN features to train a support vector machine (SVM) for final classification. Experiments have been conducted on three imbalanced facial image datasets: RAF-DB, FER2013, and ExpW. The results demonstrate that CNNs trained with the AU features improve classification accuracy by 3%-4%.
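The abstract outlines two mechanisms: oversampling minority classes by retrieving images whose AU feature vectors are closest to a minority-class query, repeated until class counts balance, and fusing AU features with CNN features to train an SVM. The sketch below illustrates both steps under stated assumptions: the AU and CNN feature matrices, the retrieval pool, the Euclidean retrieval metric, and the `oversample_by_au_retrieval` helper are all hypothetical stand-ins, not the paper's implementation; real AU and CNN feature extractors are assumed to run upstream.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-image AU feature vectors (e.g., 17 AU intensities)
# and CNN features for a small, imbalanced two-class toy set.
n_major, n_minor, n_aus, n_cnn = 40, 8, 17, 64
au = np.vstack([rng.normal(0, 1, (n_major, n_aus)),
                rng.normal(2, 1, (n_minor, n_aus))])
cnn = np.vstack([rng.normal(0, 1, (n_major, n_cnn)),
                 rng.normal(2, 1, (n_minor, n_cnn))])
labels = np.array([0] * n_major + [1] * n_minor)

def oversample_by_au_retrieval(au, cnn, labels, pool_au, pool_cnn):
    """Repeatedly pick a minority-class query and retrieve the pool image
    whose AU vector is closest (Euclidean distance here, as an assumption),
    appending it to the minority class until all classes are balanced."""
    au, cnn, labels = au.copy(), cnn.copy(), labels.copy()
    while True:
        counts = np.bincount(labels)
        if counts.min() == counts.max():
            break  # training set is now balanced over all classes
        minor = counts.argmin()
        # Random minority-class query image.
        query = au[labels == minor][rng.integers(counts[minor])]
        # AU-based retrieval: nearest neighbor in the pool.
        k = np.linalg.norm(pool_au - query, axis=1).argmin()
        au = np.vstack([au, pool_au[k]])
        cnn = np.vstack([cnn, pool_cnn[k]])
        labels = np.append(labels, minor)
    return au, cnn, labels

# A hypothetical external retrieval pool of candidate images.
pool_au = rng.normal(2, 1, (100, n_aus))
pool_cnn = rng.normal(2, 1, (100, n_cnn))

au_b, cnn_b, y_b = oversample_by_au_retrieval(au, cnn, labels,
                                              pool_au, pool_cnn)

# Fuse AU and CNN features by concatenation and train the final SVM.
fused = np.hstack([au_b, cnn_b])
clf = SVC(kernel="linear").fit(fused, y_b)
```

Concatenation is only one plausible fusion scheme; the loop's stopping rule (exact balance) and the choice of distance metric are simplifications for illustration.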

Original language: English
Article number: 8732356
Pages (from-to): 77816-77824
Number of pages: 9
Journal: IEEE Access
Volume: 7
DOIs
State: Published - 2019

Keywords

  • Convolutional neural network
  • data imbalance
  • data oversampling
  • facial action units
  • facial emotion recognition
