Compact class-conditional domain invariant learning for multi-class domain adaptation

Woojin Lee, Hoki Kim, Jaewook Lee

Research output: Contribution to journal › Article › peer-review


Abstract

Neural network-based models have recently shown excellent performance on a wide range of tasks. However, training deep networks requires a large amount of labeled data, and the cost of gathering labeled training data for every domain is prohibitively expensive. Domain adaptation addresses this problem by transferring knowledge from labeled source-domain data to unlabeled target-domain data. Previous research approached this by learning domain-invariant features of the source and target domains, and this idea has become a key concept in many methods. However, domain-invariant features alone do not guarantee that a classifier trained on source data can be applied directly to target data, because invariance does not ensure that the distributions of the same classes are aligned across the two domains. In this paper, we present novel generalization upper bounds for domain adaptation that motivate the need for class-conditional domain-invariant learning. Based on this theoretical framework, we then propose a class-conditional domain-invariant learning method that learns a feature space in which features of the same class are expected to be mapped close together. Empirically, our model achieves state-of-the-art performance on standard datasets, and visualizations of the latent space demonstrate its effectiveness.
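To make the idea of class-conditional alignment concrete, below is a minimal sketch, not the authors' implementation, of one common way to encourage features of the same class to map nearby across domains: matching per-class feature centroids, using ground-truth labels on the source and confident pseudo-labels on the target. All function names, parameters, and the confidence threshold are illustrative assumptions.

```python
import torch

def class_conditional_alignment_loss(src_feat, src_labels, tgt_feat, tgt_logits,
                                     num_classes, conf_threshold=0.9):
    """Squared distance between matching class centroids of the two domains.

    src_feat:   (Ns, d) source features
    src_labels: (Ns,)   ground-truth source labels
    tgt_feat:   (Nt, d) target features
    tgt_logits: (Nt, C) classifier outputs on target, used for pseudo-labels
    """
    probs = tgt_logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)
    keep = conf >= conf_threshold          # keep only confident pseudo-labels

    loss = src_feat.new_zeros(())
    matched = 0
    for c in range(num_classes):
        src_c = src_feat[src_labels == c]
        tgt_c = tgt_feat[keep & (pseudo == c)]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue                        # class absent from this batch
        # Pull the class-c centroids of the two domains together.
        loss = loss + (src_c.mean(0) - tgt_c.mean(0)).pow(2).sum()
        matched += 1
    return loss / max(matched, 1)
```

In a training loop, such a term would typically be added, with a trade-off weight, to the usual supervised cross-entropy on the source batch; the pseudo-label threshold guards against aligning to wrongly classified target samples early in training.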

Original language: English
Article number: 107763
Journal: Pattern Recognition
Volume: 112
State: Published - Apr 2021

Keywords

  • Class-conditional domain invariant learning
  • Domain adaptation
  • Generalization bound
  • PAC learning complexity
  • Transfer learning

