Discriminative feature alignment: Improving transferability of unsupervised domain adaptation by Gaussian-guided latent alignment
Published in: Pattern Recognition, Vol. 116, p. 107943
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-08-2021
Subjects:
Summary:
• We propose a novel alignment method to construct a common feature space under the guidance of a Gaussian prior for UDA.
• We introduce a new method to align two distributions by minimizing the direct L1-distance between the decoded samples.
• The proposed work achieves state-of-the-art performance on both digit and object classification tasks.
In this paper, we focus on the unsupervised domain adaptation problem, where an approximate inference model is learned from a labeled data domain and expected to generalize well to an unlabeled domain. The success of unsupervised domain adaptation largely relies on cross-domain feature alignment. Previous work has attempted to align features directly via classifier-induced discrepancies. Nevertheless, a common feature space cannot always be learned through such direct feature alignment, especially when large domain gaps exist. To solve this problem, we introduce a Gaussian-guided latent alignment approach that aligns the latent feature distributions of the two domains under the guidance of a prior. In this indirect way, the distributions over samples from the two domains are constructed on a common feature space, i.e., the space of the prior, which promotes better feature alignment. To effectively align the target latent distribution with this prior distribution, we also propose a novel unpaired L1-distance that takes advantage of the encoder-decoder formulation. Extensive evaluations on nine benchmark datasets validate the superior knowledge transferability of the proposed method, which outperforms state-of-the-art approaches, and its versatility, significantly improving existing methods when combined with them.
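To make the encoder-decoder idea in the summary concrete, below is a minimal PyTorch sketch, not the authors' implementation: it assumes a toy fully connected encoder and decoder, pairs prior draws with target latents index-wise purely for illustration (the paper's unpaired L1 formulation handles the lack of correspondence), and all names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # hypothetical sizes for illustration

# Toy encoder/decoder; the paper's actual architectures differ.
encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim))

def gaussian_guided_l1(x_target):
    """L1 distance between images decoded from target latents and from Gaussian prior draws."""
    z_target = encoder(x_target)          # latents inferred from unlabeled target data
    z_prior = torch.randn_like(z_target)  # draws from the N(0, I) prior, unpaired with the data
    # Index-wise pairing of the two decoded batches is a simplification for this sketch.
    return (decoder(z_target) - decoder(z_prior)).abs().mean()

x_target = torch.rand(32, image_dim)      # dummy batch standing in for target-domain images
loss = gaussian_guided_l1(x_target)
loss.backward()                           # gradients reach both encoder and decoder
```

In a full training loop, this alignment term would be combined with the supervised classification loss on the labeled source domain, so that the shared latent space is both Gaussian-shaped and discriminative.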
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2021.107943