Disentangling Latent Factors of Variational Auto-Encoder with Whitening
Format: Journal Article
Language: English
Published: 08-11-2018
Summary: After deep generative models were successfully applied to image generation tasks, learning disentangled latent variables of data became a crucial part of deep generative model research. Many models have been proposed to learn an interpretable and factorized representation of the latent variables by modifying their objective function or model architecture. However, to disentangle the latent variables, some models sacrifice the quality of reconstructed images, while others increase model complexity and become hard to train. In this paper, we propose a simple disentangling method based on a traditional whitening process. The proposed method is applied to the latent variables of a variational auto-encoder (VAE), although it can be applied to any generative model with latent variables. In experiments, we apply the proposed method to simple VAE models, and the results confirm that our method finds more interpretable factors in the latent space while keeping the reconstruction error the same as that of a conventional VAE.
DOI: 10.48550/arxiv.1811.03444
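The summary describes whitening of VAE latent variables as the disentangling step, but this record does not give the authors' exact procedure. As a point of reference, a minimal sketch of classical PCA whitening applied to a batch of latent codes could look like the following (NumPy; the function and variable names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def whiten(z, eps=1e-8):
    """PCA-whiten a batch of latent vectors (a classical whitening
    process, not necessarily the paper's exact method).

    z: (N, D) array of latent codes, e.g. VAE posterior means.
    Returns codes with zero mean and approximately identity covariance,
    so the transformed dimensions are decorrelated.
    """
    mu = z.mean(axis=0)
    zc = z - mu                           # center the batch
    cov = zc.T @ zc / (len(z) - 1)        # sample covariance, (D, D)
    eigval, eigvec = np.linalg.eigh(cov)  # eigendecomposition (symmetric)
    # Rotate onto the principal axes, then rescale each axis to unit variance.
    w = zc @ eigvec / np.sqrt(eigval + eps)
    return w, (mu, eigvec, eigval)

# Illustrative usage with correlated synthetic "latents":
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 4))
zw, _ = whiten(z)
print(np.round(np.cov(zw, rowvar=False), 2))  # close to the identity matrix
```

Because whitening is a fixed linear post-processing of the latent space, it leaves the decoder's inputs expressible as an invertible transform of the original codes, which is consistent with the summary's claim that reconstruction error can stay at the level of a conventional VAE.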