Constructing the Matrix Multilayer Perceptron and its Application to the VAE
Main Authors:
Format: Journal Article
Language: English
Published: 4 February 2019
Subjects:
Online Access: Get full text
Summary: Like most learning algorithms, the multilayer perceptron (MLP) is designed to learn a vector of parameters from data. However, in certain scenarios we are interested in learning structured parameters (predictions) in the form of symmetric positive definite matrices. Here, we introduce a variant of the MLP, referred to as the matrix MLP, that is specialized in learning symmetric positive definite matrices. We also present an application of the model within the context of the variational autoencoder (VAE). Our formulation of the VAE extends the vanilla formulation to cases where the recognition and generative networks can be from a parametric family of distributions with dense covariance matrices. Two specific examples are discussed in more detail: the dense covariance Gaussian and its generalization, the power exponential distribution. Our new developments are illustrated using both synthetic and real data.
DOI: 10.48550/arxiv.1902.01182
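The summary describes two technical ingredients: a network whose outputs are symmetric positive definite matrices, and a VAE whose recognition network carries a dense (non-diagonal) covariance. The paper's actual matrix-MLP construction is not given in this record; the sketch below is only a generic illustration of the underlying constraint, using a Cholesky-factor parameterization in PyTorch. The class name `DenseCovarianceEncoder`, the layer sizes, and the softplus-plus-epsilon diagonal are assumptions made for the example, not the authors' design.

```python
# Illustrative sketch only (not the paper's matrix MLP): an MLP that emits the
# mean and a lower-triangular Cholesky factor L of a dense covariance, so that
# Sigma = L @ L.T is symmetric positive definite by construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseCovarianceEncoder(nn.Module):
    def __init__(self, x_dim: int, h_dim: int, z_dim: int):
        super().__init__()
        self.z_dim = z_dim
        n_off = z_dim * (z_dim - 1) // 2          # strictly lower-triangular entries
        self.body = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.mean_head = nn.Linear(h_dim, z_dim)  # mean of the Gaussian
        self.diag_head = nn.Linear(h_dim, z_dim)  # raw diagonal of L
        self.off_head = nn.Linear(h_dim, n_off)   # raw off-diagonal of L
        # Fixed index pattern of the strictly lower triangle.
        self.register_buffer("off_idx", torch.tril_indices(z_dim, z_dim, offset=-1))

    def forward(self, x: torch.Tensor):
        h = self.body(x)
        mu = self.mean_head(h)
        # Scatter the unconstrained outputs into the strictly lower triangle.
        L = x.new_zeros(x.shape[0], self.z_dim, self.z_dim)
        L[:, self.off_idx[0], self.off_idx[1]] = self.off_head(h)
        # Softplus keeps the diagonal strictly positive, which makes L
        # invertible and hence Sigma = L @ L.T positive definite.
        L = L + torch.diag_embed(F.softplus(self.diag_head(h)) + 1e-6)
        return mu, L

# Reparameterized sample: z = mu + L @ eps is distributed as N(mu, L @ L.T),
# i.e. a Gaussian recognition network with a dense covariance matrix.
encoder = DenseCovarianceEncoder(x_dim=784, h_dim=128, z_dim=8)
x = torch.randn(32, 784)
mu, L = encoder(x)
eps = torch.randn(32, 8, 1)
z = mu + (L @ eps).squeeze(-1)
```

One reason this parameterization is common: positive definiteness holds with no projection step, and the log-determinant needed in the Gaussian (or power exponential) log-density is cheap, since log det Sigma = 2 * sum(log(diag(L))).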