Bayesian Neural Network via Stochastic Gradient Descent
Format: Journal Article
Language: English
Published: 04-06-2020
Online Access: Get full text
Summary: The goal of the Bayesian approach used in variational inference is to minimize the KL divergence between the variational distribution and the unknown posterior distribution. This is done by maximizing the Evidence Lower Bound (ELBO). A neural network is used to parametrize these distributions and is trained with Stochastic Gradient Descent. This work extends prior work by deriving the variational inference models. We show how SGD can be applied to Bayesian neural networks via gradient-estimation techniques. For validation, we test our model on 5 UCI datasets; the metrics chosen for evaluation are Root Mean Square Error (RMSE) and negative log-likelihood. Our work considerably beats the previous state-of-the-art approaches for regression with Bayesian neural networks.
DOI: 10.48550/arxiv.2006.08453
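
For readers skimming this record, the objective described in the summary can be stated in one identity. For weights $w$ with prior $p(w)$, variational distribution $q_\theta(w)$, and data $\mathcal{D}$:

$$\mathrm{KL}\big(q_\theta(w)\,\|\,p(w \mid \mathcal{D})\big) = \log p(\mathcal{D}) - \mathcal{L}(\theta), \qquad \mathcal{L}(\theta) = \mathbb{E}_{q_\theta(w)}\big[\log p(\mathcal{D} \mid w)\big] - \mathrm{KL}\big(q_\theta(w)\,\|\,p(w)\big)$$

Since $\log p(\mathcal{D})$ does not depend on $\theta$, maximizing the ELBO $\mathcal{L}(\theta)$ is equivalent to minimizing the KL divergence to the posterior.

The record does not include the paper's actual derivation or code, so the following is only a rough illustration of the general recipe the summary names: a mean-field Gaussian variational posterior trained by SGD on a one-sample stochastic ELBO estimate, using the reparameterization trick (in the style of Bayes by Backprop). All names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with a mean-field Gaussian variational posterior q(w) = N(mu, sigma^2)."""
    def __init__(self, n_in, n_out, prior_std=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -5.0))  # sigma = softplus(rho) > 0
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -5.0))
        self.prior_std = prior_std
        self.kl = 0.0  # filled in on each forward pass

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        # Reparameterization trick: w = mu + sigma * eps keeps the sample
        # differentiable in (mu, rho), so SGD can estimate the ELBO gradient.
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        # Closed-form KL(q || N(0, prior_std^2)) for factorized Gaussians.
        self.kl = self._kl(self.w_mu, w_sigma) + self._kl(self.b_mu, b_sigma)
        return F.linear(x, w, b)

    def _kl(self, mu, sigma):
        p_var = self.prior_std ** 2
        return 0.5 * torch.sum((sigma**2 + mu**2) / p_var - 1 - torch.log(sigma**2 / p_var))

# Tiny regression example: the network predicts a mean, and a fixed noise
# scale (0.1, an illustrative choice) defines the Gaussian likelihood.
net = nn.Sequential(BayesianLinear(1, 32), nn.ReLU(), BayesianLinear(32, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)

for step in range(1000):
    opt.zero_grad()
    pred = net(x)
    nll = 0.5 * ((pred - y) ** 2 / 0.1**2).sum()  # Gaussian NLL up to a constant
    kl = sum(m.kl for m in net.modules() if isinstance(m, BayesianLinear))
    loss = nll + kl                               # negative ELBO (one MC sample)
    loss.backward()                               # SGD step on the stochastic estimate
    opt.step()
```

The loss being minimized is the negative ELBO: the expected negative log-likelihood (estimated with a single Monte Carlo weight sample) plus the KL term, which matches the objective in the identity above. The summary's evaluation metrics, RMSE and negative log-likelihood, would be computed from such a trained model's predictive distribution.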