On the Convergence of Stochastic Variational Inference in Bayesian Networks
Main Author:
Format: Journal Article
Language: English
Published: 16-07-2015
Subjects:
Online Access: Get full text
Summary: We highlight a pitfall when applying stochastic variational inference to general Bayesian networks. For global random variables approximated by an exponential family distribution, natural gradient steps, commonly starting from a unit length step size, are averaged to convergence. This useful insight into the scaling of initial step sizes is lost when the approximation factorizes across a general Bayesian network, and care must be taken to ensure practical convergence. We experimentally investigate how much of the baby (well-scaled steps) is thrown out with the bath water (exact gradients).
DOI: 10.48550/arxiv.1507.04505
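The well-scaled-steps property the summary refers to can be made concrete. For an exponential-family approximation q(θ) with natural parameter λ, the natural gradient of the ELBO is λ̂ − λ, where λ̂ is the closed-form coordinate-ascent optimum implied by a minibatch whose sufficient statistics are rescaled to the full data set; a unit step size therefore recovers exact coordinate ascent, and smaller steps average successive estimates. Below is a minimal sketch of this update for a conjugate model; the function name `svi_global_update` and the Beta-Bernoulli usage are illustrative assumptions, not code from the paper.

```python
import numpy as np

def svi_global_update(nat_param, prior_nat, suff_stat, n_total, batch, rho):
    """One stochastic natural-gradient step on a global natural parameter.

    For a conjugate exponential-family model, lam_hat is the
    coordinate-ascent optimum implied by the minibatch, with the
    minibatch sufficient statistics rescaled to the full data set.
    """
    scale = n_total / len(batch)
    lam_hat = prior_nat + scale * sum(suff_stat(x) for x in batch)
    # The natural gradient of the ELBO is (lam_hat - nat_param), so a step
    # of size rho is a convex combination of the old and new parameters;
    # rho = 1 is exact coordinate ascent, rho < 1 averages the steps.
    return (1.0 - rho) * nat_param + rho * lam_hat

# Illustrative usage: Beta-Bernoulli, natural parameters (alpha-1, beta-1).
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=1000)          # 1000 coin flips
lam = np.zeros(2)                             # start at the Beta(1, 1) prior
prior = np.zeros(2)
for t in range(200):
    i = rng.integers(0, len(data))            # sample one data point
    rho = (t + 1.0) ** -0.7                   # Robbins-Monro step schedule
    lam = svi_global_update(lam, prior, lambda x: np.array([x, 1 - x]),
                            len(data), [data[i]], rho)
print("posterior Beta(%.1f, %.1f)" % (lam[0] + 1, lam[1] + 1))
```

With this schedule the initial step size is 1, so the first update jumps directly to the minibatch coordinate-ascent optimum. The paper's caution is that this natural scaling is lost once the variational approximation factorizes across a general Bayesian network, where step sizes must be chosen with more care to ensure practical convergence.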