Efficient Learning of Data Distribution using Simultaneous Recurrent Belief Network
Published in: | 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1 - 6 |
---|---|
Main Authors: | , , |
Format: | Conference Proceeding |
Language: | English |
Published: | IEEE, 01-07-2018 |
Summary: | Efficient learning of data distributions is necessary for many applications such as classification, recognition, decision making, and segmentation. Generative probabilistic models have been used extensively to learn the distribution of input data effectively. Moreover, generative models are considered the foremost building blocks for developing highly expressive deep neural network architectures. Over the past decade, researchers have developed several undirected and directed probabilistic generative models. The two most popular are the Restricted Boltzmann Machine (RBM) and the Sigmoid Belief Network (SBN). Both models use a two-layer architecture with feedforward information processing, inspired by biological systems, to learn data distributions. However, neither model exhibits another well-known property found in biology, recurrent neuronal information processing, which may be beneficial for learning more complex data distributions. Consequently, this paper, for the first time in the literature, proposes a directed recurrent generative model known as the Simultaneous Recurrent Belief Network (SRBN) for efficiently learning the distribution of the input data. The efficacy of the proposed SRBN model is evaluated using two benchmark datasets: MNIST and Caltech 101 Silhouettes. Our experimental results suggest the SRBN model achieves improved data distribution learning performance while using minimal trainable parameters. |
ISSN: | 2161-4407 |
DOI: | 10.1109/IJCNN.2018.8489292 |
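
As background for the baseline models named in the summary above, the sketch below shows a minimal two-layer Bernoulli RBM trained with one step of contrastive divergence (CD-1) on binary data such as flattened MNIST. It illustrates only the feedforward, two-layer structure the paper contrasts with its recurrent SRBN; it is not the paper's SRBN, and all names, shapes, and hyperparameters here are illustrative assumptions rather than details from the source.

```python
# Minimal two-layer Bernoulli RBM trained with CD-1 on binary data (e.g., binarized MNIST).
# Illustrative sketch only; hyperparameters and names are assumptions, not from the paper.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BernoulliRBM:
    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # visible-hidden weights
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr
        self.rng = rng

    def _sample(self, p):
        # Draw binary states from the given Bernoulli probabilities.
        return (self.rng.random(p.shape) < p).astype(p.dtype)

    def cd1_step(self, v0):
        # Positive phase: infer hidden units from the data (the feedforward pass).
        ph0 = sigmoid(v0 @ self.W + self.c)
        h0 = self._sample(ph0)
        # Negative phase: one Gibbs step to obtain a reconstruction of the visibles.
        pv1 = sigmoid(h0 @ self.W.T + self.b)
        v1 = self._sample(pv1)
        ph1 = sigmoid(v1 @ self.W + self.c)
        # Contrastive-divergence approximation to the log-likelihood gradient.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / batch
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        # Reconstruction error is a rough progress proxy, not the true likelihood.
        return np.mean((v0 - pv1) ** 2)

# Usage sketch: 28x28 images binarized and flattened to 784-dimensional vectors.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = (rng.random((256, 784)) < 0.1).astype(float)  # stand-in for binarized MNIST
    rbm = BernoulliRBM(n_visible=784, n_hidden=64)
    for epoch in range(5):
        errs = [rbm.cd1_step(data[i:i + 64]) for i in range(0, len(data), 64)]
        print(f"epoch {epoch}: reconstruction error {np.mean(errs):.4f}")
```

The SRBN proposed in the paper replaces this purely feedforward inference with recurrent neuronal information processing in a directed belief network; the details of that architecture are given in the paper itself and are not reproduced here.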