POSTER: Cutting the Fat: Speeding Up RBM for Fast Deep Learning Through Generalized Redundancy Elimination

Bibliographic Details
Published in: 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 154-155
Main Authors: Lin Ning, Randall Pittman, Xipeng Shen
Format: Conference Proceeding
Language: English
Published: IEEE, 01-09-2017
Description
Summary: Restricted Boltzmann Machine (RBM) is the building block of Deep Belief Nets and other deep learning tools. Fast learning and prediction are both essential for practical usage of RBM-based machine learning techniques. This paper presents a concept named generalized redundancy elimination to avoid most of the computations required in RBM learning and prediction without changing the results. It consists of two optimization techniques. The first is bounds-based filtering, which uses the triangle inequality to replace expensive calculations of many vector dot products with fast bounds calculations. The second is delta product, which detects and avoids many repeated calculations in Gibbs sampling, the core operation of RBM. The optimizations are applicable to both the standard contrastive divergence learning algorithm and its variations. In addition, the paper presents how to address the complexities these optimizations create so that they can be used together and implemented efficiently on massively parallel processors. Results show that the optimizations produce several-fold speedups (up to 3X for training and 5.3X for prediction).
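
The abstract only outlines the two optimizations; the sketch below is a minimal, hypothetical Python/NumPy illustration of the general idea, not the paper's implementation. It assumes binary units sampled by comparing a uniform random draw against sigmoid(W_j . v + b_j); the reference point c, the function names, and the flip-tracking details are assumptions introduced here for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden_filtered(v, W, b, c, rng):
    # Sample hidden units h_j ~ Bernoulli(sigmoid(W[j] . v + b[j])),
    # skipping the exact dot product whenever triangle-inequality bounds
    # on W[j] . v already decide the outcome for the drawn random number.
    n_hidden = W.shape[0]
    h = np.empty(n_hidden, dtype=np.int8)

    # Norms on the weight side can be cached across Gibbs steps.
    v_norm2 = v @ v
    dist_v_c = np.linalg.norm(v - c)
    w_norm2 = np.einsum('ij,ij->i', W, W)
    dist_w_c = np.linalg.norm(W - c, axis=1)

    # The triangle inequality bounds ||v - W[j]|| through the shared reference
    # point c; ||v - w||^2 = ||v||^2 + ||w||^2 - 2 v.w then bounds the dot product.
    d_lo = np.abs(dist_v_c - dist_w_c)
    d_hi = dist_v_c + dist_w_c
    dot_hi = 0.5 * (v_norm2 + w_norm2 - d_lo ** 2)
    dot_lo = 0.5 * (v_norm2 + w_norm2 - d_hi ** 2)

    r = rng.random(n_hidden)  # draw the sampling thresholds up front
    n_exact = 0
    for j in range(n_hidden):
        if sigmoid(dot_lo[j] + b[j]) >= r[j]:
            h[j] = 1              # even the lower bound turns the unit on
        elif sigmoid(dot_hi[j] + b[j]) < r[j]:
            h[j] = 0              # even the upper bound cannot turn it on
        else:
            n_exact += 1          # bounds inconclusive: pay for the exact dot product
            h[j] = 1 if sigmoid(W[j] @ v + b[j]) >= r[j] else 0
    return h, n_exact

def delta_dot_update(dots, W, flipped_units, new_vals):
    # Delta-product-style incremental update: when only a few binary visible
    # units change between Gibbs steps, adjust the cached dot products W @ v
    # instead of recomputing the full matrix-vector product.
    for i, val in zip(flipped_units, new_vals):
        dots += W[:, i] if val == 1 else -W[:, i]
    return dots

Drawing the random thresholds before testing the bounds is what allows this kind of filter to skip exact dot products while still producing the same samples, in keeping with the abstract's claim of unchanged results.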
DOI: 10.1109/PACT.2017.36