Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml
Format: Journal Article
Language: English
Published: 29-06-2020
Summary: Mach. Learn.: Sci. Technol. 2, 015001 (2020). We present the implementation of binary and ternary neural networks in the hls4ml library, designed to automatically convert deep neural network models to digital circuits with FPGA firmware. Starting from benchmark models trained with floating-point precision, we investigate different strategies to reduce the network's resource consumption by reducing the numerical precision of the network parameters to binary or ternary. We discuss the trade-off between model accuracy and resource consumption. In addition, we show how to balance latency against accuracy by retaining full precision on a selected subset of network components. As an example, we consider two multiclass classification tasks: handwritten digit recognition with the MNIST data set and jet identification with simulated proton-proton collisions at the CERN Large Hadron Collider. The binary and ternary implementations achieve performance similar to the higher-precision implementation while using drastically fewer FPGA resources.
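For orientation, the following is a minimal NumPy sketch of the binary and ternary weight-quantization rules the summary refers to. The function names and the 0.7 * mean(|w|) ternary threshold are illustrative assumptions drawn from common practice, not the hls4ml API or necessarily the paper's exact scheme.

import numpy as np

def binarize(weights):
    # Map each weight to -1 or +1 by its sign.
    return np.where(weights >= 0, 1.0, -1.0)

def ternarize(weights, threshold=None):
    # Map each weight to -1, 0, or +1; weights within the threshold
    # are zeroed. The 0.7 * mean(|w|) default follows a common
    # ternary-weight heuristic (an assumption, not the paper's choice).
    if threshold is None:
        threshold = 0.7 * float(np.mean(np.abs(weights)))
    q = np.zeros_like(weights)
    q[weights > threshold] = 1.0
    q[weights < -threshold] = -1.0
    return q

# Example: quantize a small weight matrix.
w = np.array([[0.8, -0.05], [-0.6, 0.2]])
print(binarize(w))   # [[ 1. -1.] [-1.  1.]]
print(ternarize(w))  # threshold ~ 0.29 -> [[ 1.  0.] [-1.  0.]]

Quantizing weights to two or three levels like this replaces multiplications with sign flips (and zero-skipping in the ternary case), which is what drives the large FPGA resource savings reported in the paper.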
Bibliography: FERMILAB-PUB-20-167-PPD-SCD
DOI: 10.48550/arxiv.2003.06308