A Hardware-Software Blueprint for Flexible Deep Learning Specialization
Format: Journal Article
Language: English
Published: 11-07-2018
Summary: Specialized Deep Learning (DL) acceleration stacks, designed for a specific set of frameworks, model architectures, operators, and data types, offer the allure of high performance while sacrificing flexibility. Changes in algorithms, models, operators, or numerical systems threaten the viability of specialized hardware accelerators. We propose VTA, a programmable deep learning architecture template designed to be extensible in the face of evolving workloads. VTA achieves this flexibility via a parametrizable architecture, a two-level ISA, and a JIT compiler. The two-level ISA is based on (1) a task-ISA that explicitly orchestrates concurrent compute and memory tasks and (2) a microcode-ISA that implements a wide variety of operators with single-cycle tensor-tensor operations. Next, we propose a runtime system equipped with a JIT compiler for flexible code generation and heterogeneous execution that enables effective use of the VTA architecture. VTA is open-sourced and integrated into Apache TVM, a state-of-the-art deep learning compilation stack that provides flexibility for diverse models and divergent hardware backends. We propose a flow that performs design space exploration to generate a customized hardware architecture and software operator library that can be leveraged by mainstream learning frameworks. We demonstrate our approach by deploying optimized deep learning models for object classification and style transfer on edge-class FPGAs.
DOI: 10.48550/arxiv.1807.04188
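
As context for the summary above, the following is a minimal sketch of how the parametrizable architecture template surfaces on the software side, assuming the vta Python package that ships with Apache TVM. The names used here (vta.get_env, env.BATCH, env.dma_copy, env.gemm) follow TVM's public VTA tutorials and may differ across releases.

```python
import vta

# Load the parameters of the VTA architecture template. These are read
# from the active vta_config.json, which fixes the GEMM intrinsic shape,
# scratchpad sizes, and data types for one hardware instance.
env = vta.get_env()

# Shape of the single-cycle tensor-tensor (GEMM) intrinsic exposed by the
# microcode-ISA: a (BATCH x BLOCK_IN) activation tile multiplied against a
# (BLOCK_OUT x BLOCK_IN) weight tile each cycle.
print(env.BATCH, env.BLOCK_IN, env.BLOCK_OUT)

# Low-precision storage types chosen for this template instance.
print(env.inp_dtype, env.wgt_dtype, env.acc_dtype)

# In a TVM schedule, annotations such as env.dma_copy (explicit load/store
# tasks in the task-ISA) and the env.gemm / env.alu intrinsics (microcoded
# tensor operations) are what the JIT runtime lowers to VTA instructions.
```

Sweeping these template parameters per workload is the basis of the design space exploration flow described in the summary.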