PR-CIM: A Variation-Aware Binary-Neural-Network Framework for Process-Resilient Computation-in-Memory
Main Authors:
Format: Journal Article
Language: English
Published: 19-10-2021
Summary: Binary neural networks (BNNs), which use 1-bit weights and activations, have garnered interest because extreme quantization provides low power dissipation. By implementing BNNs as computing-in-memory (CIM), which performs multiply-and-accumulate operations on memory arrays in an analog fashion (analog CIM), we can further improve the energy efficiency of neural-network processing. However, analog CIMs suffer from the problem that process variation degrades BNN accuracy. Our Monte-Carlo simulations show that in an SRAM-based analog CIM running VGG-9, CIFAR-10 classification accuracy degrades to below 20% under the process variations of 65nm CMOS. To overcome this problem, we present a variation-aware BNN framework. The proposed framework is developed for SRAM-based BNN CIMs, since SRAM is the most widely used on-chip memory, but it is easily extensible to BNN CIMs based on other memories. Our extensive experimental results show that under the process variation of 65nm CMOS, our framework significantly improves the CIFAR-10 accuracy of SRAM-based BNN CIMs, from 10% to 87.76% for VGG-9 and from 10.1% to 77.74% for ResNet-18.
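As a rough illustration of why variation hurts: in an analog CIM, each bit-cell's contribution to a column sum deviates from its nominal value, and once the accumulated error flips the sign of the pre-activation, the BNN's binarized output flips with it. Below is a minimal Monte-Carlo sketch of this effect in NumPy; the multiplicative Gaussian mismatch model, the `sigma` value, and all function names are illustrative assumptions, not the paper's actual 65nm device model.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Map real values to {-1, +1} (sign binarization; 0 maps to +1)."""
    return np.where(x >= 0, 1, -1)

def ideal_mac(w_bin, a_bin):
    """Digital reference: the +/-1 dot product (XNOR-popcount result)."""
    return float(np.dot(w_bin, a_bin))

def analog_cim_mac(w_bin, a_bin, sigma, rng):
    """Analog column sum with per-cell mismatch.

    Assumption: each bit-cell's contribution w*a is scaled by
    (1 + N(0, sigma)) to mimic variation-induced current mismatch;
    this is a toy model, not a 65nm SPICE model.
    """
    mismatch = 1.0 + rng.normal(0.0, sigma, size=w_bin.shape)
    return float(np.sum(w_bin * a_bin * mismatch))

n, trials, sigma = 256, 10_000, 0.15   # illustrative sizes only
w = binarize(rng.standard_normal(n))
a = binarize(rng.standard_normal(n))
ref = ideal_mac(w, a)

# Monte-Carlo: how often does mismatch flip the sign of the
# pre-activation that the next BNN layer will binarize?
flips = sum(
    np.sign(analog_cim_mac(w, a, sigma, rng)) != np.sign(ref)
    for _ in range(trials)
)
print(f"ideal sum = {ref:+.0f}, sign-flip rate = {flips / trials:.3f}")
```

Variation-aware frameworks typically counter this by exposing the network to such perturbations during training, for example by injecting mismatch noise into the forward pass; whether PR-CIM uses exactly this mechanism is not stated in this record.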
DOI: 10.48550/arxiv.2110.09962