An Automatic-Addressing Architecture With Fully Serialized Access in Racetrack Memory for Energy-Efficient CNNs
| Published in: | IEEE Transactions on Computers, Vol. 71, no. 1, pp. 235–250 |
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01-01-2022 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Summary: | Racetrack memory, an emerging low-power magnetic memory, promises a competitive replacement for traditional memories in accelerators. However, random access in racetrack memory is time- and energy-expensive for CNN accelerators because of its large number of invalid shifts. In this article, we propose an automatic-addressing architecture that builds a novel data layout to guarantee that each round of memory access can always be satisfied at the in-situ or immediately adjacent cells of the previous round, producing a fully serialized access footprint that enables instant port alignment without any invalid shifts in racetrack memory. In this way, the original address-based access degrades to a selection repeated among three candidates, i.e., one in-situ cell and its two neighboring cells. Based on this simplification, a lightweight access manager can generate the sequence of one-out-of-three selections according to the deterministic access behaviors defined by the CNN hyper-parameters. The evaluation shows that, when deploying five popular CNN applications on our architecture, the physical shifts of the racetracks are curtailed by 74.64 percent over the legacy layout, which achieves 54.2 and 42.1 percent energy reductions on reads and writes, respectively. A case study of YOLOv2 indicates that our architecture achieves 6.503 GOp/J, an 18.5× improvement over server-level GPUs. |
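The shift-saving idea in the abstract can be illustrated with a small simulation. This is a conceptual sketch, not the paper's implementation: it assumes a single-port nanowire where port alignment costs one shift per cell of distance, and compares a random access trace with a serialized trace whose every access is in-situ or adjacent (the one-out-of-three selection). All names and parameters here are illustrative assumptions.

```python
import random

def shifts_for_trace(trace):
    """Total port-alignment shifts: summed distance between consecutive accesses."""
    port = trace[0]
    total = 0
    for pos in trace[1:]:
        total += abs(pos - port)  # shifts needed to align the port with pos
        port = pos
    return total

random.seed(0)
N = 64  # cells per nanowire (illustrative)

# Legacy layout: address-based random access anywhere on the wire.
random_trace = [random.randrange(N) for _ in range(1000)]

# Serialized footprint: each access hits the in-situ cell or a neighbor,
# so the "address" reduces to a one-out-of-three choice {-1, 0, +1}.
serial_trace = [0]
for _ in range(999):
    nxt = serial_trace[-1] + random.choice((-1, 0, 1))
    serial_trace.append(min(max(nxt, 0), N - 1))

print("random access shifts:    ", shifts_for_trace(random_trace))
print("serialized access shifts:", shifts_for_trace(serial_trace))
```

With the serialized footprint, no access ever costs more than one shift, which is the source of the shift reduction the abstract reports.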
| ISSN: | 0018-9340; 1557-9956 |
| DOI: | 10.1109/TC.2020.3045433 |