A Stage-wise Conversion Strategy for Low-Latency Deformable Spiking CNN


Bibliographic Details
Published in: 2021 IEEE Workshop on Signal Processing Systems (SiPS), pp. 1-6
Main Authors: Wang, Chunyu, Luo, Jiapeng, Wang, Zhongfeng
Format: Conference Proceeding
Language: English
Published: IEEE, 01-10-2021
Description
Summary: Spiking neural networks (SNNs) are currently one of the most successful approaches to modeling the behavior and learning potential of the brain. Recently, they have attracted considerable research interest thanks to their event-driven and energy-efficient characteristics. Since SNNs are difficult to train directly from scratch because of their non-differentiable spike operations, many works have focused on converting a trained DNN into the target SNN. However, there is no efficient method to convert the deformable convolution layer, which is frequently used in many applications. The deformable convolution layer enables deformation of the convolutional sampling grid by adding offsets to the regular sampling locations, which enhances the geometric transformation modeling capability of CNNs. In this work, we propose a novel deformable spiking CNN, which can successfully convert DNNs with deformable convolution layers into SNNs with much shorter simulation time and low inference latency while maintaining high accuracy. Specifically, we design an effective method dedicated to converting deformable convolution layers. Treating the offset prediction module as an embedded SNN, we compute the spiking offsets multiple times and use their average as the final offsets for the deformable convolution. We also propose a stage-wise DNN-SNN conversion strategy to further reduce the conversion error: we divide the network into several stages and convert each stage sequentially with retraining, diminishing the difference between the source DNN and the target SNN as much as possible. Experiments on the CIFAR-10 and CIFAR-100 datasets show that our method surpasses state-of-the-art works in both conversion accuracy and inference latency.
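The offset-averaging idea in the abstract can be illustrated with a minimal sketch: an integrate-and-fire (IF) neuron with soft reset fires binary spikes over T timesteps, and the spike rate (scaled by the firing threshold) approximates the real-valued offset the source DNN would have produced. This is only a toy illustration of rate-based averaging, not the paper's actual offset-prediction branch (which is a convolutional sub-network); the function names and the single-neuron setup are assumptions.

```python
import numpy as np

def if_neuron_spikes(inputs, threshold=1.0):
    """Simulate IF neurons with soft reset over a sequence of input
    currents. Returns an array of binary spike trains, one row per step."""
    v = np.zeros_like(inputs[0], dtype=float)  # membrane potential
    spikes = []
    for x in inputs:
        v = v + x                                # integrate input current
        s = (v >= threshold).astype(float)       # fire where threshold reached
        v = v - s * threshold                    # soft reset: subtract threshold
        spikes.append(s)
    return np.stack(spikes)

def average_spiking_offsets(offset_current, T=8, threshold=1.0):
    """Hypothetical helper: run the spiking offset branch for T timesteps
    with a constant input current and average the spike outputs, scaled by
    the threshold, to recover approximately real-valued offsets."""
    inputs = [np.asarray(offset_current, dtype=float)] * T
    spikes = if_neuron_spikes(inputs, threshold)
    return spikes.mean(axis=0) * threshold

# A current of 0.375 over T=8 steps yields 3 spikes, so the rate-averaged
# offset recovers 3/8 = 0.375 exactly in this toy case.
print(average_spiking_offsets(np.array([0.375, 0.5]), T=8))
```

With a constant input and a soft-reset IF neuron, the averaged spike rate converges to the input value as T grows, which is why running the offset module for multiple timesteps and averaging can stand in for the DNN's real-valued offsets at short simulation times.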
ISSN: 2374-7390
DOI: 10.1109/SiPS52927.2021.00009