ConQ: Binary Quantization of Neural Networks via Concave Regularization

Bibliographic Details
Published in: 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1 - 6
Main Authors: Migliorati, Andrea, Fracastoro, Giulia, Fosson, Sophie, Bianchi, Tiziano, Magli, Enrico
Format: Conference Proceeding
Language: English
Published: IEEE, 22-09-2024
Description
Summary: The increasing demand for deep neural networks (DNNs) in resource-constrained systems propels interest in heavily quantized architectures such as networks with binarized weights. However, despite huge progress in the field, the gap with full-precision performance is far from closed. Today's most effective quantization methods are rooted in proximal gradient descent theory. In this work, we propose ConQ, a novel concave regularization approach to train effective DNNs with binarized weights. Motivated by theoretical investigation, we argue that the proposed concave regularizer, which removes the singularity point at 0, has a more effective shape than previously considered models in terms of accuracy and convergence rate. We present a theoretical convergence analysis of ConQ, with specific insights on both convex and non-convex settings. An extensive experimental evaluation shows that ConQ outperforms competing regularization methods in accuracy for networks with binarized weights.
ISSN: 2161-0371
DOI: 10.1109/MLSP58920.2024.10734837
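To give a concrete feel for the regularization idea described in the summary above, the following is a minimal sketch in PyTorch of a concave, binarization-promoting penalty added to a standard training loss. The surrogate 1 - w^2 used here is only an assumed stand-in that is smooth at 0 and vanishes at w = ±1; the exact ConQ regularizer, its weighting schedule, and the associated proximal-style update are defined in the paper itself.

import torch

def concave_binarization_penalty(params, lam=1e-4):
    # Illustrative concave penalty pushing each weight toward {-1, +1}.
    # The surrogate 1 - w^2 is zero at w = +/-1, maximal at w = 0, and
    # differentiable everywhere, so there is no singularity at the origin.
    # NOTE: assumed stand-in for exposition, not the exact ConQ regularizer.
    total = 0.0
    for p in params:
        w = p.clamp(-1.0, 1.0)            # keep weights inside the [-1, 1] box
        total = total + (1.0 - w ** 2).sum()
    return lam * total

# Typical usage inside a standard training step (sketch):
#   loss = criterion(model(x), y) + concave_binarization_penalty(model.parameters())
#   loss.backward()
#   optimizer.step()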