Optimized Inference for 1.58-bit LLMs: A Time and Memory-Efficient Algorithm for Binary and Ternary Matrix Multiplication
Main Authors:
Format: Journal Article
Language: English
Published: 09-11-2024
Summary: Despite their tremendous success and versatility, Large Language Models (LLMs) suffer from inference inefficiency while relying on advanced computational infrastructure. To address these challenges and make LLMs more accessible and cost-effective, in this paper, we propose algorithms to improve the inference time and memory efficiency of 1.58-bit LLMs with ternary weight matrices. Focusing in particular on matrix multiplication as the bottleneck operation of inference, we observe that, once trained, the weight matrices of a model no longer change. This allows us to preprocess these matrices and create indices that reduce the storage requirements by a logarithmic factor while enabling our efficient inference algorithms. Specifically, for an $n \times n$ weight matrix, our efficient algorithm guarantees a time complexity of $O(\frac{n^2}{\log n})$, a logarithmic-factor improvement over the standard $O(n^2)$ vector-matrix multiplication. Beyond the theoretical analysis, we conduct extensive experiments to evaluate the practical efficiency of our algorithms. Our results confirm the superiority of the approach with respect to both time and memory, as we observed reductions of up to 29x in inference time and up to 6x in memory usage.
DOI: 10.48550/arxiv.2411.06360
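To make the summary's main idea concrete, the sketch below shows one way a fixed ternary weight matrix can be preprocessed into per-block pattern indices so that each matrix-vector product costs roughly $O(\frac{n^2}{\log n})$ instead of $O(n^2)$: for every block of $b \approx \log_3 n$ columns, the partial sums of all $3^b$ possible ternary patterns are enumerated incrementally, and each row then needs only a single table lookup per block. This is a minimal NumPy illustration assuming a Four-Russians-style table-lookup scheme; the function names, block width, and data layout are illustrative assumptions, not the paper's actual algorithm or code.

```python
import numpy as np

def preprocess_ternary(W, block_width):
    """Offline step (weights are fixed after training): pack each row's
    ternary entries, one column block at a time, into a base-3 index.
    W must contain only values in {-1, 0, +1}."""
    n_rows, n_cols = W.shape
    n_blocks = (n_cols + block_width - 1) // block_width
    codes = np.zeros((n_blocks, n_rows), dtype=np.int64)
    for j in range(n_blocks):
        block = W[:, j * block_width:(j + 1) * block_width]
        weight = 1
        for col in block.T:          # first column = least-significant digit
            codes[j] += (col + 1) * weight
            weight *= 3
    return codes

def ternary_matvec(codes, x, block_width):
    """Online step: compute y = W @ x from the precomputed codes.
    For each column block, the partial sums of all 3^b ternary patterns are
    built incrementally (O(3^b) work), then each row does one table lookup."""
    n_blocks, n_rows = codes.shape
    y = np.zeros(n_rows)
    for j in range(n_blocks):
        seg = x[j * block_width:(j + 1) * block_width]
        sums = np.zeros(1)
        for v in seg:                # each step triples the table
            sums = np.concatenate((sums - v, sums, sums + v))
        y += sums[codes[j]]          # one lookup per row for this block
    return y

# Hypothetical usage: with block_width ~ log_3(n), the per-block table costs
# O(n), so the whole product costs O(n^2 / log n) after one-time preprocessing.
rng = np.random.default_rng(0)
n = 1024
W = rng.integers(-1, 2, size=(n, n))
x = rng.standard_normal(n)
codes = preprocess_ternary(W, block_width=6)
y = ternary_matvec(codes, x, block_width=6)
assert np.allclose(y, W @ x)
```

The incremental construction of `sums` is what keeps the per-block table cost at $O(3^b)$ rather than $O(b \cdot 3^b)$. The sketch stores one 64-bit integer per row per block, so it only loosely mirrors the storage reduction the abstract describes; the paper's own index structure may differ.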