Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward
Format: Journal Article
Language: English
Published: 02-02-2024
Summary: Despite the impressive performance of LLMs, their widespread adoption faces challenges due to substantial computational and memory requirements during inference. Recent advancements in model compression and system-level optimization methods aim to enhance LLM inference. This survey offers an overview of these methods, emphasizing recent developments. Through experiments on LLaMA(/2)-7B, we evaluate various compression techniques, providing practical insights for efficient LLM deployment in a unified setting. The empirical analysis on LLaMA(/2)-7B highlights the effectiveness of these methods. Drawing from survey insights, we identify current limitations and discuss potential future directions to improve LLM inference efficiency. We release the codebase to reproduce the results presented in this paper at https://github.com/nyunAI/Faster-LLM-Survey
DOI: 10.48550/arxiv.2402.01799
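
For context on the kind of compression technique the survey evaluates, below is a minimal sketch of post-training 4-bit weight quantization applied to LLaMA-2-7B using the Hugging Face transformers and bitsandbytes libraries. The model ID and configuration values are illustrative assumptions, not the paper's exact experimental setup; consult the linked codebase for the authors' actual configurations.

```python
# Minimal sketch: loading LLaMA-2-7B with 4-bit weight quantization,
# one representative compression technique of the kind the survey covers.
# Assumes `transformers`, `accelerate`, and `bitsandbytes` are installed;
# values below are illustrative, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # gated on the Hub; requires access approval

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4 bits on load
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization data type
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers across available GPUs/CPU
)

# Quick generation check on the quantized model.
inputs = tokenizer("Model compression reduces", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Quantizing weights to 4 bits cuts the memory footprint of the 7B model roughly fourfold relative to fp16, which is the kind of inference-efficiency trade-off the survey's empirical analysis examines.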