Privacy Risks of Speculative Decoding in Large Language Models
| Main Authors: | |
| --- | --- |
| Format: | Journal Article |
| Language: | English |
| Published: | 01-11-2024 |
| Subjects: | |
| Online Access: | Get full text |
| Summary: | Speculative decoding in large language models (LLMs) accelerates token generation by speculatively predicting multiple tokens cheaply and verifying them in parallel, and has been widely deployed. In this paper, we provide the first study demonstrating the privacy risks of speculative decoding. We observe that input-dependent patterns of correct and incorrect predictions can be leaked out to an adversary monitoring token generation times and packet sizes, leading to privacy breaches. By observing the pattern of correctly and incorrectly speculated tokens, we show that a malicious adversary can fingerprint queries and learn private user inputs with more than $90\%$ accuracy across three different speculative decoding techniques: REST (almost $100\%$ accuracy), LADE (up to $92\%$ accuracy), and BiLD (up to $95\%$ accuracy). We show that an adversary can also leak out confidential intellectual property used to design these techniques, such as data from data-stores used for prediction (in REST) at a rate of more than $25$ tokens per second, or even hyper-parameters used for prediction (in LADE). We also discuss mitigation strategies, such as aggregating tokens across multiple iterations and padding packets with additional bytes, to avoid such privacy or confidentiality breaches. |
| DOI: | 10.48550/arxiv.2411.01076 |
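
The summary describes an adversary who watches token generation times and packet sizes to recover the input-dependent pattern of correct and incorrect speculations. The Python sketch below is illustrative only and is not taken from the paper: it assumes a streaming server that flushes the tokens accepted in each verification iteration as one network chunk, and the `Chunk` type, the `bytes_per_token` constant, and the fingerprinting scheme are assumptions introduced here.

```python
# Hypothetical sketch: inferring a speculative-decoding acceptance pattern
# from the sizes of streamed response chunks. Not the paper's attack code.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Chunk:
    """One observed flush of a streamed response: arrival time (s) and size (bytes)."""
    arrival_time: float
    payload_bytes: int


def infer_acceptance_pattern(chunks: List[Chunk], bytes_per_token: int = 4) -> List[int]:
    """Estimate how many tokens each verification iteration emitted.

    A large chunk suggests several draft tokens passed verification at once;
    a minimal chunk suggests misspeculation (only one token emitted). The
    resulting sequence is input-dependent.
    """
    return [max(1, c.payload_bytes // bytes_per_token) for c in chunks]


def fingerprint(chunks: List[Chunk]) -> Tuple[int, ...]:
    """Reduce the acceptance pattern to a hashable fingerprint that can be
    matched against patterns pre-collected for a set of candidate queries."""
    return tuple(infer_acceptance_pattern(chunks))


if __name__ == "__main__":
    # Two hypothetical observed responses with different acceptance patterns.
    resp_a = [Chunk(0.00, 20), Chunk(0.03, 4), Chunk(0.06, 12)]
    resp_b = [Chunk(0.00, 4), Chunk(0.04, 4), Chunk(0.08, 4)]
    print(fingerprint(resp_a))  # (5, 1, 3)
    print(fingerprint(resp_b))  # (1, 1, 1)
```

In practice an observer would see encrypted packet sizes and inter-arrival gaps rather than exact token counts, but it is the same per-iteration acceptance signal that makes query fingerprinting possible.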
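The summary also lists mitigations: aggregating tokens across multiple iterations and padding packets with additional bytes. The sketch below illustrates those two ideas under the same streaming assumption as above; `ITERATIONS_PER_FLUSH`, `BLOCK_BYTES`, and `flush_schedule` are hypothetical names introduced here, not the paper's implementation.

```python
# Hypothetical mitigation sketch: batch accepted tokens across several
# verification iterations and pad every emitted packet, so that neither
# packet sizes nor flush timing reveals the per-iteration acceptance pattern.
from typing import Iterable, Iterator, List

ITERATIONS_PER_FLUSH = 4   # verification iterations aggregated per network flush (assumed value)
BLOCK_BYTES = 64           # pad every flushed packet up to a multiple of this size (assumed value)


def pad_to_block(payload: bytes, block: int = BLOCK_BYTES) -> bytes:
    """Pad the payload to the next multiple of `block` bytes so that packet
    sizes no longer reveal how many tokens were accepted."""
    remainder = len(payload) % block
    if remainder:
        payload += b"\x00" * (block - remainder)
    return payload


def flush_schedule(iteration_outputs: Iterable[List[str]]) -> Iterator[bytes]:
    """Buffer the tokens accepted in several iterations and emit them as one
    padded packet, hiding the per-iteration acceptance pattern."""
    buffer: List[str] = []
    for i, tokens in enumerate(iteration_outputs, start=1):
        buffer.extend(tokens)
        if i % ITERATIONS_PER_FLUSH == 0:
            yield pad_to_block("".join(buffer).encode("utf-8"))
            buffer = []
    if buffer:  # flush any remaining tokens at the end of generation
        yield pad_to_block("".join(buffer).encode("utf-8"))


if __name__ == "__main__":
    # Per-iteration accepted tokens (hypothetical): acceptance counts vary,
    # but every packet sent on the wire ends up the same length.
    iterations = [["The", " cat"], [" sat"], [" on", " the", " mat"], ["."]]
    for packet in flush_schedule(iterations):
        print(len(packet))  # 64
```

Aggregation coarsens the timing signal and padding coarsens the size signal; both trade some latency and bandwidth for reduced leakage.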