SLIP: Securing LLMs IP Using Weights Decomposition
| Main Authors: | |
|---|---|
| Format: | Journal Article |
| Language: | English |
| Published: | 15-07-2024 |

Summary:

Large language models (LLMs) have recently seen widespread adoption in both academia and industry. As these models grow, they become valuable intellectual property (IP), reflecting enormous investments by their owners. Moreover, the high cost of cloud-based deployment has driven interest toward deployment on edge devices, yet this risks exposing valuable parameters to theft and unauthorized use. Current methods for protecting models' IP on the edge are limited in practicality, incur accuracy loss, or fail to meet deployment requirements. In this paper, we introduce a novel hybrid inference algorithm, named SLIP, designed to protect edge-deployed models from theft. SLIP is the first hybrid protocol that is both practical for real-world applications and provably secure, with zero accuracy degradation and minimal impact on latency. It partitions the model between two computing resources: one secure but expensive, the other cost-effective but vulnerable. The partition is achieved through matrix decomposition, ensuring that the secure resource retains a maximally sensitive portion of the model's IP while performing a minimal amount of computation, and vice versa for the vulnerable resource. Importantly, the protocol includes security guarantees that prevent attackers from exploiting the partition to infer the secured information. Finally, we present experimental results that show the robustness and effectiveness of our method, positioning it as a compelling solution for protecting LLMs.
DOI: 10.48550/arxiv.2407.10886
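
The abstract's core mechanism, splitting each weight matrix between a secure and a vulnerable resource via matrix decomposition, can be illustrated with a short sketch. The snippet below is a hypothetical instantiation using a truncated SVD in NumPy; the function names (split_weights, hybrid_forward) and the choice of SVD are assumptions for illustration, not the paper's actual protocol, which additionally provides formal security guarantees for the partition.

```python
import numpy as np

def split_weights(W: np.ndarray, k: int):
    """Additively split W into a rank-k 'sensitive' factorization (A, B),
    kept on the secure resource, and a residual W_open deployed to the
    vulnerable edge device. Illustrative only: SLIP's actual decomposition
    and its security guarantees are specified in the paper."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * s[:k]   # (out, k), held on the secure resource
    B = Vt[:k, :]          # (k, in), held on the secure resource
    W_open = W - A @ B     # (out, in) residual, edge-deployed
    return A, B, W_open

def hybrid_forward(x, A, B, W_open):
    """Hybrid inference for y = x @ W.T: the edge device performs the bulk
    of the FLOPs on the residual, the secure resource performs only two
    small rank-k products, and the partial results are summed."""
    y_edge = x @ W_open.T        # heavy matmul on the vulnerable device
    y_secure = (x @ B.T) @ A.T   # O(k*(in+out)) work on the secure device
    return y_edge + y_secure

# The split is exact, so hybrid inference matches the original layer,
# consistent with the abstract's zero-accuracy-degradation claim.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
x = rng.standard_normal((4, 128))
A, B, W_open = split_weights(W, k=4)
assert np.allclose(hybrid_forward(x, A, B, W_open), x @ W.T)
```

Note that this sketch only demonstrates the exactness of the additive split and the asymmetric division of computation; on its own, an SVD residual may still leak information, which is why the paper pairs the decomposition with guarantees that attackers cannot reconstruct the secured portion from the edge-side partition.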