Systematically Analyzing Prompt Injection Vulnerabilities in Diverse LLM Architectures

Bibliographic Details
Main Authors: Benjamin, Victoria; Braca, Emily; Carter, Israel; Kanchwala, Hafsa; Khojasteh, Nava; Landow, Charly; Luo, Yi; Ma, Caroline; Magarelli, Anna; Mirin, Rachel; Moyer, Avery; Simpson, Kayla; Skawinski, Amelia; Heverin, Thomas
Format: Journal Article (preprint)
Language: English
Published: 2024-10-28
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Cryptography and Security; Computer Science - Learning
Online Access: https://arxiv.org/abs/2410.23308
Abstract: This study systematically analyzes the vulnerability of 36 large language models (LLMs) to various prompt injection attacks, a technique that leverages carefully crafted prompts to elicit malicious LLM behavior. Across 144 prompt injection tests, we observed a strong correlation between model parameters and vulnerability, with statistical analyses, such as logistic regression and random forest feature analysis, indicating that parameter size and architecture significantly influence susceptibility. Results revealed that 56 percent of tests led to successful prompt injections, emphasizing widespread vulnerability across various parameter sizes, with clustering analysis identifying distinct vulnerability profiles associated with specific model configurations. Additionally, our analysis uncovered correlations between certain prompt injection techniques, suggesting potential overlaps in vulnerabilities. These findings underscore the urgent need for robust, multi-layered defenses in LLMs deployed across critical infrastructure and sensitive industries. Successful prompt injection attacks could result in severe consequences, including data breaches, unauthorized access, or misinformation. Future research should explore multilingual and multi-step defenses alongside adaptive mitigation strategies to strengthen LLM security in diverse, real-world environments.
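The abstract names logistic regression, random-forest feature analysis, and clustering as its statistical tools. The sketch below is a minimal illustration of that style of analysis on synthetic stand-in data only: the feature set (log parameter count, context-window size, an instruction-tuning flag), the 36-model-by-4-technique profile matrix, and all outcomes are hypothetical placeholders, not the paper's data or results.

```python
# Illustrative sketch of the analysis style the abstract describes:
# relating model configuration to prompt injection success.
# All data here is synthetic; nothing below reproduces the paper's results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per injection test (144 tests).
# Columns: log10(parameter count), context window (k-tokens),
# instruction-tuned flag (0/1).
n_tests = 144
X = np.column_stack([
    rng.uniform(8, 12, n_tests),   # log10 params, ~100M to ~1T
    rng.uniform(2, 128, n_tests),  # context window in k-tokens
    rng.integers(0, 2, n_tests),   # instruction-tuned?
])

# Hypothetical binary outcome: 1 = injection succeeded.
# Success is biased toward smaller models, echoing the reported trend.
p = 1.0 / (1.0 + np.exp(X[:, 0] - 10))
y = (rng.uniform(size=n_tests) < p).astype(int)

# Logistic regression: signed effect of each feature on success odds.
logit = LogisticRegression().fit(X, y)
print("logistic coefficients:", logit.coef_[0])

# Random forest: nonlinear feature-importance ranking.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("feature importances:", forest.feature_importances_)

# Clustering (k-means here) can then group models by their per-technique
# success-rate profiles to surface distinct vulnerability clusters.
profiles = rng.uniform(size=(36, 4))  # hypothetical: 36 models x 4 techniques
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print("cluster assignments:", labels)
```

The signed logistic coefficients indicate direction of effect, the forest importances rank features without assuming linearity, and the k-means step groups models with similar per-technique success profiles, matching the three roles the abstract assigns these methods.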
DOI: 10.48550/arXiv.2410.23308
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)