Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG
Main Authors:
Format: Journal Article
Language: English
Published: 30-09-2024
Online Access: Get full text
Summary: This paper presents new methods with the potential to improve the efficiency of privacy processes using LLMs and RAG. To reduce hallucination, we continually pre-train the base LLM on a privacy-specific knowledge base and then augment it with a semantic RAG layer. Our evaluations demonstrate that this approach improves model performance on privacy-related queries (with some metrics as much as doubled compared to the out-of-the-box LLM) by grounding responses in factual information, which reduces inaccuracies.
DOI: 10.48550/arxiv.2410.02825
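As a rough illustration of the retrieve-then-ground step the abstract describes, here is a minimal, self-contained Python sketch. It is not the paper's implementation: a toy bag-of-words embedding and cosine similarity stand in for the semantic encoder, the small KB_DOCS list stands in for the privacy-specific knowledge base, and the names (embed, cosine, ground_query) are hypothetical.

```python
# Minimal sketch of a semantic-RAG grounding step, under the assumptions
# stated above. The toy bag-of-words embedding is a stand-in for a real
# semantic encoder, and the continually-pretrained LLM call is omitted.
import math
from collections import Counter

# Stand-in for the privacy-specific knowledge base (illustrative content).
KB_DOCS = [
    "Data subjects may request deletion of their personal data.",
    "Personal data must be processed under a documented lawful basis.",
    "Data retention periods should be defined and enforced.",
]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ground_query(query: str, k: int = 2) -> str:
    """Retrieve the k most similar KB passages and build a grounded prompt."""
    q = embed(query)
    ranked = sorted(KB_DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(f"- {d}" for d in ranked[:k])
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # In the paper's setting, this grounded prompt would be passed to the
    # continually-pretrained LLM rather than printed.
    print(ground_query("Can a user ask us to delete their data?"))
```

Running the script prints a prompt whose context is restricted to the retrieved passages; grounding the model's answer in such retrieved facts is the mechanism the abstract credits for reducing hallucination.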