Improving Factuality by Contrastive Decoding with Factual and Hallucination Prompts


Bibliographic Details
Published in:Sensors (Basel, Switzerland) Vol. 24; no. 21; p. 7097
Main Authors: Lv, Bojie, Feng, Ao, Xie, Chenlong
Format: Journal Article
Language:English
Published: Basel MDPI AG 01-11-2024
Subjects:
Description
Summary: Large language models have demonstrated impressive capabilities in many domains. However, they sometimes generate irrelevant or nonsensical text, or produce outputs that deviate from the provided input, an occurrence commonly referred to as hallucination. To mitigate this issue, we introduce a novel decoding method that incorporates both factual and hallucination prompts (DFHP). It applies contrastive decoding to highlight the disparity in output probabilities between factual prompts and hallucination prompts. Experiments on both multiple-choice and text generation tasks show that our approach significantly improves the factual accuracy of large language models without additional training. On the TruthfulQA dataset, the DFHP method significantly improves the factual accuracy of the LLaMA model, with an average improvement of 6.4% across the 7B, 13B, 30B, and 65B versions. Its high factual accuracy makes it an ideal choice for high-reliability tasks such as medical diagnosis and legal cases.
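The contrastive-decoding idea described in the abstract can be sketched as follows. This is a minimal illustration of the general contrastive-decoding form (scoring each candidate token by the gap between the distribution conditioned on a factual prompt and the one conditioned on a hallucination prompt); the exact DFHP scoring rule, prompts, and hyperparameters are not given in this record, so the formula, the `alpha` weight, and the toy logits below are assumptions for illustration only.

```python
import math

def softmax(logits):
    # Convert raw logits to a probability distribution (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def contrastive_scores(factual_logits, halluc_logits, alpha=0.5):
    """Score each token by log p_factual - alpha * log p_halluc.

    Tokens that the hallucination-prompted model favors are demoted,
    so generation is steered toward tokens the factual prompt supports.
    (Standard contrastive-decoding form; DFHP's exact rule may differ.)
    """
    p_fact = softmax(factual_logits)
    p_hall = softmax(halluc_logits)
    return [math.log(pf) - alpha * math.log(ph)
            for pf, ph in zip(p_fact, p_hall)]

# Toy 4-token vocabulary: both prompts favor token 2, but the
# hallucination prompt favors it far more strongly, so the contrast
# demotes it and a token backed mainly by the factual prompt wins.
fact_logits = [1.0, 0.5, 2.0, 0.2]
hall_logits = [0.1, 0.2, 3.5, 0.1]
scores = contrastive_scores(fact_logits, hall_logits)
best = max(range(len(scores)), key=lambda i: scores[i])
```

With these toy numbers, token 2 (the one the hallucination prompt strongly prefers) is penalized and token 0 ends up with the highest contrastive score, illustrating how the disparity between the two distributions redirects the choice.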
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ISSN:1424-8220
DOI:10.3390/s24217097