Properties and Challenges of LLM-Generated Explanations
Main Authors:
Format: Journal Article
Language: English
Published: 16-02-2024
Subjects:
Online Access: Get full text
Summary: The self-rationalising capabilities of large language models (LLMs) have been
explored in restricted settings, using task-specific data sets. However,
current LLMs do not (only) rely on specifically annotated data; nonetheless,
they frequently explain their outputs. The properties of the generated
explanations are influenced by the pre-training corpus and by the target data
used for instruction fine-tuning. As the pre-training corpus includes a large
amount of human-written explanations "in the wild", we hypothesise that LLMs
adopt common properties of human explanations. By analysing the outputs for a
multi-domain instruction fine-tuning data set, we find that generated
explanations show selectivity and contain illustrative elements, but are less
frequently subjective or misleading. We discuss the reasons for and consequences of
the presence or absence of these properties. In particular, we outline positive and
negative implications depending on the goals and user groups of the
self-rationalising system.
DOI: 10.48550/arxiv.2402.10532