Evaluating statistical language models as pragmatic reasoners
Main Authors:
Format: Journal Article
Language: English
Published: 01-05-2023
Summary: The relationship between communicated language and intended meaning is often probabilistic and sensitive to context. Numerous strategies attempt to estimate such a mapping, often leveraging recursive Bayesian models of communication. In parallel, large language models (LLMs) have been increasingly applied to semantic parsing applications, tasked with inferring logical representations from natural language. While existing LLM explorations have been largely restricted to literal language use, in this work, we evaluate the capacity of LLMs to infer the meanings of pragmatic utterances. Specifically, we explore the case of threshold estimation on the gradable adjective "strong", contextually conditioned on a strength prior, then extended to composition with qualification, negation, polarity inversion, and class comparison. We find that LLMs can derive context-grounded, human-like distributions over the interpretations of several complex pragmatic utterances, yet struggle when composing with negation. These results inform the inferential capacity of statistical language models and their use in pragmatic and semantic parsing applications. All corresponding code is made publicly available (https://github.com/benlipkin/probsem/tree/CogSci2023).
DOI: 10.48550/arxiv.2305.01020
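
As a rough illustration of the recursive Bayesian threshold model the summary refers to, the sketch below implements a standard RSA-style pragmatic listener for the gradable adjective "strong", in the spirit of Lassiter and Goodman's threshold semantics. The degree grid, Gaussian strength prior, rationality parameter, and utterance cost are illustrative assumptions, not values taken from the paper; the authors' actual code is in the linked repository.

```python
import numpy as np

# A minimal sketch of an RSA-style threshold model for "strong".
# All numeric settings below are illustrative assumptions.

degrees = np.linspace(0.0, 1.0, 60)                 # candidate strength degrees
prior = np.exp(-0.5 * ((degrees - 0.5) / 0.15) ** 2)
prior /= prior.sum()                                # contextual strength prior P(d)

thetas = degrees                                    # candidate thresholds for "strong"
ALPHA, COST = 4.0, 1.0                              # rationality and utterance cost

def L0(theta):
    """Literal listener P(d | "strong", theta): prior truncated above theta."""
    p = (degrees >= theta) * prior
    return p / p.sum() if p.sum() > 0 else np.zeros_like(p)

def S1(theta):
    """Speaker P("strong" | d, theta) vs. staying silent: softmax of utility."""
    with np.errstate(divide="ignore"):
        u_strong = np.log(L0(theta)) - COST         # informativity minus cost
        u_null = np.log(prior)                      # silence leaves the prior intact
    scores = np.exp(ALPHA * np.stack([u_strong, u_null]))
    return scores[0] / scores.sum(axis=0)           # P("strong" | d, theta)

# Pragmatic listener: joint posterior over (d, theta) given "strong",
# with a uniform threshold prior, marginalized to a posterior over degrees.
joint = np.array([S1(t) * prior for t in thetas])
posterior_d = joint.sum(axis=0)
posterior_d /= posterior_d.sum()

print("Prior mean strength:    ", (prior * degrees).sum())
print("Posterior mean strength:", (posterior_d * degrees).sum())
```

Running the sketch shows the posterior mean strength shifting above the prior mean after hearing "strong", which is the qualitative behavior the summary describes as a context-grounded interpretation under a strength prior.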