Shapley Idioms: Analysing BERT Sentence Embeddings for General Idiom Token Identification

Bibliographic Details
Published in: Frontiers in Artificial Intelligence, Vol. 5, p. 813967
Main Authors: Nedumpozhimana, Vasudevan; Klubička, Filip; Kelleher, John D.
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Media S.A., 14-03-2022
Description
Summary: This article examines the basis of natural language understanding in transformer-based language models such as BERT. It does this through a case study on idiom token classification. We use idiom token identification as the basis for our analysis because of the variety of information types that have previously been explored in the literature for this task, including topic, lexical, and syntactic features. This variety of relevant information types means that the task of idiom token identification enables us to explore the forms of linguistic information that a BERT language model captures and encodes in its representations. The core of this article presents three experiments. The first experiment analyzes the effectiveness of BERT sentence embeddings for creating a general idiom token identification model, and the results indicate that BERT sentence embeddings outperform Skip-Thought embeddings. In the second and third experiments we use the game-theoretic concept of Shapley values to rank individual idiomatic expressions by their usefulness for model training, and we use this ranking to analyze the type of information that the model finds useful. We find that a combination of idiom-intrinsic and topic-based properties contributes to an expression's usefulness in idiom token identification. Overall, our results indicate that BERT efficiently encodes a variety of information types, from topic through lexical to syntactic information. Based on these results, we argue that, notwithstanding recent criticisms of language-model-based semantics, the ability of BERT to efficiently encode a variety of linguistic information types does represent a significant step forward in natural language understanding.
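The Shapley-value ranking described in the summary can be approximated in a few dozen lines. The sketch below is not the authors' released code: it assumes HuggingFace `transformers`, `torch`, and `scikit-learn`, uses mean pooling as one common way to derive a BERT sentence embedding, and treats `data`, `X_val`, and `y_val` as hypothetical placeholders for a labeled idiom-token corpus grouped by source expression.

```python
# Minimal sketch (not the authors' code): estimate a Shapley value for each
# idiomatic expression, treating expressions as "players" and the validation
# accuracy of an idiom-token classifier as the coalition utility function.
import random
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(sentences):
    """Mean-pooled BERT sentence embeddings (one common pooling choice).
    data["X"] and X_val below would be produced with this function."""
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state            # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1).type_as(hidden)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def utility(coalition, data, X_val, y_val):
    """v(S): validation accuracy of a classifier trained only on sentences
    whose source expression is in the coalition S (chance level otherwise)."""
    rows = [i for i, e in enumerate(data["expression"]) if e in coalition]
    labels = [data["label"][i] for i in rows]
    if len(set(labels)) < 2:                              # cannot fit a classifier
        return 0.5
    clf = LogisticRegression(max_iter=1000).fit(data["X"][rows], labels)
    return accuracy_score(y_val, clf.predict(X_val))

def shapley_values(expressions, data, X_val, y_val, n_perms=200, seed=0):
    """Monte Carlo Shapley estimate: average each expression's marginal
    contribution to v over random orderings of the full expression set."""
    rng = random.Random(seed)
    phi = {e: 0.0 for e in expressions}
    for _ in range(n_perms):
        order = list(expressions)
        rng.shuffle(order)
        coalition, prev = set(), 0.5                      # v(empty set) = chance
        for e in order:
            coalition.add(e)
            cur = utility(coalition, data, X_val, y_val)
            phi[e] += cur - prev
            prev = cur
    return {e: total / n_perms for e, total in phi.items()}
```

Sorting expressions by their estimated value yields the kind of per-expression usefulness ranking the second and third experiments analyze; exact enumeration over all coalitions is replaced here by Monte Carlo permutation sampling, the standard approximation when the number of players is large.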
Edited by: Yorick Wilks, Florida Institute for Human and Machine Cognition, United States
Reviewed by: Kenneth Ward Church, Baidu, United States; Parisa Kordjamshidi, Michigan State University, United States
This article was submitted to Language and Computation, a section of the journal Frontiers in Artificial Intelligence
ISSN: 2624-8212
DOI: 10.3389/frai.2022.813967