Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning
Main Authors: Oscar Sainz, Itziar Gonzalez-Dios, Oier Lopez de Lacalle, Bonan Min, Eneko Agirre
Format: Journal Article
Language: English
Published: 03-05-2022
Online Access: Get full text
Summary: Recent work has shown that NLP tasks such as Relation Extraction (RE) can be recast as Textual Entailment tasks using verbalizations, with strong performance in zero-shot and few-shot settings thanks to pre-trained entailment models. The fact that relations in current RE datasets are easily verbalized casts doubt on whether entailment would be effective in more complex tasks. In this work we show that entailment is also effective in Event Argument Extraction (EAE), reducing the need for manual annotation to 50% and 20% in ACE and WikiEvents respectively, while achieving the same performance as with full training. More importantly, we show that recasting EAE as entailment alleviates the dependency on schemas, which has been a roadblock for transferring annotations between domains. Thanks to entailment, multi-source transfer between ACE and WikiEvents further reduces the annotation needed to 10% and 5% (respectively) of that required for full training without transfer. Our analysis shows that the key to good results is the use of several entailment datasets to pre-train the entailment model. As in previous approaches, our method requires a small amount of manual verbalization effort: less than 15 minutes per event argument type, and comparable results are achieved by users with different levels of expertise.
DOI: 10.48550/arxiv.2205.01376
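
To make the recasting described in the summary concrete, here is a minimal sketch, not the authors' released code: it scores hand-written role verbalizations with an off-the-shelf NLI checkpoint. The model name, example sentence, and templates are illustrative assumptions; the paper's own setup pre-trains on several entailment datasets rather than a single one.

```python
# Minimal sketch (not the authors' released code) of recasting Event
# Argument Extraction as Textual Entailment: each candidate (role, span)
# pair is verbalized into a hypothesis and scored with an NLI model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: any NLI-pretrained checkpoint can stand in here; the paper's
# key finding is that pre-training on several entailment datasets works best.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Return the probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: (contradiction, neutral, entailment).
    return logits.softmax(dim=-1)[0, 2].item()

# Hypothetical example: a sentence, a candidate span, and hand-written
# verbalization templates (the manual step the abstract estimates at
# under 15 minutes per event argument type).
sentence = "The rebels attacked the convoy near Kabul on Tuesday."
candidate = "Kabul"
templates = {
    "Conflict.Attack:Place": f"The attack took place in {candidate}.",
    "Conflict.Attack:Attacker": f"{candidate} carried out the attack.",
}
for role, hypothesis in templates.items():
    print(f"{role}: {entailment_prob(sentence, hypothesis):.3f}")
```

In a complete system, every candidate span would presumably be scored against the templates of every role in the event schema, keeping roles whose entailment probability clears a threshold; the zero-shot setting uses the entailment model as-is, while the few-shot setting fine-tunes it on the small annotated fractions the abstract reports.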