AILAB-Udine@SMM4H 22: Limits of Transformers and BERT Ensembles

Bibliographic Details
Main Authors: Portelli, Beatrice; Scaboro, Simone; Chersoni, Emmanuele; Santus, Enrico; Serra, Giuseppe
Format: Journal Article
Language: English
Published: 07-09-2022
Description
Summary: This paper describes the models developed by the AILAB-Udine team for the SMM4H 22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6 and 10. The main takeaways from participating in the different tasks are the overwhelming positive effect of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.
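The summary highlights ensemble learning over different architectures as a key takeaway. The record does not specify the ensembling strategy used, but a common baseline for combining classifiers is majority voting over per-example predictions; the sketch below illustrates that idea with hypothetical labels (the function name, models, and "ADE"/"NoADE" labels are illustrative assumptions, not taken from the paper).

```python
from collections import Counter

def majority_vote(predictions):
    """Combine label predictions from several models by majority vote.

    `predictions` is a list of label sequences, one per model, all of
    the same length (one label per input example). Returns the most
    frequent label at each position.
    """
    combined = []
    for labels in zip(*predictions):
        # Counter.most_common(1) yields the single most frequent label
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical classifiers disagreeing on five example texts
model_a = ["ADE", "NoADE", "ADE", "NoADE", "ADE"]
model_b = ["ADE", "ADE", "ADE", "NoADE", "NoADE"]
model_c = ["NoADE", "ADE", "ADE", "NoADE", "ADE"]

print(majority_vote([model_a, model_b, model_c]))
# → ['ADE', 'ADE', 'ADE', 'NoADE', 'ADE']
```

With an odd number of voters over two labels, every position has a strict majority, so the vote is unambiguous.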
DOI:10.48550/arxiv.2209.03452