Mask-guided BERT for few-shot text classification

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 610, p. 128576
Main Authors: Liao, Wenxiong, Liu, Zhengliang, Dai, Haixing, Wu, Zihao, Zhang, Yiyang, Huang, Xiaoke, Chen, Yuzhong, Jiang, Xi, Liu, David, Zhu, Dajiang, Li, Sheng, Liu, Wei, Liu, Tianming, Li, Quanzheng, Cai, Hongmin, Li, Xiang
Format: Journal Article
Language: English
Published: Elsevier B.V., 28-12-2024
Description
Summary: Transformer-based language models have achieved significant success in various domains. However, the data-intensive nature of the transformer architecture requires large amounts of labeled data, which is challenging in low-resource scenarios (i.e., few-shot learning (FSL)). The main challenge of FSL is the difficulty of training robust models on small numbers of samples, which frequently leads to overfitting. Here we present Mask-BERT, a simple and modular framework to help BERT-based architectures tackle FSL. The proposed approach fundamentally differs from existing FSL strategies such as prompt tuning and meta-learning. The core idea is to selectively apply masks on text inputs and filter out irrelevant information, which guides the model to focus on discriminative tokens that influence prediction results. In addition, to make the text representations from different categories more separable and the text representations from the same category more compact, we introduce a contrastive learning loss function. Experimental results on open-domain and medical-domain datasets demonstrate the effectiveness of Mask-BERT. Code and data are available at: github.com/WenxiongLiao/mask-bert

Highlights:
• A simple and modular framework, "Mask-BERT", for few-shot text classification.
• Mask-BERT utilizes selective masking of text inputs to filter out irrelevant information.
• Mask-BERT guides the model towards relevant information to enhance few-shot learning.
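The abstract describes two mechanisms: selective masking of uninformative input tokens, and a contrastive loss that compacts same-class text representations while separating different-class ones. The following is a minimal PyTorch sketch of both ideas for orientation only; the saliency scores, keep_ratio, and mask_token_id (BERT's [MASK] id) are illustrative assumptions, and the paper's exact procedure (see the linked repository) may differ.

    import torch
    import torch.nn.functional as F

    def mask_irrelevant_tokens(input_ids, saliency, keep_ratio=0.5, mask_token_id=103):
        # Keep the top-k most salient tokens per sequence and replace the
        # rest with [MASK]. The saliency scores are assumed inputs (e.g.,
        # from an attribution method); this is a sketch, not the paper's
        # exact masking recipe.
        k = max(1, int(input_ids.size(1) * keep_ratio))
        top_idx = saliency.topk(k, dim=1).indices                # (batch, k)
        keep = torch.zeros_like(input_ids, dtype=torch.bool)
        keep.scatter_(1, top_idx, True)
        return torch.where(keep, input_ids,
                           torch.full_like(input_ids, mask_token_id))

    def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
        # Standard supervised contrastive form: pull same-class embeddings
        # together, push different-class ones apart. The paper's loss may
        # differ in detail.
        z = F.normalize(embeddings, dim=1)
        sim = z @ z.t() / temperature                            # pairwise similarity
        self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(self_mask, float('-inf'))          # exclude self-pairs
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
        return per_anchor.mean()

    # Toy usage with random tensors standing in for tokenized inputs and
    # BERT [CLS] embeddings.
    input_ids = torch.randint(1000, 2000, (4, 16))
    masked_ids = mask_irrelevant_tokens(input_ids, saliency=torch.rand(4, 16))
    loss = supervised_contrastive_loss(torch.randn(4, 768),
                                       torch.tensor([0, 0, 1, 1]))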
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2024.128576