Paradigm Shift in Natural Language Processing

Bibliographic Details
Published in: International Journal of Automation and Computing, Vol. 19, No. 3, pp. 169-183
Main Authors: Sun, Tian-Xiang; Liu, Xiang-Yang; Qiu, Xi-Peng; Huang, Xuan-Jing
Format: Journal Article
Language: English
Published: Heidelberg: Springer Nature B.V., 01-06-2022
Description
Summary: In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. For example, the sequence labeling paradigm is usually adopted to solve a bundle of tasks such as POS tagging, named entity recognition (NER), and chunking, while the classification paradigm is adopted for tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift: solving an NLP task in a new paradigm by reformulating the task. Paradigm shift has achieved great success on many tasks and is becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model to handle diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.
ISSN: 1476-8186, 1751-8520
DOI: 10.1007/s11633-022-1331-6
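
To make the abstract's notion of "reformulating the task" concrete, below is a minimal, self-contained Python sketch. It is not drawn from the paper; the example sentence, the BIO tags, and both helper functions are illustrative assumptions. It contrasts NER under the sequence labeling paradigm, where supervision is one BIO tag per token, with a text-to-text reformulation, where the same supervision becomes a target string for a generative model to produce.

# Illustrative sketch (hypothetical data and helpers, not the paper's method):
# the same NER supervision expressed in two paradigms.

tokens = ["Barack", "Obama", "was", "born", "in", "Hawaii"]

# Sequence labeling paradigm: the target is a tag sequence aligned to tokens.
bio_tags = ["B-PER", "I-PER", "O", "O", "O", "B-LOC"]

def bio_to_spans(tokens, tags):
    """Decode BIO tags into (entity_text, entity_type) spans."""
    spans, start, ent_type = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((" ".join(tokens[start:i]), ent_type))
                start, ent_type = None, None
        if tag.startswith("B-"):
            start, ent_type = i, tag[2:]
    return spans

def spans_to_seq2seq_target(spans):
    """Reformulate the same supervision as a generation target string."""
    return " ; ".join(f"{text} is a {etype}" for text, etype in spans)

spans = bio_to_spans(tokens, bio_tags)
print(spans)                           # [('Barack Obama', 'PER'), ('Hawaii', 'LOC')]
print(spans_to_seq2seq_target(spans))  # Barack Obama is a PER ; Hawaii is a LOC

Under the text-to-text formulation, only the target string format distinguishes NER from, say, sentiment analysis, which is one way a single generative model can be trained to handle diverse tasks, as the abstract's point about unification suggests.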