Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data
Main Authors:
Format: Journal Article
Language: English
Published: 22-10-2020
Summary: Unsupervised Data Augmentation (UDA) is a semi-supervised technique
that applies a consistency loss to penalize differences between a model's
predictions on (a) observed (unlabeled) examples and (b) corresponding
'noised' examples produced via data augmentation. While UDA has gained
popularity for text classification, open questions linger over which design
decisions are necessary and over how to extend the method to sequence
labeling tasks. In this paper, we re-examine UDA and demonstrate its efficacy
on several sequential tasks. Our main contribution is an empirical study of
UDA that establishes which components of the algorithm confer benefits in
NLP. Notably, although prior work has emphasized clever augmentation
techniques such as back-translation, we find that enforcing consistency
between predictions assigned to observed and randomly substituted words often
yields benefits comparable to (or greater than) those of these complex
perturbation models. Furthermore, we find that applying UDA's consistency
loss affords meaningful gains without any unlabeled data at all, i.e., in a
standard supervised setting. In short: UDA need not be unsupervised, and does
not require complex data augmentation to be effective.
DOI: 10.48550/arxiv.2010.11966
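
As a concrete illustration of the objective the summary describes, a minimal
PyTorch sketch of a UDA-style consistency loss with naive (random word
substitution) augmentation might look as follows. The model interface,
vocabulary size, substitution probability, and the choice to stop gradients
through the clean branch are all assumptions made for illustration, not the
authors' released implementation.

```python
# Minimal sketch of a UDA-style consistency loss with naive augmentation.
# All names (model, vocab_size, subst_prob) are hypothetical stand-ins;
# the exact noising scheme and gradient treatment are assumptions.
import torch
import torch.nn.functional as F

def random_word_substitution(token_ids: torch.Tensor, vocab_size: int,
                             subst_prob: float = 0.1) -> torch.Tensor:
    """Replace each token with a uniformly sampled vocabulary item
    with probability subst_prob (the 'naive' augmentation)."""
    noised = token_ids.clone()
    mask = torch.rand(token_ids.shape, device=token_ids.device) < subst_prob
    noised[mask] = torch.randint(0, vocab_size, (int(mask.sum()),),
                                 device=token_ids.device)
    return noised

def consistency_loss(model, token_ids: torch.Tensor,
                     vocab_size: int) -> torch.Tensor:
    """KL divergence between the model's predictions on observed text and
    on a randomly substituted copy. Gradients are blocked on the clean
    branch, a common choice in consistency training (assumed here)."""
    with torch.no_grad():
        clean_probs = F.softmax(model(token_ids), dim=-1)
    noised_log_probs = F.log_softmax(
        model(random_word_substitution(token_ids, vocab_size)), dim=-1)
    return F.kl_div(noised_log_probs, clean_probs, reduction="batchmean")
```

Per the paper's second finding, this term need not be computed on unlabeled
data: adding a weighted consistency_loss on the labeled batch to the usual
supervised cross-entropy gives the purely supervised variant the summary
alludes to (the weighting coefficient is, again, a hypothetical
hyperparameter).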