TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech
Main Authors:
Format: Journal Article
Language: English
Published: 04-08-2021
Summary: IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 29, 2021. We introduce a self-supervised speech pre-training method called TERA, which stands for Transformer Encoder Representations from Alteration. Recent approaches often learn with a single auxiliary task such as contrastive prediction, autoregressive prediction, or masked reconstruction. Unlike previous methods, we use alteration along three orthogonal axes to pre-train Transformer Encoders on a large amount of unlabeled speech. The model learns by reconstructing acoustic frames from their altered counterparts, where a stochastic policy alters the input along three dimensions: time, frequency, and magnitude. TERA can be used for speech representation extraction or fine-tuned with downstream models. We evaluate TERA on several downstream tasks, including phoneme classification, keyword spotting, speaker recognition, and speech recognition. In a large-scale comparison of self-supervised models, TERA achieves strong performance, improving upon surface features and outperforming previous models. In our experiments, we study the effect of applying different alteration techniques, pre-training on more data, and pre-training on various input features. We analyze different model sizes and find that smaller models are stronger representation learners than larger models, while larger models are more effective for downstream fine-tuning. Furthermore, we show that the proposed method is transferable to downstream datasets not used in pre-training.
DOI: 10.48550/arxiv.2007.06028
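
The summary above describes the pre-training objective at a high level: a stochastic policy alters input spectrograms along the time, frequency, and magnitude axes, and the model is trained to reconstruct the original acoustic frames from the altered input. Below is a minimal illustrative sketch of such an alteration policy and a reconstruction loss restricted to altered positions; the masking probabilities, block widths, and noise scale are assumptions for illustration, not the values or implementation used in the paper.

```python
# Hypothetical sketch of a TERA-style alteration policy.
# All hyperparameters below are illustrative assumptions, not the paper's values.
import numpy as np

def alter(spec, time_mask_prob=0.15, time_width=7,
          freq_width=8, noise_prob=0.1, noise_scale=0.2, rng=None):
    """Apply stochastic time, frequency, and magnitude alteration to a
    (num_frames, num_mels) spectrogram. Returns the altered copy and a
    boolean mask marking altered frames (used to restrict the loss)."""
    rng = rng or np.random.default_rng()
    x = spec.copy()
    T, F = x.shape
    altered = np.zeros(T, dtype=bool)

    # Time alteration: zero out random contiguous blocks of frames.
    t = 0
    while t < T:
        if rng.random() < time_mask_prob:
            end = min(t + time_width, T)
            x[t:end, :] = 0.0
            altered[t:end] = True
            t = end
        else:
            t += 1

    # Frequency alteration: zero out one random block of mel channels.
    f0 = rng.integers(0, max(F - freq_width, 1))
    x[:, f0:f0 + freq_width] = 0.0

    # Magnitude alteration: add Gaussian noise to a random subset of frames.
    noisy = rng.random(T) < noise_prob
    x[noisy, :] += rng.normal(0.0, noise_scale, size=(int(noisy.sum()), F))
    altered |= noisy

    return x, altered

def reconstruction_loss(pred, target, altered):
    """L1 reconstruction loss computed only on altered frames."""
    if not altered.any():
        return 0.0
    return float(np.abs(pred[altered] - target[altered]).mean())
```

Restricting the loss to altered positions prevents the encoder from trivially copying its input, which mirrors the masked-reconstruction idea sketched in the summary; the actual model, training schedule, and alteration parameters are specified in the paper itself.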