Multi-task self-supervised time-series representation learning
Published in: Information Sciences, Vol. 671, p. 120654
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier Inc, 01-06-2024
Summary: Time-series representation learning is crucial for extracting meaningful representations from time-series data with temporal dynamics and sparse labels. Contrastive learning, a powerful technique for exploiting inherent data patterns, has been applied to explore the diverse consistencies in time-series data through careful selection of contrastive pairs and design of appropriate losses. Encouraging such consistency is essential for acquiring comprehensive representations of time-series data. In this paper, we propose a new framework for time-series representation learning that combines the advantages of contextual, temporal, and transformation consistencies. This framework enables the network to learn general representations suitable for different tasks and domains. First, positive and negative pairs are generated to establish a multi-task learning setup. Then, contrastive losses are formulated to explore contextual, temporal, and transformation consistencies, which are jointly optimized to learn general time-series representations. In addition, we investigate an uncertainty weighting approach to enhance the effectiveness of multi-task learning. To evaluate the performance of our framework, we conduct experiments on three downstream tasks: time-series classification, forecasting, and anomaly detection. The experimental results demonstrate the superior performance of our framework compared to benchmark models across different tasks. Furthermore, our framework shows efficiency in cross-domain transfer learning scenarios.
• A new multi-task self-supervised time-series representation learning framework is proposed.
• Our method efficiently integrates contrastive learning approaches for contextual, temporal, and transformation consistency.
• The proposed method can learn general representations for time-series classification, forecasting, and anomaly detection.
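The abstract describes jointly optimizing contextual, temporal, and transformation contrastive losses under an uncertainty weighting scheme, but the record does not give the exact formulation. The following is a minimal sketch, not the authors' code: it assumes InfoNCE-style contrastive losses for each consistency task and a Kendall-style homoscedastic uncertainty weighting for combining them; all names, shapes, and the toy inputs are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE loss: matching rows of `anchor` and `positive` are the
    positive pairs; all other rows in the batch act as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


class UncertaintyWeightedSum(nn.Module):
    """Combine task losses as sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is learned per task (a hypothetical stand-in
    for the paper's uncertainty weighting approach)."""

    def __init__(self, num_tasks: int = 3):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = torch.zeros((), device=self.log_vars.device)
        for s, loss in zip(self.log_vars, losses):
            total = total + torch.exp(-s) * loss + s
        return total


# Hypothetical usage: each tuple holds encoder embeddings of two "views"
# of the same batch, one tuple per consistency task (contextual,
# temporal, transformation).
B, D = 32, 128
views = [(torch.randn(B, D), torch.randn(B, D)) for _ in range(3)]
combine = UncertaintyWeightedSum(num_tasks=3)
task_losses = [info_nce(a, p) for a, p in views]
total_loss = combine(task_losses)
total_loss.backward()  # gradients also flow into the learned task weights
```

Under this assumed weighting, each learned s_i scales down its task loss while the additive s_i term penalizes making every weight arbitrarily small, so the balance among the three consistency objectives is learned rather than hand-tuned.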
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2024.120654