In recent years, self-supervised contrastive learning (SSCL) has delivered remarkable improvements in representation learning across domains such as natural language processing and computer vision. By leveraging self-supervision, SSCL enables the pre-training of representation models on vast amounts of unlabeled data. Despite these advances, there remains a significant gap in understanding how different SSCL strategies affect time series forecasting performance, and what specific benefits SSCL can bring. This paper addresses these gaps through a comprehensive analysis of the effectiveness of various training variables, including different SSCL algorithms, learning strategies, model architectures, and their interplay. Additionally, to gain deeper insight into the improvements SSCL brings to time series forecasting, we perform a qualitative analysis of the empirical receptive field. Our experiments demonstrate that end-to-end training of a Transformer model with a combination of Mean Squared Error (MSE) loss and SSCL is the most effective approach for time series forecasting. Notably, the contrastive objective enables the model to prioritize information more pertinent to forecasting, such as scale and periodic relationships. These findings contribute to a better understanding of the benefits of SSCL in time series forecasting and provide valuable insights for future research in this area.
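The end-to-end objective described above combines a forecasting loss with a contrastive term. As an illustration only, here is a minimal NumPy sketch of such a joint loss, pairing MSE with an InfoNCE-style contrastive loss; the function names, the weighting hyperparameter `lam`, and the temperature are hypothetical and not taken from the paper:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss: z1[i] and z2[i] are embeddings of
    two views of the same series; positives sit on the diagonal."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize rows
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # (B, B) cosine similarities
    # cross-entropy against the diagonal (each row's own pair is the positive)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def joint_loss(pred, target, z1, z2, lam=0.5):
    """End-to-end objective: MSE forecasting loss plus a weighted
    contrastive term (lam is a hypothetical trade-off hyperparameter)."""
    mse = np.mean((pred - target) ** 2)
    return mse + lam * info_nce(z1, z2)
```

In an actual training loop, `pred` would come from the forecasting head and `z1`, `z2` from the encoder applied to two augmentations of the input series, with both terms backpropagated jointly.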
BibTeX
@misc{zhang2023constitutes,
  title={What Constitutes Good Contrastive Learning in Time-Series Forecasting?},
  author={Chiyu Zhang and Qi Yan and Lili Meng and Tristan Sylvain},
  year={2023},
  eprint={2306.12086},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Related Research
- Unsupervised Event Outlier Detection in Continuous Time
  S. Nath, K. Y. C. Lui, and S. Liu. Workshop at Conference on Neural Information Processing Systems (NeurIPS)
- LLM-TS Integrator: Integrating LLM for Enhanced Time Series Modeling
  C. Chen, G. Oliveira, H. Sharifi, and T. Sylvain. Workshop at Conference on Neural Information Processing Systems (NeurIPS)
- Inference, Fast and Slow: Reinterpreting VAEs for OOD Detection
  S. Huang, J. He, and K. Y. C. Lui. Workshop at Conference on Neural Information Processing Systems (NeurIPS)