Pretext training followed by task-specific fine-tuning has been a successful approach in the vision and language domains. This paper proposes a self-supervised pretext training framework tailored to event sequence data. We introduce novel auxiliary (pretext) tasks, based on detecting misalignment between event times and types, that encourage the network to learn the coupling between the two: a previously untapped source of label-free self-supervision. These pretext tasks yield foundational representations that generalize across downstream tasks, including next-event prediction for temporal point process models, event sequence classification, and missing event interpolation. Experiments on popular public benchmarks demonstrate the potential of the proposed method across different tasks and data domains.
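To give a concrete sense of what a misalignment-detection pretext task could look like, the minimal PyTorch sketch below corrupts sequences by shuffling event types relative to their timestamps and trains a small encoder to tell aligned from misaligned sequences. This is an illustrative sketch under those assumptions only; the names (MisalignmentDetector, make_misaligned) and architecture are hypothetical and not taken from the paper.

# Hypothetical sketch of a misalignment-detection pretext task for event sequences.
# Assumes PyTorch; all module and variable names are illustrative, not the paper's code.
import torch
import torch.nn as nn

def make_misaligned(times, types):
    """Return a corrupted copy where event types are permuted across positions,
    breaking the coupling between event times and types."""
    perm = torch.randperm(types.size(1))
    return times, types[:, perm]

class MisalignmentDetector(nn.Module):
    """Toy sequence encoder with a binary head: is this (time, type) sequence aligned?"""
    def __init__(self, num_types, d_model=32):
        super().__init__()
        self.type_emb = nn.Embedding(num_types, d_model)
        self.time_proj = nn.Linear(1, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, times, types):
        x = self.type_emb(types) + self.time_proj(times.unsqueeze(-1))
        _, h = self.encoder(x)                # final hidden state summarizes the sequence
        return self.head(h[-1]).squeeze(-1)   # logit: aligned vs. misaligned

# Illustrative pretext-training step on synthetic data.
model = MisalignmentDetector(num_types=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

times = torch.sort(torch.rand(8, 20), dim=1).values   # batch of increasing event times
types = torch.randint(0, 10, (8, 20))                  # corresponding event types

neg_times, neg_types = make_misaligned(times, types)
logits = torch.cat([model(times, types), model(neg_times, neg_types)])
labels = torch.cat([torch.ones(8), torch.zeros(8)])    # 1 = aligned, 0 = misaligned

opt.zero_grad()
loss = loss_fn(logits, labels)
loss.backward()
opt.step()

After pretext training of this kind, the encoder's representations could then be fine-tuned on the downstream tasks mentioned above (next-event prediction, classification, interpolation).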
BibTeX
@inproceedings{wang2024selfsupervised,
  title={Self-Supervised Pretext Tasks for Event Sequence Data from Detecting Misalignment},
  author={Yimu Wang and He Zhao and Ruizhi Deng and Frederick Tung and Greg Mori},
  booktitle={NeurIPS 2024 Workshop: Self-Supervised Learning – Theory and Practice},
  year={2024},
  url={https://openreview.net/forum?id=zc101QzXtw}
}
Related Research
- Unsupervised Event Outlier Detection in Continuous Time. S. Nath, K. Y. C. Lui, and S. Liu. Workshop at Conference on Neural Information Processing Systems (NeurIPS).
- LLM-TS Integrator: Integrating LLM for Enhanced Time Series Modeling. C. Chen, G. Oliveira, H. Sharifi, and T. Sylvain. Workshop at Conference on Neural Information Processing Systems (NeurIPS).
- Inference, Fast and Slow: Reinterpreting VAEs for OOD Detection. S. Huang, J. He, and K. Y. C. Lui. Workshop at Conference on Neural Information Processing Systems (NeurIPS).