In this work we describe OMEN, a neural-ODE-based normalizing flow that predicts marginal distributions at flexible evaluation horizons, and apply it to agent position forecasting. OMEN's architecture embeds the assumption that the marginal distributions of a given agent at successive times are related, yielding an efficient representation of marginal distributions through time and enabling reliable interpolation between the prediction horizons seen in training. Experiments on a popular agent forecasting dataset show significant improvements over most baseline approaches and performance comparable to the state of the art, while adding the new capability of reliably interpolating predicted marginal distributions between prediction horizons, as demonstrated on synthetic data.
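To make the core mechanism concrete, below is a minimal, self-contained PyTorch sketch of a conditional continuous normalizing flow in which the ODE integration time plays the role of the prediction horizon, so intermediate horizons are reached by simply stopping the integration earlier or later. This is an illustration under our own assumptions (the names HorizonFlow and sample_at_horizon, a fixed-step Euler integrator, and an exact trace term are ours), not the authors' implementation.

# Minimal sketch (not the authors' released code): a conditional continuous
# normalizing flow in the spirit of OMEN, where the ODE integration time plays
# the role of the prediction horizon. Class/method names and the fixed-step
# Euler integrator are illustrative assumptions, not details from the paper.
import torch
import torch.nn as nn


class HorizonFlow(nn.Module):
    """Maps base-distribution samples to the marginal at horizon t via an ODE."""

    def __init__(self, dim=2, hidden=64, context_dim=16):
        super().__init__()
        self.dim = dim
        # Dynamics network f(z, t, context) defining dz/dt.
        self.dynamics = nn.Sequential(
            nn.Linear(dim + 1 + context_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def _f(self, z, t, context):
        t_col = t.expand(z.shape[0], 1)
        return self.dynamics(torch.cat([z, t_col, context], dim=-1))

    def sample_at_horizon(self, context, horizon, n_steps=20):
        """Euler-integrate latent samples from t=0 to t=horizon.

        Because one ODE trajectory passes through every intermediate time,
        marginals at horizons between those seen in training are obtained by
        stopping the integration earlier or later.
        """
        z = torch.randn(context.shape[0], self.dim)   # base distribution
        log_det = torch.zeros(context.shape[0])       # change-of-variables term
        dt = horizon / n_steps
        t = torch.zeros(1)
        for _ in range(n_steps):
            z = z.requires_grad_(True)
            f = self._f(z, t, context)
            # Exact trace of df/dz for the instantaneous change of variables
            # (a Hutchinson estimator would replace this loop in higher dims).
            trace = torch.zeros(z.shape[0])
            for i in range(self.dim):
                grad_i = torch.autograd.grad(f[:, i].sum(), z, create_graph=True)[0]
                trace = trace + grad_i[:, i]
            z = (z + dt * f).detach()                 # forward Euler step
            log_det = log_det - dt * trace.detach()
            t = t + dt
        return z, log_det


# Usage: sample one agent's predicted position marginal at an arbitrary
# horizon, e.g. 2.5 s, given a (placeholder) encoding of its past trajectory.
flow = HorizonFlow()
context = torch.zeros(8, 16)
samples, log_det = flow.sample_at_horizon(context, horizon=torch.tensor(2.5))
print(samples.shape, log_det.shape)   # torch.Size([8, 2]) torch.Size([8])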
Bibtex
@inproceedings{radovic2021agent,
  title={Agent Forecasting at Flexible Horizons using {ODE} Flows},
  author={Alexander Radovic and Jiawei He and Janahan Ramanan and Marcus A Brubaker and Andreas Lehrmann},
  booktitle={ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models},
  year={2021},
  url={https://openreview.net/forum?id=MvjsWTCfXpA}
}
Related Research
- What Constitutes Good Contrastive Learning in Time-Series Forecasting? C. Zhang, Q. Yan, L. Meng, and T. Sylvain. (Research)
- RBC Borealis at International Conference on Learning Representations (ICLR): Machine Learning for a better financial future. Learning And Generalization; Natural Language Processing; Time Series Modelling. (Research)
- Self-Supervised Time Series Representation Learning with Temporal-Instance Similarity Distillation. A. Hajimoradlou, L. Pishdad, F. Tung, and M. Karpusha. Workshop at International Conference on Machine Learning (ICML). (Publications)