Neural Processes (NPs) are popular meta-learning methods that estimate predictive uncertainty on target datapoints by conditioning on a context dataset. The previous state-of-the-art method, Transformer Neural Processes (TNPs), achieves strong performance but requires computation quadratic in the number of context datapoints, significantly limiting its scalability. Conversely, existing sub-quadratic NP variants perform significantly worse than TNPs. To tackle this issue, we propose Latent Bottlenecked Attentive Neural Processes (LBANPs), a new computationally efficient, sub-quadratic NP variant whose querying computational complexity is independent of the number of context datapoints. The model encodes the context dataset into a constant number of latent vectors, on which self-attention is performed. When making predictions, the model retrieves higher-order information from the context dataset via multiple cross-attention mechanisms on the latent vectors. We empirically show that LBANPs achieve results competitive with the state-of-the-art on meta-regression, image completion, and contextual multi-armed bandits. We demonstrate that LBANPs can trade off computational cost against performance according to the number of latent vectors. Finally, we show that LBANPs scale beyond existing attention-based NP variants to larger dataset settings.
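As a concrete illustration of the architecture described above, the following is a minimal PyTorch sketch of the latent-bottleneck idea: a fixed set of learned latent vectors cross-attends to the context embeddings and is refined with self-attention, and target points retrieve information by cross-attending only to those latents. Module names, layer counts, and the output parameterization are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of a latent-bottlenecked attentive NP-style model (illustrative only).
# Hyperparameters, layer counts, and the output head are assumptions, not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentBottleneckNP(nn.Module):
    def __init__(self, x_dim, y_dim, dim=128, num_latents=8, num_layers=2, num_heads=4):
        super().__init__()
        # Fixed-size set of learned latent vectors: the "bottleneck".
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        # Embed (x, y) context pairs and x-only target queries.
        self.context_embed = nn.Linear(x_dim + y_dim, dim)
        self.target_embed = nn.Linear(x_dim, dim)
        # Encoder: cross-attention (latents attend to context) then self-attention over latents.
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_layers)]
        )
        self.self_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_layers)]
        )
        # Decoder: targets retrieve information from the latents, then predict mean and scale.
        self.query_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2 * y_dim))

    def forward(self, x_context, y_context, x_target):
        B = x_context.size(0)
        ctx = self.context_embed(torch.cat([x_context, y_context], dim=-1))  # (B, N, dim)
        lat = self.latents.unsqueeze(0).expand(B, -1, -1)                    # (B, L, dim)
        # Encoding cost scales with N * L (L is a constant), not N^2.
        for cross, sa in zip(self.cross_attn, self.self_attn):
            lat = lat + cross(lat, ctx, ctx, need_weights=False)[0]
            lat = lat + sa(lat, lat, lat, need_weights=False)[0]
        # Querying touches only the L latents, so its cost is independent of N.
        tgt = self.target_embed(x_target)                                    # (B, M, dim)
        tgt = tgt + self.query_attn(tgt, lat, lat, need_weights=False)[0]
        mean, raw_std = self.head(tgt).chunk(2, dim=-1)
        std = 0.01 + 0.99 * F.softplus(raw_std)  # common NP-style positive-scale parameterization
        return mean, std  # parameters of the predictive distribution at the targets

Because the number of latents L is a constant hyperparameter, encoding costs O(N * L) for N context points and querying costs O(M * L) for M targets, independent of N, which is the source of the sub-quadratic scaling and constant-cost querying highlighted in the abstract.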
BibTeX
@misc{https://doi.org/10.48550/arxiv.2211.08458,
  doi       = {10.48550/ARXIV.2211.08458},
  url       = {https://arxiv.org/abs/2211.08458},
  author    = {Feng, Leo and Hajimirsadeghi, Hossein and Bengio, Yoshua and Ahmed, Mohamed Osama},
  keywords  = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {Latent Bottlenecked Attentive Neural Processes},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
Related Research
- Scaleformer: Iterative Multi-scale Refining Transformers for Time Series Forecasting. M. Amin Shabani, A. Abdi, L. Meng, and T. Sylvain. International Conference on Learning Representations (ICLR).
- Efficient Queries Transformer Neural Processes. L. Feng, H. Hajimirsadeghi, Y. Bengio, and M. O. Ahmed. Workshop at Conference on Neural Information Processing Systems (NeurIPS).