Modern foundation model architectures rely on attention mechanisms to effectively capture context. However, these methods require linear or quadratic memory in the number of inputs/data points, limiting their applicability in low-compute domains. In this work, we propose the Constant Memory Attention Block (CMAB), a novel general-purpose attention block that computes its output in constant memory and performs updates in constant computation. Highlighting CMAB's efficacy, we introduce methods for Neural Processes and Temporal Point Processes. Empirically, we show our proposed methods achieve results competitive with state-of-the-art while being significantly more memory efficient.
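To make the constant-memory claim concrete, below is a minimal sketch (not the authors' code) of the general idea of cross-attending a fixed number of latent vectors to a stream of inputs while keeping only online softmax accumulators, so memory stays O(num_latents × dim) regardless of how many inputs have been processed. The class name, shapes, and parameters (`StreamingCrossAttention`, `num_latents`, `dim`) are illustrative assumptions, not the paper's CMAB implementation.

```python
# Illustrative sketch: constant-memory cross-attention over a stream of inputs.
# A fixed set of latent queries attends to incoming chunks; only running
# softmax accumulators are stored, so memory does not grow with stream length.
import numpy as np


class StreamingCrossAttention:
    def __init__(self, num_latents: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Fixed latent queries; in practice these would be learned parameters.
        self.latents = rng.normal(size=(num_latents, dim))   # shape (L, d)
        # Running accumulators per latent (online-softmax state).
        self.num = np.zeros((num_latents, dim))              # sum_i exp(s_li - m_l) * v_i
        self.den = np.zeros(num_latents)                     # sum_i exp(s_li - m_l)
        self.max_score = np.full(num_latents, -np.inf)       # m_l, for numerical stability
        self.scale = 1.0 / np.sqrt(dim)

    def update(self, keys: np.ndarray, values: np.ndarray) -> None:
        """Absorb a non-empty chunk of inputs; memory stays constant in stream length."""
        scores = self.latents @ keys.T * self.scale          # (L, chunk)
        new_max = np.maximum(self.max_score, scores.max(axis=1))
        # Rescale old accumulators to the new max (exp(-inf - finite) = 0 on first call).
        correction = np.exp(self.max_score - new_max)
        weights = np.exp(scores - new_max[:, None])          # (L, chunk)
        self.num = self.num * correction[:, None] + weights @ values
        self.den = self.den * correction + weights.sum(axis=1)
        self.max_score = new_max

    def output(self) -> np.ndarray:
        """Current softmax-weighted attention output per latent, shape (L, d)."""
        return self.num / self.den[:, None]


# Usage: stream 10 chunks of 32 inputs; the stored state never grows.
rng = np.random.default_rng(1)
attn = StreamingCrossAttention(num_latents=8, dim=16)
for _ in range(10):
    x = rng.normal(size=(32, 16))
    attn.update(keys=x, values=x)
print(attn.output().shape)  # (8, 16)
```

Because the attention output is a softmax-weighted sum over inputs, it can be maintained with per-latent numerator/denominator accumulators and a running max, which is what makes constant-memory updates possible in this style of block.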
BibTeX
@misc{feng2023constant,
      title={Constant Memory Attention Block},
      author={Leo Feng and Frederick Tung and Hossein Hajimirsadeghi and Yoshua Bengio and Mohamed Osama Ahmed},
      year={2023},
      eprint={2306.12599},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Related Research
- Ranking Regularization for Critical Rare Classes: Minimizing False Positives at a High True Positive Rate. M. Kiarash, H. Zhao, M. Zhai, and F. Tung. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- Meta Temporal Point Processes. W. Bae, M. O. Ahmed, F. Tung, and G. Oliveira. International Conference on Learning Representations (ICLR).
- Self-Supervised Time Series Representation Learning with Temporal-Instance Similarity Distillation. A. Hajimoradlou, L. Pishdad, F. Tung, and M. Karpusha. Workshop at the International Conference on Machine Learning (ICML).