We present substantial evidence demonstrating the benefits of integrating Large Language Models (LLMs) with a Contextual Multi-Armed Bandit framework. Contextual bandits are widely used in recommendation systems to generate personalized suggestions based on user-specific contexts. We show that LLMs, pre-trained on extensive corpora rich in human knowledge and preferences, can simulate human behaviours well enough to jump-start contextual multi-armed bandits and thereby reduce online learning regret. We propose an initialization algorithm for contextual bandits that prompts LLMs to produce a pre-training dataset of approximate human preferences, which significantly reduces both online learning regret and the cost of gathering data to train such models. We validate our approach empirically through two sets of experiments with different bandit setups: one in which an LLM serves as an oracle, and a real-world experiment using data from a conjoint survey.
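To make the jump-starting idea concrete, here is a minimal illustrative sketch (not the paper's exact algorithm): a standard LinUCB contextual bandit is pre-trained on synthetic preference labels before any real interaction. The function `fake_llm_preference` is a hypothetical stand-in for prompting an LLM about a user context; in the paper, these labels would come from LLM-generated approximations of human preferences.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 5, 3  # context dimension, number of arms
true_theta = rng.normal(size=(K, D))  # hidden per-arm reward weights

def fake_llm_preference(context):
    """Hypothetical stand-in for an LLM judging which arm a user prefers."""
    return int(np.argmax(true_theta @ context + rng.normal(scale=0.1, size=K)))

class LinUCB:
    """Standard LinUCB with per-arm ridge-regression estimates."""
    def __init__(self, d, k, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(d) for _ in range(k)]    # per-arm Gram matrices
        self.b = [np.zeros(d) for _ in range(k)]  # per-arm reward vectors

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            # mean estimate plus upper-confidence bonus
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Jump start: pre-train on LLM-labelled contexts before any real interaction.
bandit = LinUCB(D, K)
for _ in range(200):
    x = rng.normal(size=D)
    preferred = fake_llm_preference(x)
    bandit.update(preferred, x, reward=1.0)

# Online phase: the warm-started bandit begins with informed estimates,
# which is what reduces early online regret in this setup.
x = rng.normal(size=D)
arm = bandit.select(x)
```

The design choice here is that pre-training only touches the bandit's sufficient statistics (`A`, `b`), so the same online update rule runs unchanged afterwards; swapping in a real LLM only changes where the preference labels come from.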
Bibtex
@misc{alamdari2024jumpstartingbanditsllmgenerated,
  title={Jump Starting Bandits with LLM-Generated Prior Knowledge},
  author={Parand A. Alamdari and Yanshuai Cao and Kevin H. Wilson},
  year={2024},
  eprint={2406.19317},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2406.19317},
}
Related Research
- Training foundation models up to 10x more efficiently with Memory-Mapped Datasets
T. Badamdorj and M. Anand.
Research
- DeepRRTime: Robust Time-series Forecasting with a Regularized INR Basis
C.S. Sastry, M. Gilany, K. Y. C. Lui, M. Magill, and A. Pashevich. Transactions on Machine Learning Research (TMLR)
Publications
- Radar: Fast Long-Context Decoding for Any Transformer
Y. Hao, M. Zhai, H. Hajimirsadeghi, S. Hosseini, and F. Tung. International Conference on Learning Representations (ICLR)
Publications