Reinforcement learning (RL) has achieved many successes when agents learn autonomously. This paper and accompanying talk consider how to make use of a non-technical human participant, when one is available. In particular, we consider the cases where a human could 1) provide demonstrations of good behavior, 2) provide online evaluative feedback, or 3) define a curriculum of tasks for the agent to learn on. In all cases, our work has shown that such information can be effectively leveraged. After giving a high-level overview of this work, we will highlight a set of open questions and suggest where future work could be most usefully focused.
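To make the second kind of human input more concrete, the sketch below shows one common way online evaluative feedback can be folded into standard RL: a tabular Q-learning agent whose environment reward is shaped by a (here simulated) human approval signal. This is a minimal illustration in the spirit of this line of work, not the paper's algorithm; the toy chain environment, the human_feedback stand-in, and the weighting constant BETA are assumptions made for the example.

# Minimal sketch (illustrative, not the paper's method): Q-learning with a
# simulated human evaluative signal added to the environment reward.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON, BETA = 0.1, 0.95, 0.1, 0.5  # BETA weights the human signal
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]  # move left / right on a small chain

Q = defaultdict(float)

def human_feedback(state, action):
    """Stand-in for an online human trainer: approves moves toward the goal."""
    return 1.0 if action == +1 else -1.0

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # shape the environment reward with the (simulated) human feedback
        shaped = reward + BETA * human_feedback(state, action)
        target = shaped + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

The same skeleton extends naturally to the other two kinds of input: demonstrations can be used to initialize the value function or policy before autonomous learning begins, and a curriculum simply orders a sequence of such tasks from easy to hard.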
BibTeX
@InProceedings{TaylorIJCAI18,
  title     = {Improving Reinforcement Learning with Human Input},
  author    = {Matthew E. Taylor},
  booktitle = {Proceedings of the 27th International Joint Conference on Artificial Intelligence ({IJCAI})},
  year      = {2018}
}
Related Research
- Our NeurIPS 2021 Reading List. Y. Cao, K. Y. C. Lui, T. Durand, J. He, P. Xu, N. Mehrasa, A. Radovic, A. Lehrmann, R. Deng, A. Abdi, M. Schlegel, and S. Liu.
- Heterogeneous Multi-task Learning with Expert Diversity. G. Oliveira and F. Tung.
- Desired characteristics for real-world RL agents. P. Hernandez-Leal and Y. Gao.