June 5, 2017
UCB exploration via Q-ensembles

Abstract
We show how an ensemble of Q*-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well-established algorithms from the bandit setting and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.
- Exploration & Games
- Learning Paradigms
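The core idea in the abstract can be sketched in a few lines: maintain an ensemble of Q-value estimates and act optimistically, adding a bonus proportional to the ensemble's disagreement (a stand-in for the UCB term). The snippet below is a minimal illustrative sketch, not the paper's implementation; the function name `ucb_action`, the coefficient `lam`, and the use of the ensemble standard deviation as the uncertainty bonus are assumptions for illustration.

```python
import numpy as np

def ucb_action(q_values: np.ndarray, lam: float = 1.0) -> int:
    """Pick an action via a UCB-style rule over a Q-ensemble.

    q_values: array of shape (ensemble_size, num_actions) holding each
    ensemble member's Q-value estimates for the current state.
    lam: hypothetical exploration coefficient scaling the uncertainty bonus.
    """
    mean = q_values.mean(axis=0)   # empirical mean over ensemble members
    std = q_values.std(axis=0)     # disagreement as an uncertainty proxy
    # Optimistic choice: exploit the mean, explore where members disagree.
    return int(np.argmax(mean + lam * std))

# Example: 10 ensemble members, 4 actions in the current state.
rng = np.random.default_rng(0)
q = rng.normal(size=(10, 4))
print(ucb_action(q, lam=1.0))
```

With `lam = 0` this reduces to greedy action selection on the ensemble mean; larger values of `lam` push the agent toward actions the ensemble is uncertain about.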