
Hindsight Experience Replay

OpenAI · Publication · July 5, 2017



Abstract

Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary, and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum.

We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
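The core idea described above — replaying each transition with a goal that was actually achieved later in the episode, so that sparse binary rewards become informative — can be sketched as follows. This is a minimal illustration under stated assumptions, not OpenAI's implementation: the transition layout, the reward function, and the "future" relabeling strategy with `k=4` substitute goals per step are simplifications for this sketch.

```python
import random
from collections import namedtuple

# Each transition remembers the goal it was collected under,
# so a copy of it can be relabeled with a different goal later.
Transition = namedtuple("Transition", "state action next_state goal reward done")

def binary_reward(achieved, goal):
    # Sparse binary reward: 0 when the goal is reached, -1 otherwise.
    return 0.0 if achieved == goal else -1.0

def her_relabel(episode, k=4):
    """Return the episode's transitions plus k relabeled copies per step.

    Uses a 'future'-style strategy: for each step, sample states achieved
    later in the same episode and pretend they were the intended goal.
    """
    augmented = list(episode)
    for t, tr in enumerate(episode):
        future = episode[t:]  # transitions from this step onward
        for _ in range(k):
            new_goal = random.choice(future).next_state  # hindsight goal
            reward = binary_reward(tr.next_state, new_goal)
            augmented.append(
                tr._replace(goal=new_goal, reward=reward, done=(reward == 0.0))
            )
    return augmented

# Usage: an episode that never reaches its original goal (all rewards -1)
# still yields relabeled transitions with reward 0, which is what makes
# off-policy learning from sparse binary rewards sample-efficient.
episode = [Transition(s, 0, s + 1, 5, -1.0, False) for s in range(3)]
augmented = her_relabel(episode, k=4)
```

The relabeled transitions would then be pushed into the replay buffer of any off-policy algorithm (e.g. DQN or DDPG) alongside the originals; nothing about the underlying learner needs to change.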

  • Learning Paradigms


Related articles

  • Scaling Laws For Reward Model Overoptimization — Publication, Oct 19, 2022
  • Untitled related article (card image: a scene from Minecraft) — Jun 23, 2022
  • Untitled related article (card image: a group of people behind a panel) — Publication, Dec 13, 2019


Source

OpenAI News - openai.com

View the original publication