
Scaling laws for reward model overoptimization



October 19, 2022


Abstract

In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much can hinder ground truth performance, in accordance with Goodhart's law. This effect has been frequently observed, but not carefully measured due to the expense of collecting human preference data. In this work, we use a synthetic setup in which a fixed "gold-standard" reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-n sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. We explore the implications of these empirical results for theoretical considerations in AI alignment.
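For reference, the "different functional form depending on the method of optimization" mentioned in the abstract refers to relationships of roughly the following shape, sketched here from the paper's results. Writing d for the square root of the KL divergence between the optimized policy and the initial policy, and treating the alpha and beta coefficients as fitted per reward model size:

```latex
% Scaling relationships reported in the paper, where
%   d := \sqrt{ D_{\mathrm{KL}}\!\left( \pi \,\|\, \pi_{\mathrm{init}} \right) }
% measures optimization pressure, and \alpha, \beta are fitted coefficients
% that scale smoothly with the number of reward model parameters.
R_{\mathrm{bon}}(d) = d \left( \alpha_{\mathrm{bon}} - \beta_{\mathrm{bon}} \, d \right)   % best-of-n sampling
R_{\mathrm{RL}}(d)  = d \left( \alpha_{\mathrm{RL}}  - \beta_{\mathrm{RL}} \log d \right)  % reinforcement learning
```

For best-of-n sampling, the KL from the initial policy has a closed form (log n − (n − 1)/n nats), so d is a deterministic function of n; in the RL setup, KL instead grows over the course of training.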
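To make best-of-n sampling concrete, below is a minimal, self-contained Python sketch under toy assumptions, not the paper's code: `gold_reward` and `proxy_reward` are hypothetical stand-ins for the gold-standard and proxy reward models, and the "policy" is just a unit Gaussian. It illustrates the Goodhart effect the abstract describes: selecting the proxy-best of n candidates stops improving, and can degrade, the gold score once the proxy's error dominates.

```python
# Minimal sketch of best-of-n sampling against an imperfect proxy reward
# model. gold_reward and proxy_reward are toy stand-ins, not the paper's
# reward models.
import random

random.seed(0)

def gold_reward(sample: float) -> float:
    # Stand-in for the fixed "gold-standard" reward model that plays the
    # role of human preferences in the paper's synthetic setup.
    return -abs(sample - 1.0)

def proxy_reward(sample: float) -> float:
    # Imperfect proxy: correlated with the gold reward, plus noise.
    # Optimizing it hard ends up selecting for the noise (Goodhart's law).
    return gold_reward(sample) + random.gauss(0.0, 0.5)

def best_of_n_gold_score(n: int) -> float:
    # Draw n candidates from a fixed "policy" (a unit Gaussian here), keep
    # the one the proxy scores highest, and report its gold score.
    candidates = [random.gauss(0.0, 1.0) for _ in range(n)]
    best = max(candidates, key=proxy_reward)
    return gold_reward(best)

if __name__ == "__main__":
    trials = 500
    for n in (1, 4, 16, 64, 256):
        mean_gold = sum(best_of_n_gold_score(n) for _ in range(trials)) / trials
        print(f"n={n:4d}  mean gold reward = {mean_gold:+.3f}")
```

Raising the proxy's noise level plays the role of a smaller or worse-trained proxy reward model; the paper studies this gap as a function of reward model parameters and dataset size rather than injected noise.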

Topics

  • Learning Paradigms

Authors

Leo Gao, John Schulman, Jacob Hilton

Related articles

  • AI-Written Critiques Help Humans Notice Flaws (Publication, Jun 13, 2022)
  • Aligning Language Models to Follow Instructions (Publication, Jan 27, 2022)


Source

OpenAI News - openai.com

View the original publication