
On first-order meta-learning algorithms

OpenAI · Publication · March 8, 2018

The RSS feed provided only an excerpt. FlowMarket retrieved the publicly available content from the original page, without bypassing restricted content.


Abstract

This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. We analyze a family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates. This family includes and generalizes first-order MAML, an approximation to MAML obtained by ignoring second-order derivatives. It also includes Reptile, a new algorithm that we introduce here, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. We expand on the results from Finn et al. showing that first-order meta-learning algorithms perform well on some well-established benchmarks for few-shot classification, and we provide theoretical analysis aimed at understanding why these algorithms work.
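The abstract describes Reptile's loop in one sentence: repeatedly sample a task, train on it with ordinary gradient descent, then move the initialization towards the task-trained weights. A minimal sketch of that outer loop is below, using a toy quadratic "task" as a stand-in for a real few-shot problem; the functions `sample_task` and `train_on_task` and all hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sample_task(rng):
    """Toy task: a random target vector. A stand-in for sampling a
    few-shot learning problem from the task distribution."""
    return rng.normal(size=5)

def train_on_task(weights, target, inner_steps=10, inner_lr=0.1):
    """Inner loop: plain SGD on the quadratic loss ||w - target||^2.
    Only first-order gradients are used."""
    w = weights.copy()
    for _ in range(inner_steps):
        grad = 2.0 * (w - target)  # gradient of the quadratic loss
        w -= inner_lr * grad
    return w

def reptile(meta_steps=1000, outer_lr=0.1, seed=0):
    """Reptile outer loop: nudge the initialization phi towards the
    weights obtained by training on each sampled task."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(5)  # meta-learned initialization
    for _ in range(meta_steps):
        target = sample_task(rng)
        w = train_on_task(phi, target)
        # The Reptile update: interpolate towards the task-trained
        # weights. No second derivatives are ever computed.
        phi += outer_lr * (w - phi)
    return phi

phi = reptile()
```

Note that the outer update `phi += outer_lr * (w - phi)` is what distinguishes Reptile from first-order MAML: it uses the displacement of the weights after inner training as the meta-gradient direction, rather than the gradient evaluated at the adapted parameters.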

  • Learning Paradigms



Source

OpenAI News - openai.com

View the original publication