On first-order meta-learning algorithms

Publication: March 8, 2018

Abstract

This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. We analyze a family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates. This family includes and generalizes first-order MAML, an approximation to MAML obtained by ignoring second-order derivatives. It also includes Reptile, a new algorithm that we introduce here, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. We expand on the results from Finn et al. showing that first-order meta-learning algorithms perform well on some well-established benchmarks for few-shot classification, and we provide theoretical analysis aimed at understanding why these algorithms work.

Topics

  • Learning Paradigms

Authors

Alex Nichol, Joshua Achiam, John Schulman

Source

OpenAI News - openai.com
