
Scaling laws for neural language models


The RSS feed provided only an excerpt. FlowMarket retrieved the publicly available content from the original page, without bypassing restricted content.

January 23, 2020


Abstract

We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
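The power-law relationship described in the abstract can be sketched as a simple function of model size. A minimal illustration, assuming the functional form L(N) = (N_c / N)^α_N from the paper; the constants below are the approximate values it reports for non-embedding parameters, and analogous forms hold for dataset size and compute:

```python
def loss_vs_model_size(n_params, n_c=8.8e13, alpha_n=0.076):
    """Predicted cross-entropy loss as a power law in model size N."""
    return (n_c / n_params) ** alpha_n

# A power law means each doubling of model size shrinks the predicted
# loss by the same constant factor, 2 ** -alpha_n, regardless of scale.
ratio = loss_vs_model_size(2e9) / loss_vs_model_size(1e9)
print(ratio)  # equals 2 ** -0.076
```

On a log-log plot this function is a straight line with slope -α_N, which is why the trends in the paper span many orders of magnitude so cleanly.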

  • Compute Scaling




Source

OpenAI News - openai.com

View the original publication