Google AI Blog

Here’s how our TPUs power increasingly demanding AI workloads.


Behind the Google products you use every day are custom chips designed for one job: doing math at massive scale. They're called TPUs, or Tensor Processing Units.

We designed TPUs from the ground up more than a decade ago specifically to run AI models. AI models take an enormous amount of math to work, and TPUs do that math extremely quickly: the newest generation of TPUs delivers 121 exaflops of compute with double the bandwidth of previous generations.
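The "lot of math" behind AI models is overwhelmingly dense matrix multiplication, which is the operation TPUs are built to accelerate in dedicated hardware. As a rough illustration (not Google's code, just a NumPy sketch with illustrative sizes), here is what one such operation looks like, along with the floating-point operation count that hardware like a TPU churns through at scale:

```python
import numpy as np

# Neural networks spend most of their compute on matrix multiplies like
# this one. Sizes here are illustrative; a TPU runs the same operation
# in its matrix units rather than on the CPU.
rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512))
b = rng.standard_normal((512, 512))

# One 512x512 matrix multiply costs roughly 2 * 512**3 ≈ 268 million
# floating-point operations; an exaflop is 10**18 such operations.
c = a @ b
print(c.shape)  # (512, 512)
```

A full model runs millions of these multiplies, which is why raw matrix throughput, measured in exaflops, is the headline number for an AI chip.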

Learn more about these tiny but mighty processors in the video below.



Source

Google AI Blog - blog.google

View the original post