Google AI Blog

Here’s how our TPUs power increasingly demanding AI workloads.

Learn how Google’s TPUs power increasingly demanding AI workloads with this new video.


Behind the Google products you use every day are custom chips designed for one job: doing math at massive scale. They're called TPUs, or Tensor Processing Units.

We designed TPUs from the ground up more than a decade ago specifically to run AI models. Basically, AI models take a huge amount of math to work, and TPUs can do that math extremely quickly: the newest generation of TPUs can deliver 121 exaflops of compute, where an exaflop is a quintillion (10^18) floating-point operations per second, with double the bandwidth of previous generations.
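
To make "doing math at massive scale" concrete, here is a minimal, illustrative sketch using JAX, Google's open-source numerical framework that compiles through XLA, the same compiler stack that targets TPUs. The matrix sizes and code are assumptions for illustration, not from the original post; the core operation behind most AI models is a large matrix multiplication like this one.

```python
# Illustrative sketch (not from the original post): the kind of operation
# TPUs accelerate is large matrix multiplication. On a TPU host, JAX
# dispatches this to the chip's matrix units automatically; the same code
# also runs on CPU or GPU.
import jax
import jax.numpy as jnp

# Two illustrative 4096x4096 matrices (~16.8M values each) in bfloat16,
# the reduced-precision format commonly used on TPUs.
a = jax.random.normal(jax.random.PRNGKey(0), (4096, 4096), dtype=jnp.bfloat16)
b = jax.random.normal(jax.random.PRNGKey(1), (4096, 4096), dtype=jnp.bfloat16)

# jax.jit compiles the function once with XLA, then reuses the compiled
# program on whatever accelerator is attached.
matmul = jax.jit(lambda x, y: x @ y)

result = matmul(a, b)  # one 4096x4096 matmul is ~137 billion FLOPs (2*n^3)
print(result.shape, jax.devices()[0].platform)
```

For scale: a single multiplication like the one above takes roughly 137 billion floating-point operations, so an exaflop-scale system can perform on the order of seven million of them every second, which is why a chip dedicated to this one job pays off.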

Learn more about these tiny but mighty processors in the video below.


Source: Google AI Blog - blog.google