
Communications of the ACM

ACM TechNews

Google Says Its AI Supercomputer with TPU v4 Chips Outperforms Nvidia's A100 in Speed

Google CEO Sundar Pichai introducing the fourth generation of its Tensor Processing Unit.

In the paper released earlier this week, Google explained how it connected over 4,000 TPUs to create a supercomputer.

Credit: Google

Google claims the supercomputers used for training its artificial intelligence (AI) models are faster and more energy-efficient than comparable systems built on chips from multinational technology firm Nvidia.

Google researchers detailed how they created a supercomputer from more than 4,000 fourth-generation Tensor Processing Units (TPUs), as well as custom optical switches to link individual machines.

Large AI models are partitioned across thousands of chips, which must work together to train the model over weeks or longer.
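The partitioning described above can be illustrated with a minimal sketch. This is not Google's code; it only simulates the two basic ideas in plain Python: assigning parameter tensors to a pool of chips, and the all-reduce step that keeps the chips' gradients synchronized during collaborative training. The function names and device counts are hypothetical.

```python
# Illustrative sketch of model sharding across accelerator chips
# (hypothetical names and sizes; not Google's implementation).

def shard_parameters(params, num_devices):
    """Assign each parameter tensor to a device, round-robin."""
    shards = {d: [] for d in range(num_devices)}
    for i, p in enumerate(params):
        shards[i % num_devices].append(p)
    return shards

def all_reduce_sum(per_device_grads):
    """Simulate an all-reduce: every device ends up holding the
    sum of all devices' gradients, keeping them in sync."""
    total = sum(per_device_grads)
    return [total for _ in per_device_grads]

# Example: 10 parameter tensors spread over 4 "chips".
shards = shard_parameters(list(range(10)), num_devices=4)
# Gradients computed independently on each chip, then synchronized.
synced = all_reduce_sum([1.0, 2.0, 3.0, 4.0])
```

In a real system the all-reduce traverses the interconnect, which is why the optical circuit switches mentioned above matter: they determine how quickly thousands of chips can exchange these updates.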

Google's Norm Jouppi and David Patterson explained, "Circuit switching makes it easy to route around failed components. This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model."

Google says its new supercomputer is up to 1.7 times faster and 1.9 times "greener" than a system based on Nvidia's A100 chip.

From The Tech Portal (India)


Abstracts Copyright © 2023 SmithBucklin, Washington, D.C., USA

