Written by Max A. Charney
(Reuters) – Google's parent company Alphabet on Tuesday unveiled a new member of its family of artificial intelligence data center chips called Trillium, which it says is nearly five times faster than previous versions.
“Industry demand for [machine learning] compute has increased a million-fold over the past six years, roughly 10-fold every year,” Alphabet CEO Sundar Pichai told reporters in a briefing. “I think Google was built for this moment. We've been pioneering [AI chips] for over a decade.”
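Pichai's two figures are consistent with each other: sustained 10-fold annual growth compounds to a million-fold increase over six years. A minimal sketch of that arithmetic, for illustration only:

```python
# Check that ~10x annual growth sustained for six years compounds
# to roughly a million-fold overall increase, as Pichai stated.
years = 6
annual_growth = 10
total_growth = annual_growth ** years
print(total_growth)  # 1000000, i.e. a million-fold
```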
Alphabet's effort to build custom chips for AI data centers represents one of the few viable alternatives to Nvidia's top-of-the-line processors, which dominate the market. Google's Tensor Processing Units (TPUs), together with the software built closely around them, have allowed the company to capture a significant share of that market.
Nvidia holds about 80% of the AI data center chip market, and various versions of Google's TPUs account for most of the remaining 20%. Google does not sell the chips directly; instead, it rents access to them through its cloud computing platform.
According to Google, the sixth-generation Trillium chip achieves 4.7 times the computing performance of the TPU v5e. The chip is designed to power the technology that generates text and other media from large models, and it is 67% more energy efficient than the v5e.
The new chip will be available to cloud customers in “late 2024,” the company said.
Google engineers achieved the additional performance gains by increasing high-bandwidth memory capacity and overall bandwidth. AI models require huge amounts of advanced memory, which had been a bottleneck to further performance improvements.
The company designed the chips to be deployed in pods of 256 chips that can scale to hundreds of pods.
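As a rough illustration of the scale those numbers imply (a hypothetical back-of-the-envelope calculation, not a Google-stated figure), a deployment of several hundred pods would reach tens of thousands of chips:

```python
# Back-of-the-envelope scale implied by the pod design: 256 chips per
# pod, multiplied by a hypothetical 300 pods (Google says only
# "hundreds of pods", so 300 is an assumed midpoint).
chips_per_pod = 256
pods = 300  # assumption for illustration
total_chips = chips_per_pod * pods
print(total_chips)  # 76800 chips in a 300-pod deployment
```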
(Reporting by Max A. Charney in San Francisco; Editing by Leslie Adler)