Written by Max A. Charney
SAN FRANCISCO (Reuters) – Meta Platforms on Wednesday announced details about its next-generation artificial intelligence accelerator chip.
WHY IT'S IMPORTANT
Reuters reported earlier this year that Meta plans to introduce a new version of its custom data center chips to handle the increased computing power needed to run AI products at Facebook, Instagram and WhatsApp. The chip, known internally as "Artemis," will help Meta reduce its reliance on Nvidia's AI chips and cut overall energy costs.
KEY QUOTE
"This chip's architecture is fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity for serving ranking and recommendation models," the company said in a blog post.
CONTEXT
The new Meta Training and Inference Accelerator (MTIA) chip is part of the company's broader custom silicon effort, which also includes evaluating other hardware systems. Beyond building chips and hardware, Meta has invested heavily in developing the software needed to harness the power of its infrastructure as efficiently as possible.
The company is also spending billions of dollars buying AI chips from Nvidia and other makers. CEO Mark Zuckerberg has said the company plans to acquire about 350,000 of Nvidia's flagship H100 chips this year. Combined with chips from other suppliers, he said, Meta plans to amass the equivalent of 600,000 H100 chips this year.
BY THE NUMBERS
Taiwan Semiconductor Manufacturing Co will produce the new chips on its 5nm process. Meta says the processor delivers three times the performance of its first-generation chip.
WHAT'S NEXT
The chip has been deployed in data centers and is actively serving AI applications. The company said it has several programs underway "aimed at expanding the scope of MTIA, including support for (generative AI) workloads."
(Reporting by Max A. Charney in San Francisco; Editing by Chris Reese)