GlobalFoundries, a company that makes chips for companies such as AMD and General Motors, previously announced a partnership with Lightmatter. Harris said his company “works with some of the world's largest semiconductor companies and hyperscalers,” referring to the largest cloud companies such as Microsoft, Amazon and Google.
If Lightmatter or another company can reinvent the wiring of big AI projects, it could eliminate a key bottleneck in the development of smarter algorithms. The use of more computing power was fundamental to the advances that led to ChatGPT, and many AI researchers see further scaling up of hardware as essential to future advances in the field, and to reaching the vaguely specified goal of artificial general intelligence (AGI): a program that matches or exceeds biological intelligence in every respect.
Lightmatter CEO Nick Harris says linking a million chips with light could enable algorithms several generations beyond today's state of the art. “Passage will enable AGI algorithms,” he confidently suggests.
The massive data centers needed to train huge AI algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips, with a spaghetti of mostly electrical connections between them. Maintaining AI training runs across so many systems, all linked by wires and switches, is a massive engineering task, and the conversion between electronic and optical signals places fundamental limits on the chips' ability to run computations as a unit.
Lightmatter's approach is designed to simplify the complex traffic inside AI data centers. “Typically you have a bunch of GPUs, and then a layer of switches, and a layer of switches, and a layer of switches, and you have to traverse that tree to communicate between two GPUs,” Harris says. In a data center connected by Passage, every GPU would have a high-speed connection to every other chip, he says.
Lightmatter's work on Passage is an example of how the recent boom in AI has inspired companies large and small to try to reinvent the key hardware behind advances like OpenAI's ChatGPT. Nvidia, the leading supplier of GPUs for AI projects, held its annual conference last month, where CEO Jensen Huang announced the company's latest chip for AI training, a GPU called Blackwell. Nvidia will sell the GPU as part of a “superchip” consisting of two Blackwell GPUs and a conventional CPU processor, all connected using the company's new high-speed communication technology, NVLink-C2C.
The chip industry is famous for finding ways to squeeze more computing power out of chips without making them bigger, but Nvidia chose to buck that trend: the Blackwell GPU inside the company's superchip is twice as powerful as its predecessor, but because it is made from two chips bolted together, it draws more power. That tradeoff suggests that, in addition to Nvidia's efforts to glue chips together with high-speed links, upgrades to other key components of AI supercomputers, such as the one Lightmatter proposes, could become more important.