Leaders in AI chip design are shaping the future in California's Silicon Valley
Silicon Valley hosted a veritable brainstorm of artificial intelligence and AI-driven chip design last week. Two giants of the industry, Nvidia and Synopsys, each held conferences that brought together developers and technology innovators in very different but complementary ways. Nvidia, world-renowned for its AI acceleration technology and silicon platform leadership, and Synopsys, a long-time industry leader in semiconductor design tools, IP, and automation, are both capitalizing on the huge and thriving market opportunity that machine learning and artificial intelligence represent.
Both companies are unleashing a proverbial arsenal of enabling technologies: Nvidia is leading the way with large-scale AI accelerator chips, while Synopsys is helping chip developers apply AI to many of the tedious steps in the chip design and verification process. In fact, we may soon reach not only a tipping point in AI but perhaps a kind of bootstrapping point, where AI helps design the chips that power the next generation of AI. In other words: which came first, the chicken or the egg? The AI or the AI chip? I know this sounds like science fiction to many people, but for me it was the highlight of last week's events. Let's dig into a few of the points that struck me.
Synopsys leverages AI in 3D for semiconductor EDA
There's no doubt about it: Nvidia grabbed the spotlight at its GPU Technology Conference earlier this week with the company's announcements around AI accelerators, which are now the de facto standard in data centers. But as Nvidia CEO Jensen Huang pointed out on stage with Synopsys CEO Sassine Ghazi (above), there's good reason for the partnership between the two companies: nearly every chip Nvidia designs and sends to manufacturing is implemented using Synopsys EDA tools for design, verification, and hand-off to chip manufacturing plants. Ghazi's keynote also touched on a new technology from Synopsys called 3DSO.ai that really struck a chord with me.
Synopsys launched its design space optimization AI tool, DSO.ai, in 2021. It dramatically speeds up the place-and-route, or floorplanning, stage of chip design. Finding an optimal circuit layout and routing for a large-scale semiconductor design is a labor-intensive, complex task with direct consequences for performance, power efficiency, and silicon cost. Synopsys DSO.ai sets machines loose on this iterative process, significantly reducing engineering effort and speeding time to market with more fully optimized chip designs.
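To make the idea concrete, here is a toy sketch of AI-driven design-space exploration: a search loop that minimizes a combined performance/power/area cost over candidate layout parameters. The cost function, parameter names, and weights are all invented for illustration; this is not Synopsys' actual algorithm, which uses far more sophisticated reinforcement-learning techniques over real EDA metrics.

```python
import random

# Toy stand-in for a place-and-route quality metric: lower is better.
# Real EDA cost functions model timing, congestion, power, and area;
# these parameters and weights are hypothetical, for illustration only.
def ppa_cost(params):
    clock_skew, wire_density, vdd = params
    perf = (clock_skew - 0.2) ** 2           # timing penalty proxy
    power = vdd ** 2 * (1 + wire_density)    # dynamic power proxy
    area = 1.0 / (wire_density + 0.1)        # denser wiring -> smaller die
    return 3.0 * perf + 2.0 * power + 1.0 * area

def random_search(n_trials=5000, seed=42):
    """Explore the design space and keep the best candidate seen."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        candidate = (rng.uniform(0.0, 1.0),   # clock skew
                     rng.uniform(0.0, 2.0),   # wire density
                     rng.uniform(0.5, 1.2))   # supply voltage
        cost = ppa_cost(candidate)
        if cost < best_cost:
            best_params, best_cost = candidate, cost
    return best_params, best_cost

if __name__ == "__main__":
    params, cost = random_search()
    print(f"best cost {cost:.3f} at params {params}")
```

The point of the sketch is the shape of the problem: each iteration that a human engineer would grind through by hand becomes one cheap evaluation in an automated loop, which is why ML-guided search compresses weeks of floorplanning iteration into hours.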
Synopsys 3DSO.ai takes this technology to the next level for modern 3D stacked chiplet solutions, layering in multiple levels of design automation for the new era of chiplets along with critical thermal analysis. In essence, Synopsys 3DSO.ai isn't just playing a sort of Tetris to optimize chip designs; it's playing 3D Tetris, optimizing placement and routing in three dimensions while providing thermal analysis to ensure the physical design is thermally feasible, and ideally optimal. Yes, that's right: AI-powered 3D chip design. It's officially amazing.
Nvidia Blackwell GPU AI accelerators and robotics technology attract attention
At the GPU Technology Conference, Nvidia pulled out all the stops once again, this time filling the San Jose SAP Center with throngs of developers, press, analysts, and even some of the biggest names in tech, like Michael Dell. My analyst business partner and long-time friend Marco Chiappetta covers the GTC highlights in detail here (check out Nvidia NIM as well; it's very interesting). For me, though, the stars of Jensen Huang's show were the company's new Blackwell GPU architecture for AI, Project GR00T for building humanoid robots, and another AI-powered chip tool called cuLitho that is now being employed in production by Nvidia and TSMC. The net of cuLitho is that the design of expensive chip mask sets, used for patterning these designs onto wafers during production, is getting a much-needed shot in the arm from machine learning and AI. Nvidia claims that its GPUs, combined with its cuLitho models, can improve chip lithography performance by up to 40X while delivering significant power savings compared to traditional CPU-based servers. And this technology is now in full production, with Synopsys partnering on the design and verification side and TSMC on the manufacturing side.
Now let's talk about Blackwell. If you thought Nvidia's Hopper H100 and H200 GPUs were monster AI silicon engines, Nvidia's Blackwell is like unleashing the Kraken. For reference, a single dual-die Blackwell GPU is made up of about 208 billion transistors, more than 2.5 times the count of Nvidia's Hopper architecture. The two dies act as one large GPU, communicating over Nvidia's NV-HBI high-bandwidth interface, which provides an impressive 10TB/s of throughput. Combine these GPUs with 192GB of HBM3e memory sporting peak bandwidth of over 8TB/s, and you're looking at more than twice the memory capacity and bandwidth of the H100. Nvidia is also combining two Blackwell GPUs with a Grace CPU in a triple-chip AI solution called the Grace Blackwell Superchip, also known as GB200.
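A quick sanity check on those numbers: the snippet below takes the Blackwell figures quoted above and computes the generational ratios. The Hopper baselines (roughly 80 billion transistors, and 80GB of HBM3 at about 3.35TB/s for the H100 SXM) are public figures I've added for context; they are not from this article.

```python
# Blackwell specs as quoted above; Hopper (H100 SXM) baselines are
# public figures added for comparison, not claims from this article.
blackwell = {"transistors_b": 208, "memory_gb": 192, "mem_bw_tbs": 8.0}
hopper    = {"transistors_b": 80,  "memory_gb": 80,  "mem_bw_tbs": 3.35}

# Generation-over-generation ratios
ratios = {k: blackwell[k] / hopper[k] for k in blackwell}
for metric, ratio in ratios.items():
    print(f"{metric}: {ratio:.1f}x")
```

The transistor ratio works out to 2.6X, consistent with the "more than 2.5 times" figure, and memory capacity and bandwidth each land at roughly 2.4X the H100, in line with the "more than twice" characterization.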
Configure a rack of dual-GB200 servers featuring the company's fifth-generation NVLink technology, which provides twice the throughput of Nvidia's previous generation, and you have the Nvidia GB200 NVL72 AI supercomputer. NVL72 clusters configure up to 36 GB200 Superchips within the rack, connected via NVLink spines at the back. It's a pretty wild design that also incorporates Nvidia BlueField-3 data processing units, and the company claims it delivers up to 30X faster large language model inference at the 1-trillion-parameter scale than the previous generation's H100-based system. The GB200 NVL72 is also claimed to offer 25X lower power consumption and 25X better TCO. The company is also configuring DGX SuperPODs of up to eight racks of NVL72 supercomputers. Nvidia announced a number of partners that will adopt Blackwell, including Amazon Web Services, Dell, Google, Meta, Microsoft, and OpenAI, and the company vows to bring these powerful new AI GPU solutions to market later this year.
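If the naming seems opaque, the arithmetic behind "NVL72" is simple, as this quick tally shows:

```python
# Each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs,
# and an NVL72 rack links up to 36 Superchips over NVLink.
superchips_per_rack = 36
gpus_per_superchip = 2
cpus_per_superchip = 1

total_gpus = superchips_per_rack * gpus_per_superchip
total_cpus = superchips_per_rack * cpus_per_superchip
print(total_gpus)  # 72 Blackwell GPUs -> the "72" in NVL72
print(total_cpus)  # 36 Grace CPUs
```

So a fully populated rack presents 72 NVLink-connected Blackwell GPUs to software as one giant accelerator, which is where the product name comes from.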
So Nvidia isn't just leading the charge as the 800-pound behemoth of AI processing; it looks like it's just getting started.
Another area where Nvidia continues to accelerate its execution is robotics, and Jensen Huang's GTC 2024 robot show, highlighting the Project GR00T (yes, that's correct, with two zeros) foundation model for humanoid robots, was another wild ride. GR00T, which stands for Generalist Robot 00 Technology, is about training robots not only for natural language input and conversation, but also to imitate human movements and actions for dexterity, and to navigate and adapt to the changing world around them. As I said, it sounds like science fiction, but it looks like Nvidia is ready to make it a reality sooner rather than later with GR00T.
And in fact, this is what impressed me most about both Nvidia and Synopsys while I was in the Valley last week: problems and workloads that were once considered nearly unsolvable are now being solved, and executed at an ever-increasing pace, with machine learning. The effect compounds, delivering great progress year after year. In this fascinating age of technology, I feel lucky to be an observer and guide of sorts, and that's what gets me up in the morning.
Follow me on Twitter or LinkedIn. Check out my website and other work here.