In these heady times of soaring stock markets, let's consider where Nvidia can find its next growth treasures. Nvidia will focus on additional growth opportunities in inference processing, software, edge, and automotive. Nvidia is far from peak AI.
Yes, GTC is a lot of fun: Jensen, black leather, amazing technology, very cool demos. This year has felt like a carnival, with almost every AI stock rejoicing in an incredible rise, Nvidia above all. Here's a summary of what I'm looking for from Jensen and the show floor:
I've been attending GTC since 2012 and have watched the conference evolve from graphics enthusiasts to AI PhDs, from hundreds of attendees to tens of thousands, and from application developers to a wide range of investors and businesses. This year should be different, if for no other reason than the extra trillion dollars the market has awarded the GPU and AI leader.
What to look for: Hardware
Yes, new hardware for training and inference is coming, starting with the new Blackwell B100, presumably the training and inference processor for LLMs that follows the hugely successful Hopper. However, since Hopper capacity (and probably all of the HBM and CoWoS packaging behind it) is believed to be sold out for the remainder of 2024, expect Nvidia to keep promoting the H100, H200, and especially the GH200. Expect many Grace Hopper customers on stage touting its performance and capacity.
The B100 is expected to double performance, include some model-specific accelerators, and possibly add 25% more HBM capacity, contributing revenue in 2025. In the meantime, Nvidia could reap significant profits in 2024 by optimizing the mix of its portfolio, and adding products with fewer supply chain constraints would further accelerate growth. More on that later.
Here's a point many people may have missed
Some people have asked me to explain how the new Nvidia roadmap will increase revenue rather than simply drive customer churn. As with AMD and Intel CPUs, I believe customers want choice, and both companies offer well over 50 individual parts (SKUs). A customer currently using an H100 or H200 in a model will likely keep using that GPU in that model for at least another year. However, LLM sizes double roughly every three months, so within a year of deploying today's latest GPUs, new AI models will be 16 times larger and approaching the scale of the human brain. Jensen predicts AI models will reach that scale within about five years. (Though not a perfect comparison, today's AI models are roughly 100 times smaller than the human brain's ~100 trillion synapses.)
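The doubling math above is easy to verify with a quick back-of-the-envelope calculation. This is only a sketch of the article's growth assumption (a doubling every three months is a rule of thumb, not a measured law):

```python
# Back-of-the-envelope: LLM size growth, assuming one doubling
# every three months (the article's rule of thumb).
DOUBLING_PERIOD_MONTHS = 3

def growth_factor(months: int) -> int:
    """Size multiplier after `months` of steady doubling."""
    return 2 ** (months // DOUBLING_PERIOD_MONTHS)

# After one year: four doublings, so 2^4 = 16x larger,
# matching the article's "16 times" figure.
print(growth_factor(12))  # -> 16
```

If that pace held, a 100x gap to brain scale would close far faster than five years, which suggests the doubling rate is expected to slow as models grow.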
GDDR7
I'd be surprised if we don't see something new in generative AI inference processing in addition to the B100. The L40S is great, but it may not have enough memory for large-scale LLM inference. Based on the Lovelace GPU, the L40S uses GDDR6 rather than high-bandwidth memory, so it's not as supply-constrained as the higher-end models. Meanwhile, the JEDEC standard for GDDR7 has been announced, and the memory is expected to ship later this year. I'd be surprised if Nvidia didn't announce a new Lovelace GPU to support it, with twice the capacity and twice the performance of the L40S.
Nvidia announced that inference processing accounted for about 40% of its data center revenue last quarter, and I suspect the company aims to push that above 60% by the end of 2024. That will require a more affordable platform, and a Lovelace successor powered by GDDR7 may be just that option. (Or something else I haven't thought of!)
Edge AI
Edge AI volume will eventually surpass data center volume, but the H100 explosion is pushing that crossover out a bit: edge revenue is in the hundreds of millions, whereas data center revenue is in the tens of billions. That said, expect Nvidia to focus on what's growing outside the data center walls, in the robotics and automotive sectors.
Jetson is still based on the Ampere GPU architecture, which has been around for about a year in that product line. I wouldn't be surprised if Nvidia upgrades it to Hopper, though an upgrade to Lovelace might also make sense if price is the priority. Either way, expect Jensen to make important statements about the growth of this market. Volkswagen has added ChatGPT to its latest cars, and keep in mind that this is just the beginning.
When it comes to EVs, Nvidia frequently updates its automotive design-win pipeline guidance at GTC, raising it in 2022 to an $11 billion opportunity over six years. Given last year's slump in EVs, it will be interesting to see whether this metric gets updated. We're sure to hear from Mercedes-Benz, Jaguar Land Rover, Volvo, Hyundai, and BYD about the status of their DRIVE programs.
Software: The next frontier
Nvidia revealed the size of its software business for the first time last quarter: a run rate that has grown to $1 billion. This includes NVIDIA AI Enterprise, the company's comprehensive platform of AI models and tools, and NVIDIA Omniverse Enterprise. NVIDIA AI Enterprise is available as a perpetual license for $3,595 per CPU socket, with Enterprise Business Standard Support at $899 per license per year. That's nearly $5,000 per socket in the first year.
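To see how those list prices compound across a deployment, here is a minimal sketch using the article's figures; the 1,000-socket fleet is a hypothetical example, not a reported customer size:

```python
# List prices from the article (USD).
PERPETUAL_LICENSE = 3_595   # NVIDIA AI Enterprise, per CPU socket
ANNUAL_SUPPORT = 899        # Business Standard Support, per license per year

def total_cost(sockets: int, years: int) -> int:
    """One-time perpetual licenses plus `years` of support for a fleet."""
    return sockets * (PERPETUAL_LICENSE + ANNUAL_SUPPORT * years)

# First year, single socket: 3,595 + 899 = 4,494 -- "nearly $5,000".
print(total_cost(sockets=1, years=1))      # -> 4494

# Hypothetical 1,000-socket fleet over three years.
print(total_cost(sockets=1_000, years=3))  # -> 6292000
```

At fleet scale, the recurring support line quickly rivals the one-time license, which is why this business is so attractive as the installed base grows.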
This is super sticky, and incredibly valuable. For example, Mercedes-Benz is using Omniverse to create a digital twin of its new factory and will continue using Omniverse to manage and evolve that factory over time. While Meta has toys and avatars in its metaverse, Nvidia works with real engineers and creators to simulate, with real physics, digital worlds that will become reality. This is one of the most exciting things Nvidia is working on.
Given the huge success of Nvidia's AI hardware, further monetizing that installed base will be a big opportunity going forward. $1 billion is just a starting point.
MediaTek: Wildcard
If there's one area where Jensen may have unfulfilled ambitions, it has to be mobile. At last year's Computex, Nvidia and MediaTek announced a partnership in the automotive space: MediaTek will design the SoC, and Nvidia will provide the GPU. "This is a combination of two companies with incredibly complementary skills," Huang said at a press conference. "We are partnering with one of the world's largest and most established SoC companies. We have MediaTek in our pockets and in our homes, and it's great to partner on this effort."
What caught my attention was the "in our pockets" part. I recently wrote about a Reuters report that Nvidia is interested in pursuing custom silicon business. I pointed to cloud service providers as the likely strategy, but perhaps Nvidia also intends to help MediaTek in the mobile GPU race, possibly by offering chiplet technology to the phone chip maker.
Conclusion
Well, it's fun to speculate, but we'll soon find out what Jensen has up his leather sleeves. One thing I can guarantee: everyone will be amazed! Find me at the show, and be sure to follow my articles on Forbes. Thank you!
Follow me on Twitter or LinkedIn. Check out my website.
Disclosure: This article expresses the author's opinions and should not be taken as advice to buy or invest in any of the companies mentioned. My firm, Cambrian-AI Research, is fortunate to work with many semiconductor companies, including BrainChip, Cadence, Cerebras Systems, Esperanto, IBM, Intel, NVIDIA, Qualcomm, Graphcore, SiMa.ai, Synopsys, Tenstorrent, Ventana Microsystems, and many others. We have no investment position in any of the companies mentioned in this article. For more information, please visit our website at https://cambrian-AI.com.