Nvidia broadens its reach to ensure that AI delivers on its promises. AMD is building faster chips, faster. The market appears to prefer the former: Nvidia is up nearly 5%, while AMD is down roughly 2.5% today.
Which is better? Well, Elon Musk said Sunday on X that his xAI start-up will build computing infrastructure to train AI using 100,000 of Nvidia’s H100 chips, which he said would be online “in a few months.” He also said that the “Next big step would probably be ~300k B200s with CX8 networking next summer.” At perhaps $40,000 each (a low estimate), that would be $12B.
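For scale, here is a quick back-of-the-envelope check of that figure; the chip count and unit price are simply the assumptions stated above, not confirmed pricing:

```python
# Back-of-the-envelope check of the xAI capex figure cited above.
chips = 300_000       # ~300k B200s, per Musk's post
unit_price = 40_000   # $40k each -- a low estimate
total = chips * unit_price
print(f"${total / 1e9:.0f}B")  # -> $12B
```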
Of course, not everyone is so bullish on AI. An article published Sunday on a major media platform argues, in brief, that “the pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.” The author goes on to build the case that “AI is in a massive bubble, a ton of startups will fail, a lot of the value of big tech companies is going to be destroyed, and this level of revenue for NVIDIA is not sustainable.”
I’m not going to argue his points, but I will say that Nvidia’s announcements at Computex are intended to broaden the value AI can deliver to more customers using more applications with faster, more affordable hardware. All this will increase Nvidia’s wallet share of what the company says will become a multi-trillion-dollar market.
What Did Nvidia Announce?
While the black-leather-clad Tech Superstar did not announce any breakthrough products, he did paint a picture of a maturing ecosystem of partners and developers that will take AI to the next level, enabling PCs, robots, industrial applications, and collaboration platforms that will undeniably improve productivity. And improved productivity leads to improved profits.
Nvidia CEO Jensen Huang’s speech before the Computex event in Taiwan laid out new software, new partners, and plans for post-Blackwell hardware. In his keynote, Mr. Huang announced that the company is enabling the next industrial revolution and that its ecosystem is broadening to reach new users and use cases. While Jensen did not directly address affordability, the new Blackwell architecture will reduce the cost of AI processing more than tenfold and cut the amount of energy required to power it.
“The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar traditional data centers to accelerated computing and build a new type of data center — AI factories — to produce a new commodity: artificial intelligence,” said Huang. “From server, networking and infrastructure manufacturers to software developers, the whole industry is gearing up for Blackwell to accelerate AI-powered innovation for every field.”
For starters, Nvidia wants everyone to realize that the “AI PC” is not a new thing. The company has been shipping GeForce RTX for over six years, enabling over 100 million PCs to run more than 500 AI games and applications.
Next, Nvidia is doubling down on NIM, the Nvidia Inference Microservices it announced at GTC in March. NIMs not only make it easier to deploy AI in minutes instead of months; they also form building blocks from which customers can create new applications and solve new problems. If adopted, NIMs will help accelerate innovation and speed time to value.
Nvidia announced that NIM is now generally available for download at ai.nvidia.com and from Hugging Face, and is being integrated into tools for developers and infrastructure. Nvidia also announced that NIMs are now free for developers and researchers. Deploying NIMs in production requires an AI Enterprise license at $4,500 per GPU.
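In practice, a NIM packages a model behind a standard, OpenAI-compatible REST endpoint. As a minimal sketch, assuming a NIM container already running locally on port 8000 (the port and model name here are illustrative assumptions, not from Nvidia’s announcement), calling it looks like this:

```python
import requests

# Hypothetical local NIM endpoint; NIM containers expose an
# OpenAI-compatible /v1/chat/completions route. The port and
# model name below are illustrative assumptions.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # whichever model the NIM serves
    "messages": [{"role": "user", "content": "Summarize Computex in one line."}],
    "max_tokens": 64,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

This is the building-block framing in action: swapping models or composing services becomes an endpoint change rather than a re-engineering effort.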
Huang pointed to the new vendors supporting Nvidia GPUs, with nearly twenty vendors featuring Grace-Blackwell designs and over 20 OEMs now supporting the Spectrum-X networking solution, which is now available. Nvidia also introduced a new Grace-Blackwell design, called the GB200 NVL2, for mainstream LLMs, providing a more affordable entry point for Blackwell development and adoption.
There was a lot of emphasis on robotic factories, and Nvidia highlighted four customer deployments, including Apple manufacturing partner Foxconn.
As for new hardware, Jensen announced that the Blackwell successor, called Rubin, should ship in 2026. No details were provided. It would be paired with the second-generation Nvidia Arm CPU, called Vera; again, no details on Vera were announced.
In summary, Nvidia announced a slew of new technologies and partnerships, including:
- RTX accelerates new AI Laptops, Project G-Assist, NVIDIA ACE on-device NIMs and more generative AI tools
- Top computer manufacturers unveil Blackwell-powered systems featuring Grace CPUs
- New NVIDIA GB200 NVL2 single-node, scale-out systems for mainstream LLMs and data processing
- NVIDIA NIM now available; NVIDIA Developer Program members can access NIM for free for research, development and testing
- NVIDIA delivers next-generation Ethernet for massive AI workloads with Spectrum-X
- Robotic factories accelerate industrial digitalization with NVIDIA AI and Omniverse
- NVIDIA Isaac adopted by industry leaders for development of AI-powered robots
- New enterprise software support for NVIDIA IGX with Holoscan accelerates real-time AI at the industrial edge
AMD Responds
Nvidia’s increased pace of development, from a two-year cadence to a yearly calendar, is forcing its rivals to react. AMD CEO Lisa Su announced an entire portfolio of new and upcoming products, from PCs and laptops to the edge to data center CPUs and GPUs.
Dr. Su announced a new Instinct GPU roadmap, starting with the MI325X accelerator by the end of this year. Notably, it will support 288 GB of fast HBM3E memory. Following that, the MI350 series will debut in 2025 and is expected to perform up to 35 times better in inference than the current MI300 series. That will be followed by the MI400 series in 2026. The bottom line: at best, the MI350 will compare with the Nvidia Blackwell GPU about one year after the Blackwell NVL72 is expected to ship, at which point Nvidia will be moving on to Rubin.
One piece of technology caught my attention: Block FP16, which can double the performance of floating-point math without the quantization work required to use FP8. It looks very promising. And 5th-gen EPYC looks incredible; EPYC now commands over 30% share of server CPUs.
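For readers unfamiliar with the idea, block floating-point formats let a group of values share one exponent, so each value only needs to store a small mantissa. The toy numpy sketch below illustrates the general concept only; it is my illustration, not AMD’s actual Block FP16 format:

```python
import numpy as np

def block_quantize(x, block_size=32, mantissa_bits=8):
    """Toy block floating-point: values in each block share one
    power-of-two exponent and keep only a low-bit signed mantissa."""
    blocks = x.reshape(-1, block_size)
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    exp = np.ceil(np.log2(max_abs + 1e-38))  # shared exponent per block
    scale = 2.0 ** exp
    levels = 2 ** (mantissa_bits - 1) - 1    # e.g. 127 for 8 mantissa bits
    mant = np.round(blocks / scale * levels).clip(-levels, levels)
    return (mant / levels * scale).reshape(x.shape)

x = np.random.randn(1024).astype(np.float32)
xq = block_quantize(x)
print("max abs error:", float(np.abs(x - xq).max()))
```

Because the shared exponent absorbs the dynamic range within each block, there is no per-tensor calibration step of the kind FP8 conversion typically requires, which is presumably what “without the quantization work” refers to.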
Conclusions
While AMD has come a long way with its GPUs for AI, the company is still just trying to catch up with Nvidia hardware; never mind the massive gap in the ecosystem Nvidia enjoys. Yes, AMD sees adding more HBM as a differentiator, but that’s not enough to compete with Nvidia’s system-level differentiation. Dr. Su did talk up the Nvidia-free Ultra Accelerator Link consortium, but there is no evidence it will compete with NVLink 5.0, which enables the incredible performance promised with Blackwell.
Meanwhile, Nvidia is driving AI adoption with advancements built on top of CUDA in robotics, manufacturing, 3D collaboration, drug discovery, healthcare, gaming, and enterprise applications.
The hardware battle has just begun, but we don’t see anything outside of Google’s TPU that can really compete with Nvidia.