As Nvidia Speeds Up, AMD Catches Up, Leaving Intel Behind
The data center accelerator market is heating up, with Nvidia and AMD significantly speeding up their development and release cadences. This rapid pace is creating double trouble for Intel, which is struggling to keep pace with its Gaudi AI chips. The semiconductor giant is pouring resources into the upcoming launch of its Gaudi 3 chip, but it remains behind the performance curve set by its main rivals.
Nvidia’s Victory Lap
Nvidia’s recent Computex event was a victory lap of sorts, showcasing its plans to continue dominating the accelerated computing market with more powerful processors and ever-expanding ecosystem support. The company revealed an expanded road map with Blackwell successors set to launch over the next three years, outlined a plan to make Blackwell GPUs more accessible through modular server designs, and touted broad ecosystem support for its Nvidia Inference Microservices.
AMD’s Response
AMD, on the other hand, has responded to Nvidia’s accelerated development by speeding up its own road map. The chip designer will release data center accelerator chips every year instead of every two years, starting with the Instinct MI325X GPU due out later this year. The move is designed to keep pace with Nvidia’s increasingly powerful chips and mount solid competition against the H200 and Blackwell GPUs.
Intel’s Challenges
Intel faces significant challenges in the data center accelerator market. The company’s Gaudi 3 chip is set to launch with air-cooled versions in the third quarter, but it will mainly compete with Nvidia’s H100 GPU, which launched in 2022, and the larger-memory H200 successor that recently started shipping. By the time Gaudi 3 starts finding its way into servers, Nvidia will be close to doing the same with its first Blackwell GPU designs, which are expected to deliver performance and efficiency multiple times greater than what the Hopper architecture enables for the H100 and H200.
AMD’s Competitive Advantage
AMD’s Instinct MI325X GPU, set to launch in the fourth quarter, features 288 GB of HBM3e high-bandwidth memory, a capacity that has become critical in generative AI computing because of the increasingly gigantic models behind cutting-edge capabilities. That is 50 percent more than the HBM3 capacity of AMD’s MI300X, which in turn exceeds Gaudi 3’s high-bandwidth memory capacity by 50 percent and has been shipping for several months with support from major OEMs.
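For readers keeping score, the gaps compound. The quick back-of-the-envelope sketch below (in Python) works backward from the 288 GB figure; the MI300X and Gaudi 3 numbers are derived from the article's stated 50 percent gaps rather than taken from vendor spec sheets.

    # Derive the implied HBM capacities from the percentages cited above.
    mi325x_hbm3e_gb = 288                    # MI325X HBM3e capacity cited in the article
    mi300x_hbm3_gb = mi325x_hbm3e_gb / 1.5   # MI325X has 50 percent more -> 192 GB implied
    gaudi3_hbm_gb = mi300x_hbm3_gb / 1.5     # MI300X exceeds Gaudi 3 by 50 percent -> 128 GB implied

    print(f"Implied MI300X HBM3 capacity: {mi300x_hbm3_gb:.0f} GB")
    print(f"Implied Gaudi 3 HBM capacity: {gaudi3_hbm_gb:.0f} GB")

Put another way, AMD’s next-generation part would carry more than twice the high-bandwidth memory of Gaudi 3.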
Intel’s Road Ahead
Intel’s road ahead is challenging, with the company expected to launch Falcon Shores, the successor to Gaudi 3, in late 2025. The company will need to position that chip competitively to catch up with its rivals. During the company’s first-quarter earnings call in April, CEO Pat Gelsinger said Intel expected to make more than $500 million this year from Gaudi chips, which pales in comparison to the $4 billion AMD has forecasted for data center GPU revenue in 2024 and the $19.4 billion Nvidia made from data center compute products in the first quarter alone.
Conclusion
The data center accelerator market is a fiercely competitive space, and both Nvidia and AMD are quickening their development and release cadences. That leaves Intel fighting on two fronts: closing the performance gap with its Gaudi AI chips and matching its rivals’ ecosystem support. The company will need to position Falcon Shores competitively if it is to catch up and remain a major player in the data center accelerator market.