AMD introduced a new artificial intelligence chip on Thursday, targeting Nvidia’s dominance in the data center graphics processor market. The Instinct MI325X is set to begin production by the end of 2024, marking AMD's move to compete directly with Nvidia's forthcoming Blackwell chips, which are expected to ship in large volumes early next year.
If developers and cloud providers come to see AMD’s AI chips as viable alternatives to Nvidia’s, Nvidia could face pricing pressure on the GPUs that have sustained its high margins.
Generative AI technologies, like OpenAI’s ChatGPT, require extensive data center infrastructure packed with GPUs to handle the necessary processing, fueling demand for more AI chip options. Nvidia currently dominates the data center GPU market, but AMD, traditionally second, is working to capture a more significant share of the market, which it estimates will reach $500 billion by 2028.
“AI demand has actually continued to take off and exceed expectations. It’s clear that the rate of investment is continuing to grow everywhere,” said Lisa Su, AMD’s CEO, during the product launch.
While AMD did not announce new cloud or internet customers at the event, it has previously disclosed that companies like Meta and Microsoft use its AI GPUs, and OpenAI employs them for certain applications. Pricing details for the MI325X were not revealed, but the chip is typically sold as part of a complete server system.
AMD is accelerating its product development to release new AI chips annually to remain competitive. The MI325X succeeds the MI300X, which began shipping late last year, and the company has announced plans for future chips, including the MI350 in 2025 and the MI400 in 2026.
The rollout of the MI325X comes at a time when Nvidia is preparing to ship its Blackwell chips, setting the stage for increased competition between the two tech giants in the AI chip market. A successful launch of AMD’s new data center GPU could attract investor interest as more companies look to benefit from the AI surge. While AMD’s stock has increased 20% this year, Nvidia’s has surged by over 175%, with Nvidia holding more than 90% of the market for data center AI chips.
One challenge for AMD is that Nvidia’s chips use its proprietary programming platform, CUDA, which has become a standard in AI development, effectively locking developers into Nvidia’s ecosystem. In response, AMD has been improving its ROCm software to make it easier for developers to move AI models to AMD’s chips, or "accelerators."
AMD is positioning its AI accelerators as particularly competitive for inference — the work of running trained AI models to generate content or make predictions, as opposed to training them. The company said its MI325 platform delivers up to 40% more inference performance on Meta’s Llama AI model than Nvidia’s H200.
Expanding in CPUs
While AI accelerators and GPUs dominate headlines, AMD’s core business remains central processors (CPUs), which power most servers worldwide. AMD reported data center sales of $2.8 billion for the June quarter, with AI chips accounting for $1 billion.
Although AMD holds about 34% of the market for data center CPUs, Intel continues to lead with its Xeon chips. AMD aims to challenge Intel with its newly announced 5th Gen EPYC CPUs, which range from a low-power 8-core chip priced at $527 to a 192-core processor for supercomputers priced at $14,813.
These new CPUs are designed to work efficiently with AI workloads, providing the data throughput necessary for GPUs to operate effectively. “Today’s AI is really about CPU capability, especially in data analytics,” said Su.