
AMD Extends AI and High-Performance Leadership with New AMD Instinct, Ryzen and EPYC Processors at Computex 2024

  • Expanded AMD Instinct accelerator roadmap brings annual cadence of leadership AI accelerators; next generation AMD EPYC processors to extend data center CPU leadership

  • New AMD Ryzen AI 300 Series laptop and AMD Ryzen 9000 Series desktop processors deliver leading performance for Copilot+ PCs, gaming, content creation and productivity

TAIPEI, Taiwan, June 02, 2024 (GLOBE NEWSWIRE) — Today at the Computex 2024 opening keynote, AMD (NASDAQ: AMD) detailed new leadership CPU, NPU and GPU architectures powering end-to-end AI infrastructure from the data center to PCs. AMD unveiled an expanded AMD Instinct™ accelerator roadmap, introducing an annual cadence of leadership AI accelerators, including the new AMD Instinct MI325X accelerator with industry-leading memory capacity, planned to be available in Q4 2024. AMD also previewed 5th Gen AMD EPYC™ server processors, on track to launch in 2H 2024 with leadership performance and efficiency. Finally, AMD announced the AMD Ryzen™ AI 300 Series, the third generation of AMD AI-enabled mobile processors, and the AMD Ryzen 9000 Series desktop processors.

“This is an incredibly exciting time for AMD as the rapid and accelerating adoption of AI is driving increased demand for our high-performance computing platforms,” said Dr. Lisa Su, chair and CEO. “At Computex, we were proud to be joined by Microsoft, HP, Lenovo, Asus and other strategic partners to launch our next-generation Ryzen desktop and notebook processors, preview the leadership performance of our next-generation EPYC processors, and announce a new annual cadence for AMD Instinct AI accelerators.”

“We are in the midst of a massive AI platform shift, with the promise to transform how we live and work. That’s why our deep partnership with AMD, which has spanned multiple computing platforms, from the PC to custom silicon for Xbox, and now to AI, is so important to us,” said Satya Nadella, Chairman and CEO of Microsoft. “We are excited to partner with AMD to deliver these new Ryzen AI powered Copilot+ PCs. We are very committed to our collaboration with AMD and we’ll continue to push AI progress forward together across the cloud and edge to bring new value to our joint customers.”

Delivering Leadership AI and Enterprise Compute for the Data Center

AMD detailed its expanded multi-generational accelerator roadmap, showing how it plans to deliver performance and memory leadership on an annual cadence for generative AI. The expanded roadmap includes the AMD Instinct MI325X accelerators, with planned availability in Q4 2024, delivering industry-leading memory capacity with 288GB of ultra-fast HBM3E memory¹ that extends AMD generative AI performance leadership². The next-generation AMD CDNA™ 4 architecture, expected in 2025, will power the AMD Instinct MI350 Series and is expected to drive up to 35x better AI inference performance compared to the AMD Instinct MI300 Series with AMD CDNA 3³. Continuing performance and feature improvements, the CDNA “Next” architecture will power the MI400 Series accelerators planned for 2026.

Previewed today at Computex, 5th Gen AMD EPYC processors (codenamed “Turin”) will leverage the “Zen 5” core and continue the leadership performance and efficiency of the AMD EPYC processor family. 5th Gen AMD EPYC processors are targeted for availability in 2H of 2024.

At the keynote, Microsoft CEO Satya Nadella highlighted how AMD Instinct MI300X accelerators deliver leading price/performance on GPT-4 inference for Microsoft Azure workloads.

Reimagining the PC to Enable Intelligent, Personal Experiences

Dr. Su was joined by executives from Microsoft, HP®, Lenovo® and Asus to unveil new PC experiences powered by 3rd Gen AMD Ryzen AI 300 Series processors and AMD Ryzen 9000 Series desktop processors.

AMD detailed its next-generation “Zen 5” CPU core, built from the ground up for leadership performance and energy efficiency, spanning from supercomputers and the cloud to PCs. AMD also unveiled the AMD XDNA™ 2 NPU core architecture, which delivers 50 TOPS of AI processing performance⁴ and up to 2x projected power efficiency for generative AI workloads compared to the prior generation⁵. The AMD XDNA 2 architecture-based NPU is the industry’s first and only NPU supporting the advanced Block FP16 data type⁶, delivering increased accuracy compared to the lower-precision data types used by competing NPUs, without sacrificing performance. Together, “Zen 5,” AMD XDNA 2 and AMD RDNA™ 3.5 graphics enable next-gen AI experiences in laptops powered by AMD Ryzen AI 300 Series processors.
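The idea behind a block floating-point format such as Block FP16 can be sketched in a few lines: all elements in a block share a single exponent, so each element stores only a short mantissa, giving storage and compute costs close to low-precision integers while retaining much of FP16's dynamic range. The sketch below is an illustrative model under assumed parameters (block size, mantissa width chosen by us); it is not AMD's actual XDNA 2 data path, whose internal format is not described in this release.

```python
import math

def block_fp_quantize(values, block_size=32, mantissa_bits=8):
    """Toy block floating-point round-trip: quantize then reconstruct.

    Each block of `block_size` values shares one exponent, derived from
    the largest magnitude in the block; every element then keeps only a
    short mantissa relative to that shared scale. Parameters here are
    illustrative assumptions, not AMD's published format.
    """
    out = []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        max_mag = max(abs(v) for v in block)
        # Shared exponent chosen from the largest magnitude in the block.
        exp = math.floor(math.log2(max_mag)) if max_mag > 0 else 0
        scale = 2.0 ** exp
        steps = 1 << mantissa_bits
        # Each element is rounded to a short mantissa against the shared scale.
        out.extend(round(v / scale * steps) / steps * scale for v in block)
    return out
```

Because the exponent is amortized across the block, the per-element error stays bounded by the mantissa width as long as the block's values have similar magnitudes, which is typical for neural-network tensors.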

On stage at Computex, ecosystem partners showcased how they are working with AMD to unlock new AI experiences for PCs. Microsoft highlighted its longstanding partnership with AMD and announced AMD Ryzen AI 300 Series processors exceed Microsoft’s Copilot+ PC requirements. HP unveiled new Copilot+ PCs powered by AMD, including the HP Pavilion Aero, and demonstrated image generator Stable Diffusion XL Turbo running locally on an HP laptop powered by a Ryzen AI 300 Series processor. Lenovo revealed upcoming consumer and commercial laptops powered by Ryzen AI 300 Series processors and highlighted how it is leveraging Ryzen AI to enable new Lenovo AI software. Asus showcased a broad portfolio of AI PCs for business users, consumers, content creators and gamers powered by Ryzen AI 300 Series processors.

AMD also unveiled the AMD Ryzen 9000 Series desktop processors based on the “Zen 5” architecture, delivering leadership performance in gaming, productivity and content creation. The AMD Ryzen 9 9950X is the world’s fastest consumer desktop processor⁷.

Separately, AMD also announced the AMD Radeon™ PRO W7900 Dual Slot workstation graphics card, optimized to deliver scalable AI performance for platforms supporting multiple GPUs. AMD also unveiled AMD ROCm™ 6.1 for AMD Radeon GPUs, designed to make AI development and deployment with AMD Radeon desktop GPUs more compatible, accessible and scalable.

Powering the Next Wave of Edge AI Innovation

AMD showcased how its AI and adaptive computing technology is powering the next wave of AI innovation at the edge. Only AMD combines all the IP required to accelerate whole edge AI applications. The new AMD Versal™ AI Edge Series Gen 2 brings together FPGA programmable logic for real-time pre-processing, next-gen AI Engines powered by XDNA technology for efficient AI inference, and embedded CPUs for post-processing, delivering the highest-performing single-chip adaptive solution for edge AI. AMD Versal AI Edge Series Gen 2 devices are available now for early access, with over 30 key partners currently in development.
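The division of labor described above — programmable logic for pre-processing, AI Engines for inference, embedded CPUs for post-processing — can be modeled as a simple three-stage pipeline. The sketch below is purely illustrative, with invented function names and a toy stand-in for a model; it does not use any actual AMD Versal API.

```python
def preprocess(frame):
    # Stage 1 (FPGA fabric on real hardware): normalize raw sensor
    # bytes into the [0, 1] range expected by the model.
    return [px / 255.0 for px in frame]

def infer(tensor):
    # Stage 2 (AI Engines on real hardware): a toy "model" — the mean
    # activation stands in for real inference.
    return sum(tensor) / len(tensor)

def postprocess(score, threshold=0.5):
    # Stage 3 (embedded CPUs on real hardware): convert the raw score
    # into an application-level decision.
    return "object" if score >= threshold else "background"

def edge_pipeline(frame):
    # Each stage runs on the block of the device best suited to it;
    # here they are just chained function calls.
    return postprocess(infer(preprocess(frame)))
```

The point of the single-chip design is that these three stages run on one device without shuttling intermediate data to a separate host or accelerator.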

AMD showcased how it is enabling AI at the edge across verticals, including:

  • Illumina is using advanced AMD technology to unlock the power of genome sequencing.
  • Subaru is using AMD Versal AI Edge Gen 2 devices to power its EyeSight ADAS Platform to help enable Subaru’s “zero-fatalities” mission by 2030.
  • Canon uses the Versal AI Core series for its Free Viewpoint Video System, revolutionizing the viewing experience for live sports broadcasts and webcasts.
  • Hitachi Energy’s HVDC protection relays predict electrical overvoltage using AMD adaptive computing technology for real-time processing.

Supporting Resources

  • Watch the full keynote here
  • Learn more about all the Computex news here

About AMD

For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn and X pages.

1 MI300-48 – Calculations conducted by AMD Performance Labs as of May 22, 2024, based on current specifications and/or estimation. The AMD Instinct™ MI325X OAM accelerator is projected to have 288GB HBM3e memory capacity and 6 TB/s peak theoretical memory bandwidth performance. Actual results based on production silicon may vary.
The highest published results on the Nvidia Hopper H200 (141GB) SXM GPU accelerator resulted in 141GB HBM3e memory capacity and 4.8 TB/s GPU memory bandwidth performance.
https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200-datasheet-3002446
The highest published results on the Nvidia Blackwell HGX B100 (192GB) 700W GPU accelerator resulted in 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance.
https://resources.nvidia.com/en-us-blackwell-architecture?_gl=1*1r4pme7*_gcl_aw*R0NMLjE3MTM5NjQ3NTAuQ2p3S0NBancyNkt4QmhCREVpd0F1NktYdDlweXY1dlUtaHNKNmhPdHM4UVdPSlM3dFdQaE40WkI4THZBaWFVajFyTGhYd3hLQmlZQ3pCb0NsVElRQXZEX0J3RQ..*_gcl_au*MTIwNjg4NjU0Ny4xNzExMDM1NTQ3
The highest published results on the Nvidia Blackwell HGX B200 (192GB) GPU accelerator resulted in 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance.
https://resources.nvidia.com/en-us-blackwell-architecture?_gl=1*1r4pme7*_gcl_aw*R0NMLjE3MTM5NjQ3NTAuQ2p3S0NBancyNkt4QmhCREVpd0F1NktYdDlweXY1dlUtaHNKNmhPdHM4UVdPSlM3dFdQaE40WkI4THZBaWFVajFyTGhYd3hLQmlZQ3pCb0NsVElRQXZEX0J3RQ..*_gcl_au*MTIwNjg4NjU0Ny4xNzExMDM1NTQ3

2 MI300-49: Calculations conducted by AMD Performance Labs as of May 28, 2024, for the AMD Instinct™ MI325X GPU at a 2,100 MHz peak boost engine clock resulted in 1,307.4 TFLOPS peak theoretical half precision (FP16), 1,307.4 TFLOPS peak theoretical Bfloat16 format precision (BF16), 2,614.9 TFLOPS peak theoretical 8-bit precision (FP8) and 2,614.9 TOPs peak theoretical INT8 performance. Actual performance will vary based on final specifications and system configuration.
Published results on the Nvidia H200 SXM (141GB) GPU: 989.4 TFLOPS peak theoretical half precision tensor (FP16 Tensor), 989.4 TFLOPS peak theoretical Bfloat16 tensor format precision (BF16 Tensor), 1,978.9 TFLOPS peak theoretical 8-bit precision (FP8) and 1,978.9 TOPs peak theoretical INT8 performance. BFLOAT16 Tensor Core, FP16 Tensor Core, FP8 Tensor Core and INT8 Tensor Core performance figures were published by Nvidia using sparsity; for the purposes of comparison, AMD converted these numbers to non-sparsity/dense by dividing by 2, and these dense numbers appear above.
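The sparsity-to-dense conversion in footnote 2 is simple arithmetic: structured-sparsity throughput figures count operations skipped on zeroed weights, so halving them yields the dense-equivalent number. A minimal sketch of that conversion (the helper name is ours, not from either vendor):

```python
def dense_equivalent_tflops(sparse_tflops):
    # Published tensor throughput with 2:4 structured sparsity counts
    # skipped zero operations; dividing by 2 gives the dense-equivalent
    # figure used for the like-for-like comparison in footnote 2.
    return sparse_tflops / 2.0

# E.g. the H200's published FP16 Tensor figure with sparsity, 1,978.9
# TFLOPS, becomes roughly 989.4 TFLOPS dense, as quoted above.
```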

3 MI300-55: Inference performance projections as of May 31, 2024, using engineering estimates based on the design of a future AMD CDNA 4-based Instinct MI350 Series accelerator as a proxy for projected AMD CDNA™ 4 performance. A 1.8T GPT MoE model was evaluated assuming a token-to-token latency of 70ms real time, a first-token latency of 5s, an input sequence length of 8k and an output sequence length of 256, assuming a 4x 8-mode MI350 series proxy (CDNA4) vs. 8x MI300X per-GPU performance comparison. Actual performance will vary based on factors including but not limited to final specifications of production silicon, system configuration and the inference model and size used.
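The latency assumptions in footnote 3 imply a concrete response-time budget: the first-token latency plus one token-to-token interval for each remaining output token. A small sketch of that arithmetic (the function name is ours, for illustration only):

```python
def response_time_s(first_token_s, token_to_token_s, output_tokens):
    # Total time to stream one full response: wait for the first token,
    # then one inter-token interval for each of the remaining tokens.
    return first_token_s + token_to_token_s * (output_tokens - 1)

# Under footnote 3's assumptions (5 s to first token, 70 ms per token,
# 256 output tokens), one full response takes roughly 22.85 s.
```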

4 Trillions of Operations per Second (TOPS) for an AMD Ryzen processor is the maximum number of operations per second that can be executed in an optimal scenario and may not be typical. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243.

5 Based on performance and power estimates correlated to measurements on hardware platforms as of May 2024, comparing projected Stable Diffusion iterations per second per watt for a Ryzen AI 300 Series processor to a Ryzen 8945HS processor. Configuration for the Ryzen AI 300 Series processor: reference platform, 32GB RAM, Radeon 890M graphics, Windows 11 Pro. Configuration for the Ryzen 8945HS processor: Razer Blade 14, 32GB RAM, Radeon 780M graphics, Windows 11 Home. Specific projections are subject to change when final products are released in market. STX-14.

6 As of May 2024, AMD has the first available NPU on a laptop PC processor (AMD Ryzen™ AI 300 Series processor) that supports Block FP16 functionality, where ‘dedicated AI engine’ is defined as an AI engine that has no function other than to process AI inference models and is part of the x86 processor die. STX-16.

7 Testing as of May 2024 by AMD Performance Labs on test systems configured as follows. AMD Ryzen 9 9950X system: GIGABYTE X670E AORUS MASTER, Balanced, DDR5-6000, Radeon RX 7900 XTX, VBS=On, SAM=On, KRAKENX63 (May 10, 2024) vs. a similarly configured Intel Core i9-14900KS system: MSI MEG Z790 ACE MAX (MS-7D86), Balanced, DDR5-6000, Radeon RX 7900 XTX, VBS=On, SAM=On, KRAKENX63 (May 13, 2024) {Profile=Intel Default}, on the following applications: 3DMarkDandia, Blender, Cinebench, GeekBench, PCMark10, PassMark, ProcyonOffice, ProcyonPhotoEditing, ProcyonVideoEditing. System manufacturers may vary configurations, yielding different results. GNR-01.
