
The Hottest Chips and a Long-awaited IPO

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

This year’s Hot Chips conference once again provided a mixture of sharp tutorials, exciting disclosures, and timely keynotes. Today, Yole Group details a few notable takeaways from the conference and shares its read on the relevant markets. At the same time, Yole Group’s analysts look at a major IPO that is creating new opportunities for investors in the processor ecosystem.

This article, written by John Lorenz, Senior Technology & Market Analyst at Yole Intelligence, part of Yole Group, is based on numerous analyses, including Status of the Processors Industry, High-end Performance Packaging, Memory-Processor Interface: CXL, Status of the Memory Industry, and the Processor Quarterly Market Monitor.

Hot Chips – Yole Group’s outlook

Our first highlight from Hot Chips is the keynote from Bill Dally, Chief Scientist and SVP of Research at Nvidia, titled “Hardware for Deep Learning”. The presentation traced the evolution of computing demand for AI acceleration.

What stood out was the breakdown of computing improvements attributable to process evolution versus those from architecture and software, and how advances in hardware and software have coincided to chase this growing computational load. Most striking were the relative performance gains achieved through advancements in number representation (~16x), complex instructions (~12.5x), process node (~2.5x), and sparsity (~2x), which together yield a roughly 1000x gain in single-chip inference performance over the last 10 years.
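For reference, these factors compound multiplicatively; a quick back-of-the-envelope check in Python, using the approximate figures cited above, shows how they combine to roughly 1000x:

    # Quick check of the approximate multiplicative gains cited by Dally:
    # the individual factors compound to roughly 1000x single-chip
    # inference performance over the last decade.
    gains = {
        "number representation": 16,
        "complex instructions": 12.5,
        "process node": 2.5,
        "sparsity": 2,
    }

    total = 1.0
    for factor in gains.values():
        total *= factor

    print(f"Compound gain: ~{total:.0f}x")  # -> ~1000x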

This is all highly relevant to the current moment, in which the race for generative AI hardware has driven revenues to new heights: Yole Group expects 115% revenue growth in 2023 for server GPUs alone.

Next, we turn to a very different application of AI: Intel tuning processor performance in real time to optimize CPU power consumption.

This presentation from Efraim Rotem, Intel Fellow and lead power and performance architect in the client architecture systems group, showed a method that uses an AI algorithm to detect usage and consumption patterns and respond by throttling core voltage and frequency for different workload types. This approach, built on dynamic voltage and frequency scaling (DVFS), showed the potential to reduce power consumption in client CPUs by 10-20%.
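To illustrate why adjusting both knobs pays off: dynamic CMOS power scales roughly as P ≈ C·V²·f, so even modest reductions in voltage and frequency compound. The Python sketch below uses hypothetical operating points (it is not Intel’s algorithm) to show how a small voltage and frequency step-down can land in the reported 10-20% range:

    # Simplified DVFS power model (illustrative only, not Intel's method).
    def dynamic_power(capacitance, voltage, frequency):
        """Approximate CMOS dynamic power: P = C * V^2 * f."""
        return capacitance * voltage ** 2 * frequency

    # Hypothetical operating points for a client CPU core.
    nominal = dynamic_power(capacitance=1.0, voltage=1.00, frequency=3.0e9)

    # Suppose a workload classifier decides a background task can run at
    # 2.7 GHz, allowing a slightly lower core voltage as well.
    throttled = dynamic_power(capacitance=1.0, voltage=0.95, frequency=2.7e9)

    savings = 1.0 - throttled / nominal
    print(f"Estimated dynamic-power reduction: {savings:.0%}")  # -> ~19%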

“While the power consumption of data centers receives much more attention, the implications for increased efficiencies in personal computing can be quite broad, as Yole Group expects 182 million units of CPUs for laptop and desktop to ship in 2023, totaling US$29 billion of revenue for CPU vendors.”
John Lorenz
Senior Technology and Market Analyst for Computing and Software, Semiconductor, Memory and Computing Division, Yole Intelligence (part of the Yole Group)

Then there is the growing momentum behind chiplet-based designs. At Yole Group, we define chiplets as a design philosophy that combines two or more discrete dies in a way that disaggregates or duplicates some or all of the functions of an SoC.

AMD’s Zen 2/3/4 and Intel’s upcoming Meteor Lake products are the high-visibility examples. But one of the more interesting implementations we see is from Ventana, which is fully exploiting the flexibility that chiplets can bring: its Veyron V1 aims to address data center, automotive, communications, and client computing all with the same 5nm (TSMC) RISC-V chiplet.

What about Arm?

The Hot Chips conference included two updates from Arm, giving more insight into its newest Neoverse V2 cores for HPC, ML, and next-generation cloud computing, and its Compute Subsystem (CSS) N2 for accelerated deployment of custom silicon.

The Neoverse V2 platform holds particular relevance today, as it is the architecture found within Nvidia’s Grace CPU Superchip, which is one of the hardware offerings capitalizing on the current boom in server AI acceleration.

At the same time, the CSS N2 platform addresses the need to lower barriers to entry for companies looking to customize their compute hardware.

This is a growing area for companies to explore; the generative AI training and inference boom is just one example of specific workloads that are not best served by general-purpose compute alone.

And speaking of Arm, the company is fresh off its successful IPO, in which shares gained 25% on the first day of trading.

The lead-up to the IPO gave the general public a look into the financial state of things at Arm, and the IP vendor business model is one to behold. Based on FY2023 numbers, the company earned US$2.68 billion in revenue, with a 96% gross margin and a 25% operating margin.

Apparently designing the “World’s most pervasive CPU architecture” comes with a substantial engineering price tag, as the company expensed 42% of revenue on research and development alone.
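Taking those FY2023 figures at face value, a rough back-of-the-envelope calculation puts the percentages in dollar terms (a sketch only; the exact line items will differ in Arm’s reported financials):

    # Back-of-the-envelope conversion of the cited FY2023 percentages
    # into dollar figures (USD billions, approximate).
    revenue = 2.68

    gross_profit = 0.96 * revenue      # ~2.57B at a 96% gross margin
    operating_income = 0.25 * revenue  # ~0.67B at a 25% operating margin
    rnd_expense = 0.42 * revenue       # ~1.13B expensed on R&D

    print(f"Gross profit:     ~${gross_profit:.2f}B")
    print(f"Operating income: ~${operating_income:.2f}B")
    print(f"R&D expense:      ~${rnd_expense:.2f}B")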

The IPO effectively puts a bow on the ongoing saga of Arm’s ownership, with SoftBank finally able to realize a return on its investment after Nvidia’s US$40 billion bid to buy Arm went sour in 2022. As of market close on September 20th, 2023, Arm held a market capitalization of US$54 billion, trading at an astronomical P/E ratio of 139.

As summer 2023 comes to a close, we are not left starving for interesting developments in the computing and processor industry.

Follow Yole Group for more insights and analysis into these markets and related technologies as they continue to evolve.

Stay tuned!
