Cadence and NVIDIA Expand Partnership to Reinvent Engineering for the Age of AI and Accelerated Computing

15 Apr 2026

Expanded collaboration combines agentic AI, physics-based simulation, and digital twins to accelerate engineering and unlock new levels of productivity across semiconductors, physical AI systems and AI factories

SAN JOSE, Calif.— At CadenceLIVE Silicon Valley 2026, Cadence (Nasdaq: CDNS) announced an expanded partnership with NVIDIA to deliver accelerated solutions across agentic AI, physics-based simulation and digital twins to unlock new levels of productivity and accelerate next‑generation engineering design flows across semiconductor design, physical AI systems and hyperscale AI factories.

By combining Cadence’s leadership in agentic AI-driven design, electronic design automation (EDA) and system design and analysis (SDA) with NVIDIA CUDA-X, AI physics and Omniverse libraries for industrial digital twin solutions, the two companies are redefining engineering productivity across three critical design domains—accelerating innovation at true agent speed.

“Agentic AI and digital twins are reshaping the entire engineering landscape—from semiconductor design to planetary‑scale AI systems,” said Anirudh Devgan, president and chief executive officer, Cadence. “Our expanded collaboration with NVIDIA accelerates the convergence of design and physical realization, connecting the Cadence AgentStack, Physical AI Stack, and AI factory digital twins with NVIDIA’s breakthroughs in accelerated computing to deliver unprecedented speed, accuracy and trust in simulation and system development.”

“We are at an inflection point in computing—CUDA-accelerated computing and AI are reinventing the engineering process,” said Jensen Huang, founder and CEO of NVIDIA. “For the first time, we can innovate in the digital world—exploring, testing, and optimizing ideas at unprecedented speed and scale—by building everything as full-fidelity digital twins first. Together, NVIDIA and Cadence are bringing this vision to life—transforming how engineers design, build and operate the world.”

Accelerating Cadence Tools for EDA and SDA

Cadence and NVIDIA are accelerating Cadence EDA and SDA solutions with NVIDIA CUDA-X, AI physics and Omniverse libraries and the Cadence® Millennium M2000 Supercomputer, powered by NVIDIA AI infrastructure. As part of this expanded collaboration, Cadence will accelerate its wide range of principled solvers and leverage AI physics models to deliver up to 100X speedups in engineering workflows.

Cadence EDA and SDA customers and partners, including Ascendence, Argonne National Laboratory, Honda R&D, Samsung and SK Hynix, are already leveraging Cadence solutions accelerated by NVIDIA to bring products to market faster.

AgentStack: Agentic AI for Next-Generation Chip Design

Cadence recently introduced its ChipStack AI Super Agent, which applies agentic AI combined with principled EDA tools to transform semiconductor RTL design and verification. Early deployments at more than 10 leading customers have already demonstrated up to a 10X productivity boost in their design and verification tasks.

Building on this foundation, Cadence today unveiled AgentStack, a head agent designed to orchestrate all aspects of semiconductor and system design. AgentStack extends the ChipStack AI Super Agent’s Mental Model and super-agent architecture beyond RTL and verification into physical design, custom/analog design and migration, and system-level design workflows. AgentStack connects Cadence agents with Cadence EDA platforms that leverage NVIDIA Nemotron and run on NVIDIA accelerated computing to orchestrate long-running, multi-agent workflows.

As an early partner, NVIDIA is adopting AgentStack in its semiconductor and system design flows and providing real-world feedback that will help Cadence harden and scale it for broader industry deployment. This evolution marks a significant shift from traditional script- and GUI-driven flows to agent-driven flows capable of reasoning over design hierarchies, relationships and protocols, dramatically compressing iteration cycles from days to hours.

Embedded Agentic AI for Physical AI

Beyond semiconductor design, Cadence and NVIDIA are extending their collaboration to embedded agentic AI for physical AI, combining the Cadence Physical AI Stack with NVIDIA robotics simulation libraries and accelerated computing to help close the critical “sim‑to‑real” gap for robots and autonomous machines. By integrating and accelerating Cadence’s high‑fidelity multiphysics simulation and AI workflows with NVIDIA Isaac open-source simulation libraries and Cosmos open-world models, customers gain an end‑to‑end, agent‑orchestrated workflow that links world‑model training, accurate physics, large‑scale scenario testing and continuous real‑world feedback.

At a high level, the joint stack coordinates AI agents across the full lifecycle—from training orchestration, physics surrogate training and policy optimization, to validation and deployment feedback. This workflow spans virtual training in NVIDIA Isaac Sim and Isaac Lab, evaluation through detailed Cadence physics models and mission‑scale scenario simulation in VTD (Virtual Test Drive) and VTDx—its extended high‑fidelity simulation environment for complex, real‑world scenarios.

The results are then deployed on NVIDIA Jetson robotics and edge AI systems, where a live virtual twin enables continuous monitoring and refinement. By embedding accurate physics throughout training, validation and inference, the Cadence–NVIDIA flow is designed to greatly accelerate experimentation while improving safety and confidence when physical AI systems are deployed in the real world.

AI Factory Digital Twins to Achieve Lowest Cost per Token

The collaboration also extends to AI factories, where Cadence integrates the NVIDIA Omniverse DSX Blueprint to enable next-generation AI factory digital twins that will help customers design, simulate and optimize large‑scale Vera Rubin and Grace Blackwell AI factories for training and inference. These AI factory digital twins focus on a critical new metric for hyperscale AI: tokens per watt, or the number of model tokens processed per unit of power consumed.

Using Cadence system analysis and data center simulation tools in combination with NVIDIA DSX libraries and the Omniverse DSX Blueprint, customers can explore tradeoffs in GPU power settings, system configurations and cooling architectures before deploying physical systems. In a joint 10-megawatt (MW) AI factory use case, modeling GPU operation at reduced power (MaxQ) demonstrated up to 17% more tokens per watt, translating to billions of dollars of incremental annual revenue per gigawatt for large-scale deployments and underscoring the value of simulation-driven design for AI factories.

Digital twins of NVIDIA DSX-based AI factories have also demonstrated that combining MaxQ operation with warmer coolant could yield roughly 32% more tokens per watt. By capturing the interactions between IT load, cooling systems, airflow and control logic in a high‑fidelity digital twin, operators can safely push their AI factories toward maximum tokens per watt while respecting power and thermal constraints.
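As a rough illustration of the tokens-per-watt metric described above, the sketch below works through the arithmetic with hypothetical numbers; the throughput and power figures are assumptions for the example only, not figures published by Cadence or NVIDIA:

```python
# Illustrative tokens-per-watt arithmetic (all inputs are hypothetical).

def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Model tokens processed per second, per watt of power consumed."""
    return tokens_per_second / power_watts

# Assumed baseline: a 10 MW AI factory serving 2.0M tokens/s.
baseline_tpw = tokens_per_watt(2_000_000, 10_000_000)  # 0.2 tokens/s per W

# The release cites up to 17% more tokens per watt under MaxQ operation,
# and roughly 32% more when MaxQ is combined with warmer coolant.
maxq_tpw = baseline_tpw * 1.17
maxq_warm_tpw = baseline_tpw * 1.32

# At a fixed 10 MW power budget, higher tokens/W means higher throughput:
maxq_tokens_per_s = maxq_tpw * 10_000_000  # 2.34M tokens/s
```

The point of the metric is that, at a fixed facility power budget, any gain in tokens per watt converts directly into additional serving throughput, which is what drives the revenue-per-gigawatt figures cited above.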

CadenceLIVE Showcase

Cadence will highlight its accelerated solutions, AgentStack, the Physical AI Stack and AI factory digital twin solutions, along with the expanded collaboration with NVIDIA, at CadenceLIVE 2026, where customers will see how the two companies are helping engineering teams move from early concepts and training to deployment more quickly and confidently.

About Cadence

Cadence is a market leader in AI and digital twins, pioneering the application of computational software to accelerate innovation in the engineering design of silicon to systems. Our design solutions, based on Cadence’s Intelligent System Design strategy, are essential for the world’s leading semiconductor and systems companies to build their next-generation products from chips to full electromechanical systems that serve a wide range of markets, including hyperscale computing, mobile communications, automotive, aerospace, industrial, life sciences and robotics. In 2024, Cadence was recognized by the Wall Street Journal as one of the world’s top 100 best-managed companies. Cadence solutions offer limitless opportunities—learn more at www.cadence.com.

© 2026 Cadence Design Systems, Inc. All rights reserved worldwide. Cadence, the Cadence logo, and the other Cadence marks found at www.cadence.com/go/trademarks are trademarks or registered trademarks of Cadence Design Systems, Inc. All other trademarks are the property of their respective owners.

For more information, please contact:
Cadence Newsroom
408-944-7039
[email protected]

Source: Cadence Design Systems, Inc.

Purpose-engineered for value, Intel Core Series 3 is built on the proven foundations of Intel Core Ultra Series 3 (code-named Panther Lake) and manufactured on the Intel 18A process node, the most advanced logic node developed and manufactured in the United States. The new processors are designed to transform computing for schools, small businesses, and value buyers, delivering the features people care about at unmatched scale. More than 70 designs from leading partners, offering a choice of features and form factors, will launch in the coming months.

What It Offers: Intel Core Series 3 presents an unmatched upgrade opportunity for small businesses and home users on a typical five-year upgrade cycle. Versus a five-year-old PC, Intel Core Series 3 delivers up to 47% better single-thread performance1, up to 41% better multi-thread performance2, and up to 2.8x better GPU AI performance3. These gains enable a new class of systems that raise expectations for what everyday computing can deliver.

Other notable new specifications and features include:

  • Intel Core Series 3 is Intel’s first hybrid AI-ready Core Series processor, supporting AI workloads with up to 40 platform TOPS.
  • Support for modern connectivity, including up to two integrated Thunderbolt™ 4 ports, Intel® Wi-Fi 7 (R2), and Intel® Bluetooth® 6.
  • Intel Core Series 3 is designed for all-day battery life4 and everyday productivity, with up to 2.1x faster creation and productivity5, up to 64% lower processor power6, and up to 2.7x better GPU AI performance versus previous-generation Intel Core 7 150U processors7.

Beyond the laptop, Intel Core Series 3 brings Intel innovation to essential edge deployments—from robotics and smart buildings to point-of-sale (POS) terminals and smart metering—delivering the right balance of performance, AI capability, and power efficiency for diverse real-world use cases.

Intel Core Series 3 processors scale from top-tier edge intelligence with integrated AI acceleration for vision and speech AI down to essential edge compute with cost-optimized, reliable compute. Compared to the NVIDIA Jetson Orin Nano, Intel Core 7 350 delivers up to 1.5x higher object detection performance8, up to 1.9x faster image classification9, and up to 2.2x higher performance for video analytics10.

When It’s Available: Intel Core Series 3 powered systems for consumer and commercial will be available from our OEM partners throughout the year, starting today, April 16, 2026. Refer to your preferred OEM vendor for specific system availability. Edge systems powered by Intel Core Series 3 will be available starting Q2 2026.

Partner systems launching today and later this year include:

  • Acer
  • Asus
  • Colorful
  • Dell Technologies
  • Hasee
  • Haier
  • Honor
  • HP
  • Infinix
  • Lenovo (Coming Soon)
  • ThinkCentre neoMi
  • Mechrevo
  • MSI
  • Positivo
  • Samsung (Coming Soon)
  • Tecno
  • Wiko


More Context: Intel Core Series 3 Press Deck | Intel Core Series 3 Product Page | AI Playground Essentials for Intel Core Series 3

The Small Print:

Performance varies by use, configuration and other factors. Learn more at www.intel.com/PerformanceIndex.

While Wi-Fi 7 is backward compatible with previous generations, new Wi-Fi 7 features require PCs configured with Intel Wi-Fi 7 solutions, PC OEM enabling, operating system support, and use with appropriate Wi-Fi 7 routers/APs/gateways. 6 GHz Wi-Fi 7 may not be available in all regions. More details at www.Intel.com/performance-wireless.

1-10 See press deck appendix for configuration details.

1 As measured by Cinebench 2024 Single Core. Intel Core 7 360 (PL1=15W) tested in Intel reference platform vs. Intel Core i7-1185G7 tested in Lenovo ThinkPad X1 Gen9 14 (PL1=20W). Results may vary.

2 As measured by Cinebench 2024 Multi Core. Intel Core 7 360 (PL1=15W) tested in Intel reference platform vs. Intel Core i7-1185G7 tested in Lenovo ThinkPad X1 Gen9 14 (PL1=20W). Results may vary.

3 As measured by Geekbench AI 1.6 GPU FP16. Intel Core 7 360 (PL1=15W) tested in Intel reference platform vs. Intel Core i7-1185G7 tested in Lenovo ThinkPad X1 Gen9 14 (PL1=20W). Results may vary.

4 “Designed for All Day Battery Life”: refers to laptops powered by Intel® Core™ Series 3 processors with minimum battery size and power efficient designs that leverage Intel’s latest architecture, advanced compute technology and power optimizations that combine to deliver extended battery life while performing office multitasking, video playback, web browsing, and standby time in a typical consumer PC usage scenario and realistic environment.

5 As measured by UL Procyon AI Computer Vision benchmark using OpenVINO; UL Procyon Office Productivity Overall Score; PugetBench Lightroom Classic; Cinebench 2026 Single Thread; PugetBench Photoshop; WebXPRT 5 (Chrome v.145). Intel Core 7 360 (PL1=15W) tested in Intel reference platform vs. Intel Core 7 150U (Raptor Lake Refresh, PL1=15W) tested in Intel reference platform. Results may vary.

6 As measured by processor power during YouTube 4K streaming workload. Intel Core 7 360 (PL1=15W) tested in Intel reference platform vs. Intel Core 7 150U (Raptor Lake Refresh, PL1=15W) tested in Intel reference platform. Results may vary.

7 As measured by Geekbench AI 1.6 GPU FP16. Intel Core 7 360 (PL1=15W) tested in Intel reference platform vs. Intel Core 7 150U (Raptor Lake Refresh, PL1=15W) tested in Intel reference platform. Results may vary.

8 As measured by inferences per second with yolo_v5m, INT8, BS8, TDP = 15W with GPU. Performance varies by use, configuration and other factors.

9 As measured by inferences per second with mobilenet-v2, INT8, BS8, TDP = 15W with GPU. Performance varies by use, configuration and other factors.

10 As measured by video streams at 1080p30, TDP = 15W. Medium AI Pipeline: Media Decode (1080p30 HEVC) + Preprocessing + Yolov5m_640x640 @ 10fps + Tracking + Resnet-50 @ 10 ips. Reference workload available on GitHub: https://github.com/open-edge-platform/edge-workloads-and-benchmarks. Performance varies by use, configuration and other factors.
