To meet the evolving performance and power-efficiency needs of generative AI (GenAI) and Physical AI models targeting AI-driven SoCs, Synopsys has announced an enhanced version of its silicon-proven ARC® NPX6 NPU IP family of AI accelerators. These enhanced NPU IPs, which are software-compatible with the existing NPX6 IP families, include:
- AI Data Compression: An enhanced hardware option supports input and output of pre-quantized, block-based data types, reducing memory footprint and bandwidth pressure for GenAI and Physical AI networks
- Load and Run: A new, simplified way for AI developers to run GenAI and LLM models on NPX. Simply load pre-quantized models and execute them directly on the NPX using standard LLM APIs, simplifying development and accelerating time to market
- Power Reduction: Fine-grained control over voltage variation reduces the cost of power delivery networks (PDN) and shortens time to physical design closure
Discover more with ARC® NPX6 NPU Processor IP:
- Watch Video: Enhanced NPX6 NPU IP tackles physical AI
- Read Article: Why Next Generation NPUs Are Essential for Physical AI
- Visit us at Embedded Vision Summit
- Download NPX6 NPU IP Datasheet