Expedera

Edge AI and Vision Alliance Announces 2026 Product Award Winners

Intel, Expedera, Synaptics, ENERZAi, Visionary.ai, SiMa.ai, Nota AI, John Deere and Starkey Take Honors

April 20, 2026, SANTA CLARA, CA, UNITED STATES /EINPresswire.com/ — The Edge AI and Vision Alliance today announced the 2026 winners of its Edge AI and Vision Product of the Year Awards and AI Innovation Awards. The Product of the […]


2026 Edge AI and Vision Product of the Year Award Winner Showcase: Expedera (Edge AI Processor IP)

Expedera’s Origin Evolution NPU IP has been awarded the 2026 Edge AI and Vision Product of the Year Award in the Edge AI Processors IP category. Origin Evolution is a unique IP, designed specifically for the evolving needs of LLMs and VLMs at the edge while maintaining full compatibility with popular legacy networks such as CNNs. […]


“Evolving Inference Processor Software Stacks to Support LLMs,” a Presentation from Expedera

Ramteja Tadishetti, Principal Software Engineer at Expedera, presents the “Evolving Inference Processor Software Stacks to Support LLMs” tutorial at the May 2025 Embedded Vision Summit. As large language models (LLMs) and vision-language models (VLMs) have quickly become important for edge applications from smartphones to automobiles, chipmakers and IP providers have struggled with how to adapt […]


Expedera’s Origin Evolution NPU IP Brings Generative AI to Edge Devices

Origin Evolution NPU IP uses Expedera’s unique packet-based architecture to achieve unprecedented NPU efficiency.

Highlights:
- Expedera launches its Origin Evolution™ NPU IP, bringing hardware acceleration to meet the computational demands of running LLMs on resource-constrained edge devices.
- New purpose-built hardware and software architecture runs LLMs and traditional neural networks with ultra-efficient PPA, providing fully scalable […]


“Challenges and Solutions of Moving Vision LLMs to the Edge,” a Presentation from Expedera

Costas Calamvokis, Distinguished Engineer at Expedera, presents the “Challenges and Solutions of Moving Vision LLMs to the Edge” tutorial at the May 2024 Embedded Vision Summit. OEMs, brands and cloud providers want to move LLMs to the edge, especially for vision applications. What are the benefits and challenges of doing so? In this talk, Calamvokis […]


Expedera’s Packet-based AI Processing Architecture: An Introduction

This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. Most NPUs available today are not actually optimized for AI processing. Rather, they are variations of former CPU, GPU, or DSP designs. Every neural network has varying processing and memory requirements and offers unique processing challenges that […]


indie Semiconductor Announces Strategic Investment in AI Processor Leader Expedera

Partnership Capitalizes on Expedera’s Breakthrough AI Capabilities in Support of indie’s ADAS Portfolio; Leverages indie’s Multi-modal Sensing Technology and Expedera’s Scalable NPU IP; Yields Class-leading Edge AI Performance at Ultra-low Power and Low Latency

March 20, 2024, 04:05 PM Eastern Daylight Time, ALISO VIEJO, Calif. & SANTA CLARA, Calif. (BUSINESS WIRE) — indie Semiconductor, Inc. (Nasdaq: INDI), an Autotech […]


Expedera NPUs Run Large Language Models Natively on Edge Devices

Highlights:
- Expedera NPU IP adds native support for LLMs, including Stable Diffusion
- Origin NPUs deliver the power-performance profile needed to run LLMs on edge and portable devices

SANTA CLARA, Calif., Jan. 8, 2024 /PRNewswire/ — Expedera, Inc., a leading provider of customizable Neural Processing Unit (NPU) semiconductor intellectual property (IP), announced today that its Origin […]


AI Silicon IP Provider Expedera Opens R&D Office in Singapore

Highlights:
- Expedera opens a new Singapore R&D center
- The company’s fifth development center, with additional locations in Santa Clara (USA), Bath (UK), Shanghai, and Taipei

SANTA CLARA, Calif., Oct. 24, 2023 /PRNewswire/ — Expedera Inc., a leading provider of scalable Neural Processing Unit (NPU) semiconductor intellectual property (IP), today announced the opening of its newest R&D […]


A Packet-based Architecture For Edge AI Inference

This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. Despite significant improvements in throughput, edge AI accelerators (Neural Processing Units, or NPUs) are still often underutilized. Inefficient management of weights and activations leads to fewer available cores utilized for multiply-accumulate (MAC) operations. Edge AI applications frequently […]
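The underutilization described in the excerpt above can be made concrete with a back-of-the-envelope calculation: utilization is the ratio of the MAC throughput a workload actually sustains to the accelerator's theoretical peak. The sketch below illustrates this; the MAC count, clock rate, and achieved throughput are hypothetical figures for illustration, not Expedera specifications.

```python
def npu_utilization(achieved_macs_per_s: float, num_mac_units: int, clock_hz: float) -> float:
    """Fraction of an NPU's peak MAC throughput actually achieved by a workload."""
    # Peak assumes every MAC unit completes one multiply-accumulate per cycle.
    peak_macs_per_s = num_mac_units * clock_hz
    return achieved_macs_per_s / peak_macs_per_s

# Hypothetical edge NPU: 4096 MAC units at 1 GHz -> 4.096 TMAC/s peak.
# If stalls on weight/activation movement limit the workload to 1.2 TMAC/s:
u = npu_utilization(1.2e12, 4096, 1e9)
print(f"{u:.0%}")  # → 29%
```

Architectures that keep weights and activations flowing to the MAC array (the motivation behind packet-based scheduling in the post above) aim to push this ratio closer to 1.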


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411