Expedera

“Challenges and Solutions of Moving Vision LLMs to the Edge,” a Presentation from Expedera

Costas Calamvokis, Distinguished Engineer at Expedera, presents the “Challenges and Solutions of Moving Vision LLMs to the Edge” tutorial at the May 2024 Embedded Vision Summit. OEMs, brands and cloud providers want to move LLMs to the edge, especially for vision applications. What are the benefits and challenges of doing…

Expedera’s Packet-based AI Processing Architecture: An Introduction

This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. Most NPUs available today are not actually optimized for AI processing. Rather, they are variations of former CPU, GPU, or DSP designs. Every neural network has varying processing and memory requirements and offers unique processing challenges that…

indie Semiconductor Announces Strategic Investment in AI Processor Leader Expedera

Partnership Capitalizes on Expedera’s Breakthrough AI Capabilities in Support of indie’s ADAS Portfolio
Leverages indie’s Multi-modal Sensing Technology and Expedera’s Scalable NPU IP
Yields Class-leading Edge AI Performance at Ultra-low Power and Low Latency

March 20, 2024 04:05 PM Eastern Daylight Time–ALISO VIEJO, Calif. & SANTA CLARA, Calif.–(BUSINESS WIRE)–indie Semiconductor, Inc. (Nasdaq: INDI), an Autotech…

Expedera NPUs Run Large Language Models Natively on Edge Devices

Highlights:
- Expedera NPU IP adds native support for LLMs, including Stable Diffusion
- Origin NPUs deliver the power-performance profile needed to run LLMs on edge and portable devices

SANTA CLARA, Calif., Jan. 8, 2024 /PRNewswire/ — Expedera, Inc., a leading provider of customizable Neural Processing Unit (NPU) semiconductor intellectual property (IP), announced today that its Origin…

AI Silicon IP Provider Expedera Opens R&D Office in Singapore

Highlights:
- Expedera opens a new Singapore R&D center
- The company’s fifth development center, with additional locations in Santa Clara (USA), Bath (UK), Shanghai, and Taipei

SANTA CLARA, Calif., Oct. 24, 2023 /PRNewswire/ — Expedera Inc., a leading provider of scalable Neural Processing Unit (NPU) semiconductor intellectual property (IP), today announced the opening of its newest R&D…

A Packet-based Architecture For Edge AI Inference

This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. Despite significant improvements in throughput, edge AI accelerators (Neural Processing Units, or NPUs) are still often underutilized. Inefficient management of weights and activations leads to fewer available cores utilized for multiply-accumulate (MAC) operations. Edge AI applications frequently…

A Buyer’s Guide to an NPU

This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. Choosing the right inference NPU (Neural Processing Unit) is a critical decision for a chip architect. There’s a lot at stake because the AI landscape constantly changes, and the choices will impact overall product cost, performance, and…

Can Compute-in-memory Bring New Benefits To Artificial Intelligence Inference?

This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. Compute-in-memory (CIM) is not necessarily an Artificial Intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution.

Expedera Announces LittleNPU AI Processors for Always-sensing Camera Applications

Highlights:
- Specialized NPU IP makes it easier for device makers to implement feature-rich, always-sensing camera systems
- A dedicated processing solution addresses the power and privacy concerns of always-sensing camera deployments

Santa Clara, California, July 18, 2023—Expedera Inc., a leading provider of scalable Neural Processing Unit (NPU) semiconductor intellectual property (IP), today announced the availability of…

Expedera Expands Global Footprint with Its First European Development Center

Highlights:
- Expedera opens its first European regional engineering development center in the UK
- The company’s fourth development center focused on edge AI inference, with locations in Santa Clara (USA), Bath (UK), Shanghai and Taipei

Santa Clara, California, June 27, 2023—Expedera Inc., a leading provider of scalable Neural Processing Unit (NPU) semiconductor intellectual property (IP), today…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411