Embedded Vision Summit 2025

“MPU+: A Transformative Solution for Next-Gen AI at the Edge,” a Presentation from FotoNation

Petronel Bigioi, CEO of FotoNation, presents the “MPU+: A Transformative Solution for Next-Gen AI at the Edge” tutorial at the May 2025 Embedded Vision Summit. In this talk, Bigioi introduces MPU+, a novel programmable, customizable low-power platform for real-time, localized intelligence at the edge. The platform includes an AI-augmented image…

“Evolving Inference Processor Software Stacks to Support LLMs,” a Presentation from Expedera

Ramteja Tadishetti, Principal Software Engineer at Expedera, presents the “Evolving Inference Processor Software Stacks to Support LLMs” tutorial at the May 2025 Embedded Vision Summit. As large language models (LLMs) and vision-language models (VLMs) have quickly become important for edge applications from smartphones to automobiles, chipmakers and IP providers have…

“Efficiently Registering Depth and RGB Images,” a Presentation from eInfochips

Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Efficiently Registering Depth and RGB Images” tutorial at the May 2025 Embedded Vision Summit. As depth sensing and computer vision technologies evolve, integrating RGB and depth cameras has become crucial for reliable and precise scene perception. In this session, Nakrani presents…

“How to Right-size and Future-proof a Container-first Edge AI Infrastructure,” a Presentation from Avassa and OnLogic

Carl Moberg, CTO of Avassa, and Zoie Rittling, Business Development Manager at OnLogic, co-present the “How to Right-size and Future-proof a Container-first Edge AI Infrastructure” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Moberg and Rittling provide practical guidance on overcoming key challenges in deploying AI at the…

“Image Tokenization for Distributed Neural Cascades,” a Presentation from Google and VeriSilicon

Derek Chow, Software Engineer at Google, and Shang-Hung Lin, Vice President of NPU Technology at VeriSilicon, co-present the “Image Tokenization for Distributed Neural Cascades” tutorial at the May 2025 Embedded Vision Summit. Multimodal LLMs promise to bring exciting new abilities to devices! As we see foundational models become more capable…

“Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP,” a Presentation from Synopsys

Gordon Cooper, Principal Product Manager at Synopsys, presents the “Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP” tutorial at the May 2025 Embedded Vision Summit. In this talk, Cooper discusses emerging trends in generative AI for edge devices and…

“Bridging the Gap: Streamlining the Process of Deploying AI onto Processors,” a Presentation from SqueezeBits

Taesu Kim, Chief Technology Officer at SqueezeBits, presents the “Bridging the Gap: Streamlining the Process of Deploying AI onto Processors” tutorial at the May 2025 Embedded Vision Summit. Large language models (LLMs) often demand hand-coded conversion scripts for deployment on each distinct processor-specific software stack—a process that’s time-consuming and prone…

“From Enterprise to Makers: Driving Vision AI Innovation at the Extreme Edge,” a Presentation from Sony Semiconductor Solutions

Amir Servi, Edge Deep Learning Product Manager at Sony Semiconductor Solutions, presents the “From Enterprise to Makers: Driving Vision AI Innovation at the Extreme Edge” tutorial at the May 2025 Embedded Vision Summit. Sony’s unique integrated sensor-processor technology is enabling ultra-efficient intelligence directly at the image source, transforming vision AI…

“Addressing Evolving AI Model Challenges Through Memory and Storage,” a Presentation from Micron

Wil Florentino, Senior Segment Marketing Manager at Micron, presents the “Addressing Evolving AI Model Challenges Through Memory and Storage” tutorial at the May 2025 Embedded Vision Summit. In the fast-changing world of artificial intelligence, the industry is deploying more AI compute at the edge. But the growing diversity and data…

“Why It’s Critical to Have an Integrated Development Methodology for Edge AI,” a Presentation from Lattice Semiconductor

Sreepada Hegade, Director of ML Systems and Software at Lattice Semiconductor, presents the “Why It’s Critical to Have an Integrated Development Methodology for Edge AI” tutorial at the May 2025 Embedded Vision Summit. The deployment of neural networks near sensors brings well-known advantages such as lower latency, privacy and reduced…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411