Embedded Vision Summit 2025

“Efficiently Registering Depth and RGB Images,” a Presentation from eInfochips

Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Efficiently Registering Depth and RGB Images” tutorial at the May 2025 Embedded Vision Summit. As depth sensing and computer vision technologies evolve, integrating RGB and depth cameras has become crucial for reliable and precise scene perception. In this session, Nakrani presents…
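
At its core, depth-to-RGB registration means reprojecting every depth pixel through both cameras’ calibration so the depth map lines up with the color image. The sketch below is a minimal illustration of that idea (not the pipeline from the talk): it back-projects the depth map to 3D with the depth camera’s intrinsics, transforms the points into the RGB camera’s frame, and reprojects them with the RGB intrinsics. All parameter names and the nearest-pixel scatter are illustrative assumptions.

```python
# Minimal sketch of depth-to-RGB registration with pinhole camera models.
# Camera parameters are placeholders, not values from the eInfochips talk.
import numpy as np

def register_depth_to_rgb(depth, K_d, K_rgb, R, t, rgb_shape):
    """Warp a depth map into the RGB camera's image plane.

    depth     : (H, W) depth in meters from the depth camera
    K_d, K_rgb: 3x3 intrinsic matrices of the depth and RGB cameras
    R, t      : rotation (3x3) and translation (3,) from depth to RGB frame
    rgb_shape : (H_rgb, W_rgb) of the target RGB image
    """
    h, w = depth.shape
    # Pixel grid of the depth image in homogeneous coordinates, shape (3, N).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project to 3D points in the depth camera frame.
    pts = np.linalg.inv(K_d) @ pix * depth.reshape(-1)

    # Transform into the RGB camera frame and project with its intrinsics.
    proj = K_rgb @ (R @ pts + t[:, None])
    z = proj[2]
    valid = z > 0
    u_r = np.round(proj[0, valid] / z[valid]).astype(int)
    v_r = np.round(proj[1, valid] / z[valid]).astype(int)

    # Scatter depths into the RGB frame (nearest pixel, no z-buffering).
    out = np.zeros(rgb_shape, dtype=depth.dtype)
    keep = (u_r >= 0) & (u_r < rgb_shape[1]) & (v_r >= 0) & (v_r < rgb_shape[0])
    out[v_r[keep], u_r[keep]] = z[valid][keep]
    return out
```

A production pipeline would also handle lens distortion, occlusions and hole-filling, which this sketch omits for brevity.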

“How to Right-size and Future-proof a Container-first Edge AI Infrastructure,” a Presentation from Avassa and OnLogic

Carl Moberg, CTO of Avassa, and Zoie Rittling, Business Development Manager at OnLogic, co-present the “How to Right-size and Future-proof a Container-first Edge AI Infrastructure” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Moberg and Rittling provide practical guidance on overcoming key challenges in deploying AI at the…

“Image Tokenization for Distributed Neural Cascades,” a Presentation from Google and VeriSilicon

Derek Chow, Software Engineer at Google, and Shang-Hung Lin, Vice President of NPU Technology at VeriSilicon, co-present the “Image Tokenization for Distributed Neural Cascades” tutorial at the May 2025 Embedded Vision Summit. Multimodal LLMs promise to bring exciting new abilities to devices! As we see foundational models become more capable,…
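
For readers unfamiliar with the term, image tokenization turns an image into a sequence of tokens a language model can consume. One common approach (ViT-style patch embedding, shown below as a rough sketch; the talk’s specific tokenizer may differ) splits the image into fixed-size patches, flattens each patch, and linearly projects it into the model’s embedding space. Dimensions here are illustrative.

```python
# Illustrative patch-based image tokenization (ViT-style), one common way
# to turn an image into a token sequence for a multimodal LLM.
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into a sequence of flattened patch tokens."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    # (H/p, p, W/p, p, C) -> (num_patches, p*p*C)
    return (image.reshape(h // patch, patch, w // patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, patch * patch * c))

# A 224x224 RGB image becomes 196 tokens of dimension 16*16*3 = 768,
# then a learned linear projection maps each token to the model width.
img = np.random.rand(224, 224, 3).astype(np.float32)
tokens = patchify(img)                                   # shape: (196, 768)
W_embed = np.random.rand(768, 512).astype(np.float32)    # placeholder weights
embedded = tokens @ W_embed                              # shape: (196, 512)
```

In a distributed cascade, a compact token sequence like this is what gets handed between the on-device and off-device stages, which is why the tokenizer’s efficiency matters.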

“Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP,” a Presentation from Synopsys

Gordon Cooper, Principal Product Manager at Synopsys, presents the “Key Requirements to Successfully Implement Generative AI in Edge Devices—Optimized Mapping to the Enhanced NPX6 Neural Processing Unit IP” tutorial at the May 2025 Embedded Vision Summit. In this talk, Cooper discusses emerging trends in generative AI for edge devices and…

“Bridging the Gap: Streamlining the Process of Deploying AI onto Processors,” a Presentation from SqueezeBits

Taesu Kim, Chief Technology Officer at SqueezeBits, presents the “Bridging the Gap: Streamlining the Process of Deploying AI onto Processors” tutorial at the May 2025 Embedded Vision Summit. Large language models (LLMs) often demand hand-coded conversion scripts for deployment on each distinct processor-specific software stack—a process that’s time-consuming and prone…

“From Enterprise to Makers: Driving Vision AI Innovation at the Extreme Edge,” a Presentation from Sony Semiconductor Solutions

Amir Servi, Edge Deep Learning Product Manager at Sony Semiconductor Solutions, presents the “From Enterprise to Makers: Driving Vision AI Innovation at the Extreme Edge” tutorial at the May 2025 Embedded Vision Summit. Sony’s unique integrated sensor-processor technology is enabling ultra-efficient intelligence directly at the image source, transforming vision AI…

“Addressing Evolving AI Model Challenges Through Memory and Storage,” a Presentation from Micron

Wil Florentino, Senior Segment Marketing Manager at Micron, presents the “Addressing Evolving AI Model Challenges Through Memory and Storage” tutorial at the May 2025 Embedded Vision Summit. In the fast-changing world of artificial intelligence, the industry is deploying more AI compute at the edge. But the growing diversity and data…

“Why It’s Critical to Have an Integrated Development Methodology for Edge AI,” a Presentation from Lattice Semiconductor

Sreepada Hegade, Director of ML Systems and Software at Lattice Semiconductor, presents the “Why It’s Critical to Have an Integrated Development Methodology for Edge AI” tutorial at the May 2025 Embedded Vision Summit. The deployment of neural networks near sensors brings well-known advantages such as lower latency, privacy and reduced…

“Solving Tomorrow’s AI Problems Today with Cadence’s Newest Processor,” a Presentation from Cadence

Amol Borkar, Product Marketing Director at Cadence, presents the “Solving Tomorrow’s AI Problems Today with Cadence’s Newest Processor” tutorial at the May 2025 Embedded Vision Summit. Artificial intelligence is rapidly integrating into every aspect of technology. While the neural processing unit (NPU) often receives the majority of the spotlight as…

“State-space Models vs. Transformers for Ultra-low-power Edge AI,” a Presentation from BrainChip

Tony Lewis, Chief Technology Officer at BrainChip, presents the “State-space Models vs. Transformers for Ultra-low-power Edge AI” tutorial at the May 2025 Embedded Vision Summit. At the embedded edge, choices of language model architectures have profound implications on the ability to meet demanding performance, latency and energy efficiency requirements. In…
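
The architectural trade-off at stake is easy to see in code. A state-space model generates each token from a fixed-size recurrent state, so per-token compute and memory stay constant, while transformer attention must revisit a key-value cache that grows with sequence length. The toy sketch below illustrates this contrast under assumed placeholder dimensions; it is not BrainChip’s model.

```python
# Toy contrast of per-token cost: SSM recurrence vs. growing KV cache.
# All matrices and dimensions are arbitrary placeholders.
import numpy as np

d_state, d_model = 16, 64
A = np.random.rand(d_state, d_state) * 0.1   # state transition (kept contractive)
B = np.random.rand(d_state, d_model)         # input projection
C = np.random.rand(d_model, d_state)         # output projection

def ssm_step(state, x):
    """O(1) per token: update a constant-size state, keep no history."""
    state = A @ state + B @ x
    return state, C @ state

state = np.zeros(d_state)
kv_cache = []                                # transformer analogue
for t in range(1000):
    x = np.random.rand(d_model)
    state, y = ssm_step(state, x)            # memory stays at d_state floats
    kv_cache.append(x)                       # attention memory grows with t
```

That constant per-token footprint is what makes state-space architectures attractive for the power and memory budgets of embedded devices.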
