Embedded Vision Summit 2025

“Voice Interfaces on a Budget: Building Real-time Speech Recognition on Low-cost Hardware,” a Presentation from Useful Sensors

Pete Warden, CEO of Useful Sensors, presents the “Voice Interfaces on a Budget: Building Real-time Speech Recognition on Low-cost Hardware” tutorial at the May 2025 Embedded Vision Summit. In this talk, Warden presents Moonshine, a speech-to-text model that outperforms OpenAI’s Whisper by a factor of five in terms of speed.…
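
The summary does not say how “real-time” is measured, but a common yardstick for speech recognizers is the real-time factor (RTF): processing time divided by audio duration, with RTF below 1.0 meaning the model keeps up with live audio. As an illustrative sketch (the numbers below are hypothetical, not from the talk), a 5x speedup divides RTF by five:

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF < 1.0 means the recognizer keeps up with live audio."""
    return processing_seconds / audio_seconds

# Hypothetical figures for illustration: a baseline model that needs 2.0 s
# to transcribe 10 s of audio has RTF 0.2; a model 5x faster has RTF 0.04,
# leaving more headroom on a slow, low-cost CPU.
baseline_rtf = real_time_factor(2.0, 10.0)  # 0.2
faster_rtf = baseline_rtf / 5               # 0.04
```

On low-cost hardware the baseline RTF is much higher to begin with, which is why a constant-factor speedup can be the difference between usable and unusable.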

“Computer Vision at Sea: Automated Fish Tracking for Sustainable Fishing,” a Presentation from Tryolabs and The Nature Conservancy

Alicia Schandy Wood, Machine Learning Engineer at Tryolabs, and Vienna Saccomanno, Senior Scientist at The Nature Conservancy, co-present the “Computer Vision at Sea: Automated Fish Tracking for Sustainable Fishing” tutorial at the May 2025 Embedded Vision Summit. What occurs between the moment a commercial fishing vessel departs from shore and…
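
The talk concerns automated fish tracking. One common building block for multi-object tracking — not necessarily the method Tryolabs uses — is associating each frame’s detections to existing tracks by intersection-over-union (IoU). A minimal greedy sketch:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_tracks(tracks, detections, threshold=0.3):
    """Greedily assign detections to tracks; returns {track_id: detection_index}.

    tracks: {track_id: box}, detections: list of boxes.
    """
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, threshold
        for i, dbox in enumerate(detections):
            if i in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Production trackers refine this with motion prediction and optimal (rather than greedy) assignment, but the IoU-gating idea is the same.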

“Squinting Vision Pipelines: Detecting and Correcting Errors in Vision Models at Runtime,” a Presentation from Squint AI

Ken Wenger, Chief Technology Officer at Squint AI, presents the “Squinting Vision Pipelines: Detecting and Correcting Errors in Vision Models at Runtime” tutorial at the May 2025 Embedded Vision Summit. As humans, when we look at a scene, our first impressions are sometimes wrong; we need to take a second…
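
The abstract’s “second look” idea can be illustrated — purely as our own sketch, not Squint AI’s actual method — by a runtime check that flags predictions whose top two class probabilities are too close to trust, routing them for re-inspection instead of accepting them blindly:

```python
def needs_second_look(probs, margin=0.15):
    """Flag a prediction when the top two class probabilities are close.

    probs: per-class probabilities for one prediction.
    Returns True when the margin between the best and runner-up class
    is below `margin`, i.e. the model is effectively guessing.
    """
    top = sorted(probs, reverse=True)
    return (top[0] - top[1]) < margin

# A confident prediction passes; an ambiguous one is flagged for a second pass
# (e.g. re-running with a different crop, or deferring to a larger model).
```

The `margin` value is an assumption for illustration; in practice it would be calibrated on a validation set.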

“ONNX and Python to C++: State-of-the-art Graph Compilation,” a Presentation from Quadric

Nigel Drego, Co-founder and Chief Technology Officer at Quadric, presents the “ONNX and Python to C++: State-of-the-art Graph Compilation” tutorial at the May 2025 Embedded Vision Summit. Quadric’s Chimera general-purpose neural processor executes complete AI/ML graphs—all layers, including pre- and post-processing functions traditionally run on separate DSP processors. To enable…
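
A graph compiler must emit code for each operator only after all of its producers — that is, in a topological order of the graph. As an illustrative sketch (a toy op graph, not Quadric’s Chimera toolchain), Kahn’s algorithm computes such a schedule:

```python
from collections import deque

def schedule(graph):
    """Topologically order an op graph (Kahn's algorithm).

    graph: {node: [consumers]}. Each op is emitted only after every
    op that produces one of its inputs -- the ordering constraint any
    graph compiler must respect when lowering ONNX to C++.
    """
    indeg = {n: 0 for n in graph}
    for outs in graph.values():
        for m in outs:
            indeg[m] += 1
    ready = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle")
    return order

# Toy graph: input -> conv -> relu -> add (residual from conv) -> softmax
g = {"input": ["conv"], "conv": ["relu", "add"],
     "relu": ["add"], "add": ["softmax"], "softmax": []}
```

Real compilers layer fusion, tiling, and memory planning on top of this ordering, but the dependency constraint is the starting point.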

“Beyond the Demo: Turning Computer Vision Prototypes into Scalable, Cost-effective Solutions,” a Presentation from Plainsight Technologies

Kit Merker, CEO of Plainsight Technologies, presents the “Beyond the Demo: Turning Computer Vision Prototypes into Scalable, Cost-effective Solutions” tutorial at the May 2025 Embedded Vision Summit. Many computer vision projects reach proof of concept but stall before production due to high costs, deployment challenges and infrastructure complexity. This presentation…

“Running Accelerated CNNs on Low-power Microcontrollers Using Arm Ethos-U55, TensorFlow and Numpy,” a Presentation from OpenMV

Kwabena Agyeman, President of OpenMV, presents the “Running Accelerated CNNs on Low-power Microcontrollers Using Arm Ethos-U55, TensorFlow and Numpy” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Agyeman introduces the OpenMV AE3 and OpenMV N6 low-power, high-performance embedded machine vision cameras, which are 200x better than the…
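
Running CNNs on integer NPUs such as the Ethos-U55 typically means 8-bit affine quantization: real-valued activations and weights are mapped to int8 via a scale and zero point. A minimal sketch of the arithmetic (illustrative values, not OpenMV-specific):

```python
def quantize(x, scale, zero_point):
    """Map a real value to int8 using the affine scheme q = round(x/scale) + zp,
    clipped to the int8 range -- the representation integer NPUs execute on."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate real value from its int8 code."""
    return (q - zero_point) * scale

# With scale 0.05 and zero point 0, the value 1.0 maps to the int8 code 20;
# values beyond the representable range saturate at the int8 limits.
```

The scale and zero point are chosen per tensor (or per channel) during model conversion so that the int8 range covers the observed value distribution.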

“Scaling i.MX Applications Processors’ Native Edge AI with Discrete AI Accelerators,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of AI ML Strategy and Technologies for Edge Processing at NXP Semiconductors, presents the “Scaling i.MX Applications Processors’ Native Edge AI with Discrete AI Accelerators” tutorial at the May 2025 Embedded Vision Summit. The integration of discrete AI accelerators with edge processors is poised to revolutionize…

“A Re-imagination of Embedded Vision System Design,” a Presentation from Imagination Technologies

Dennis Laudick, Vice President of Product Management and Marketing at Imagination Technologies, presents the “A Re-imagination of Embedded Vision System Design” tutorial at the May 2025 Embedded Vision Summit. Embedded vision applications, with their demand for ever more processing power, have been driving up the size and complexity of edge…

“MPU+: A Transformative Solution for Next-Gen AI at the Edge,” a Presentation from FotoNation

Petronel Bigioi, CEO of FotoNation, presents the “MPU+: A Transformative Solution for Next-Gen AI at the Edge” tutorial at the May 2025 Embedded Vision Summit. In this talk, Bigioi introduces MPU+, a novel programmable, customizable low-power platform for real-time, localized intelligence at the edge. The platform includes an AI-augmented image…

“Evolving Inference Processor Software Stacks to Support LLMs,” a Presentation from Expedera

Ramteja Tadishetti, Principal Software Engineer at Expedera, presents the “Evolving Inference Processor Software Stacks to Support LLMs” tutorial at the May 2025 Embedded Vision Summit. As large language models (LLMs) and vision-language models (VLMs) have quickly become important for edge applications from smartphones to automobiles, chipmakers and IP providers have…
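
One reason LLMs strain edge inference stacks is the key/value cache, whose memory footprint grows linearly with sequence length and must be budgeted by the runtime. A back-of-the-envelope sketch (the model configuration below is hypothetical, not from the talk):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Total key + value cache for one sequence: the factor of 2 covers
    keys and values, each stored per layer, per KV head, per position."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical config: 32 layers, 8 KV heads, head dim 128, 4096-token
# context, fp16 elements -> 512 MiB of cache for a single sequence,
# a substantial fraction of a typical edge device's DRAM.
cache = kv_cache_bytes(32, 8, 128, 4096, 2)  # 536870912 bytes = 512 MiB
```

Arithmetic like this is why edge LLM stacks add KV-cache quantization, paging, and context-length limits on top of the CNN-era scheduling they already had.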
