Videos

“Using a Neural Processor for Always-sensing Cameras,” a Presentation from Expedera

Sharad Chole, Chief Scientist and Co-founder of Expedera, presents the “Using a Neural Processor for Always-sensing Cameras” tutorial at the May 2023 Embedded Vision Summit. Always-sensing cameras are becoming a common AI-enabled feature of consumer devices, much like the always-listening Siri or Google assistants. They can enable a more natural and seamless user experience, such […]

“A New, Open-standards-based, Open-source Programming Model for All Accelerators,” a Presentation from Codeplay Software

Charles Macfarlane, Chief Business Officer at Codeplay Software, presents the “New, Open-standards-based, Open-source Programming Model for All Accelerators” tutorial at the May 2023 Embedded Vision Summit. As demand for AI grows, developers are attempting to squeeze more and more performance from accelerators. Ideally, developers would choose the accelerators best suited to their applications. Unfortunately, today

“Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using CEVA’s Parallel Processing Framework,” a Presentation from CEVA

Rami Drucker, Machine Learning Software Architect at CEVA, presents the “Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using CEVA’s Parallel Processing Framework” tutorial at the May 2023 Embedded Vision Summit. Next-generation AI and computer vision applications for autonomous vehicles, cameras, drones and robots require higher-than-ever computing power. Often, the most efficient way

“Streamlining Embedded Vision Development with Smart Vision Components,” a Presentation from Basler

Selena Schwarm, Team Lead for Global Partner Management at Basler, presents the “Streamlining Embedded Vision Development with Smart Vision Components” tutorial at the May 2023 Embedded Vision Summit. The evolution of embedded vision and imaging technologies is enabling the development of powerful applications that would not have been practical previously. The possibilities seem to be

“A Very Low-power Human-machine Interface Using ToF Sensors and Embedded AI,” a Presentation from 7 Sensing Software

Di Ai, Machine Learning Engineer at 7 Sensing Software, presents the “Very Low-power Human-machine Interface Using ToF Sensors and Embedded AI” tutorial at the May 2023 Embedded Vision Summit. Human-machine interaction is essential for smart devices. But growing needs for low power consumption and privacy pose challenges to developers of human-machine interfaces (HMIs). Time-of-flight (ToF)

“AI-ISP: Adding Real-time AI Functionality to Image Signal Processing with Reduced Memory Footprint and Processing Latency,” a Presentation from VeriSilicon

Mankit Lo, Chief Architect for NPU IP Development at VeriSilicon, presents the “AI-ISP: Adding Real-time AI Functionality to Image Signal Processing with Reduced Memory Footprint and Processing Latency” tutorial at the May 2023 Embedded Vision Summit. The AI-ISP IP product from VeriSilicon is a revolutionary solution that adds AI functionality to image signal processing (ISP)

“Developing an Efficient Automotive Augmented Reality Solution Using Teacher-student Learning and Sprints,” a Presentation from STRADVISION

Jack Sim, CTO of STRADVISION, presents the “Developing an Efficient Automotive Augmented Reality Solution Using Teacher-student Learning and Sprints” tutorial at the May 2023 Embedded Vision Summit. ImmersiView is a deep learning–based augmented reality solution for automotive safety. It uses a head-up display to draw a driver’s attention to important objects. The development of such

“Introducing the i.MX 93: Your ‘Go-to’ Processor for Embedded Vision,” a Presentation from NXP Semiconductors

Srikanth Jagannathan, Product Manager at NXP Semiconductors, presents the “Introducing the i.MX 93: Your ‘Go-to’ Processor for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit. In this presentation, you’ll learn all about NXP’s just-launched i.MX 93 applications processor family. The i.MX 93 is built with NXP’s innovative Energy Flex architecture, which delivers high

“How to Select, Train, Optimize and Deploy Edge Vision AI Models in Three Days,” a Presentation from Nota AI

Steven Kim, Co-CEO of Nota America, presents the “How to Select, Train, Optimize and Deploy Edge Vision AI Models in Three Days” tutorial at the May 2023 Embedded Vision Summit. NetsPresso, as explained by Kim in this presentation, is a development pipeline that enables developers to build, optimize and deploy vision AI models faster and

“Optimized Image Processing for Automotive Image Sensors with Novel Color Filter Arrays,” a Presentation from Nextchip

Young-Jun Yoo, Vice President of the Automotive Business and Operations Unit at Nextchip, presents the “Optimized Image Processing for Automotive Image Sensors with Novel Color Filter Arrays” tutorial at the May 2023 Embedded Vision Summit. Traditionally, image sensors have been optimized to produce images that look natural to humans. For images consumed by algorithms, what

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411