Edge AI and Vision Insights: November 8, 2023 Edition

Dear Colleague,

2024 Embedded Vision Summit Registration

I’m excited to announce that registration is now open for the 2024 Embedded Vision Summit, coming up May 21-23 in Santa Clara, California! It’s the premier conference and tradeshow for innovators incorporating computer vision and visual or perceptual AI in products.

This year we’ll be offering a program packed with:

  • 100+ expert speakers
  • 80+ technology exhibits
  • 100s of demos, and
  • 1,100+ represented companies

all designed to cover the most important technical and business aspects of practical computer vision, deep learning and perceptual AI. Register by December 31st and you can save 35%; trust me, that’s the best price we’ll offer!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Open Standards Unleash Hardware Acceleration for Embedded Vision
Khronos Group
Offloading visual processing to a hardware accelerator has many advantages for embedded vision systems. Decoupling hardware and software removes barriers to innovation and shortens time to market. Using hardware acceleration, embedded vision system manufacturers can integrate and deploy new components more easily, provide cross-generation reusability and facilitate field upgradability. Embedded system hardware acceleration is made possible by open standards for acceleration and interfacing, which reduce costs and lower the barriers to advanced techniques such as inferencing and vision acceleration. In this presentation, Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, explains how hardware accelerators can help offload vision processing and inferencing to speed development. He gives an overview of the Khronos Group family of open standards for programming and deploying accelerated inferencing and embedded vision and shares the latest progress from the Kamaros API Working Group, which is dedicated to the development of an open-standard camera system API.

Your “Go-To” Processor for Embedded Vision
NXP Semiconductors
In this presentation, you’ll learn all about NXP’s just-launched i.MX 93 applications processor family. The i.MX 93 is built with NXP’s innovative Energy Flex architecture, which delivers high performance, low power consumption and incredible versatility at an affordable price. Srikanth Jagannathan, Product Manager at NXP Semiconductors, introduces the i.MX 93 processing cores, including two high-performance Arm Cortex-A55 CPUs, a Cortex-M33 for low-power, real-time operation, and an Arm Ethos-U65 NPU that enables high-performance, cost-effective and energy-efficient ML applications. Jagannathan also shares NPU architecture details and benchmark results, and explores the i.MX 93 processors’ rich set of on-chip peripherals, such as MIPI-CSI, MIPI-DSI, Ethernet and USB. And he introduces the eIQ software toolkit, which allows developers to create complete system-level applications with ease. Jagannathan shows how the rich feature set of the i.MX 93 processors supports demanding embedded vision applications through examples, such as a driver monitoring system. And he highlights NXP’s unique commitment to industrial-grade quality, product longevity and customer support.


Modernizing the Development of AI-based IoT Devices
Sony Midokura
IoT device development has traditionally relied on a monolithic approach, with all firmware developed by a single vendor using a rigid waterfall model, typically in C, and infrequently updated. This paradigm is no longer sufficient. For AI-enabled IoT devices to reach their potential, developers must be able to easily program and update low-cost, low-power AI-capable sensors, such as Sony’s IMX500. In this talk, Dan Mihai Dumitriu, Chief Technology Officer of Midokura, a Sony Group Company, presents Wedge, a combination of runtime, device agent and cloud service. Wedge automates software life-cycle management for devices, provides isolation and enables agile development. Wedge is based on WebAssembly, a binary instruction format for a stack-based virtual machine. Wedge’s runtime and memory overhead are within 2x of those of native C/C++ code, and it supports languages such as Python and AssemblyScript. Dumitriu also presents the vision sensing pipeline, a sensor data processing layer developed on top of Wedge, which is accessible via a REST API and a visual interface.

Streamlining Embedded Vision Development with Smart Vision Components
Basler
The evolution of embedded vision and imaging technologies is enabling the development of powerful applications that would not have been practical previously. The possibilities seem to be endless. Yet, developing embedded vision solutions is challenging. There are many hardware and software components that must be integrated, and the associated complexity can be daunting. For example, there are many camera interfaces (MIPI, USB, LVDS), processing engines (GPU, NPU, ISP) and algorithms (image processing, classical computer vision, deep learning) to be selected, configured and integrated. In this talk, Selena Schwarm, Team Lead for Global Partner Management at Basler, introduces Basler’s latest optimized software architecture, along with the company’s compatible hardware and software components, which together enable seamless development and smooth deployment of state-of-the-art, production-ready embedded vision applications.


Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events


Cadence Expands Its Tensilica IP Portfolio with New HiFi and Vision DSPs for Pervasive Intelligence and Edge AI Inference

Visionary.ai’s Latest Software ISP Solution Brings Night Vision to Mobile Systems

Qualcomm’s New SoCs for Mobile Communications and Computing Platforms Support On-chip Accelerated Generative AI

IDS’ Upcoming Insight Imaging Online Event Covers the Latest Image Processing Developments in 2D, 3D and AI

Synaptics Introduces the DVF120 AI SoC Optimized for Advanced Enterprise Unified Communication and Collaboration Products

More News


Conservation X Labs Sentinel Smart Camera (Best Consumer Edge AI End Product)
Conservation X Labs
Conservation X Labs’ Sentinel Smart Camera is the 2023 Edge AI and Vision Product of the Year Award winner in the Consumer Edge AI End Products category. The Sentinel Smart Camera is an AI-enabled field monitoring system that can help conservationists better understand and protect wildlife, as well as the people working alongside it in the field. Sentinel is the hardware and software base of a fully integrated AI camera platform for wildlife conservation and field research. Traditionally, remote-camera solutions are challenged by harsh conditions, limited access to power, and difficult data transmission, often making it hard to obtain information in an actionable timeframe. Sentinel applies AI to modern sensors and connectivity to deliver a faster, longer-running, more effective option straight out of the box. Running onboard detection algorithms, Sentinel doesn’t just passively collect visual data; it can autonomously detect and address the greatest threats on the frontlines of the biodiversity crisis, including poaching and wildlife trafficking, invasive species, and threats to endangered species. This robust technology gives conservationists real-time information on events in the wild and the ability to respond to these threats through smart, data-driven decisions.

Please see here for more information on Conservation X Labs’ Sentinel Smart Camera. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411