Edge AI and Vision Insights: August 16, 2023 Edition


Battery-powered Edge AI Sensing: A Case Study Implementing Low-power, Always-on Capability
Avnet
The trend of pushing AI/ML capabilities to the edge brings design challenges around the need to combine high-performance computing (for AI/ML algorithms) with low power consumption (to enable battery-powered sensing systems). Designers are often faced with a choice between an applications processor that lacks the required performance or exceeds the system's power or cost budget, and a GPU or NPU, which can consume watts of power and reduce battery life to minutes. Neither approach is suitable for size-constrained, battery-powered, always-on edge AI/ML systems. Using a case study of Avnet's new smart sensing RASynBoard, Peter Fenn, Director of the Advanced Applications Group at Avnet, explores architecture and component selection trade-offs that enable always-on sensing capability. He explains novel design techniques, from clock gating to power partitioning. Through this presentation, you'll gain a better understanding of how you can add deep learning capabilities to your next design while reducing power to extend battery life and minimizing the size and cost of your smart product.

A Very Low-power Human-machine Interface Using ToF Sensors and Embedded AI
7 Sensing Software
Human-machine interaction is essential for smart devices. But growing needs for low power consumption and privacy pose challenges for developers of human-machine interfaces (HMIs). Time-of-flight (ToF) sensors can be a good match for these requirements. In this presentation, Di Ai, Machine Learning Engineer at 7 Sensing Software, shows how the combination of ams OSRAM's ToF sensors and 7 Sensing Software's embedded AI technology can be applied to create very low-power yet robust HMI solutions. While the ToF sensors' very low power consumption and low resolution are advantageous for battery life and privacy, they are constraints for developing accurate and robust sensing applications. Making ToF sensors smarter by using embedded AI is the key to overcoming these challenges. The highly efficient 7 Sensing Software AI algorithms are small enough to be embedded in microcontrollers such as those used in sensor hubs, and they have been deployed in gesture-based touchless user interfaces and in laptops for smart wake-up/leave-lock.


Toward the Era of AI Everywhere
DEEPX
Over the past decade, deep neural networks have proven they can solve a wide range of visual perception problems. Semiconductor companies have invested billions of dollars in developing edge AI processors for deep learning, and thousands of companies have invested in visual AI solutions. Yet, in most industries, visual AI solutions are not widely deployed at scale. Edge AI is mandatory in many applications due to privacy, reliability, cost, and processing latency concerns. These applications require edge AI processor solutions that support smarter and more efficient state-of-the-art (SOTA) AI algorithms, maintain GPU-like AI accuracy, are easy to use, and deliver high performance with low power consumption. So far, edge AI processors and tools have not satisfied these requirements, which has limited the adoption of edge AI. In this talk, Lokwon Kim, CEO of DEEPX, shares DEEPX's vision of enabling edge AI everywhere and its strategy for achieving it. He describes key innovations that DEEPX has implemented to deliver extreme performance, with GPU-like AI accuracy and flexibility, at price and power consumption levels that will enable edge AI solutions to be deployed at scale across all industries.

Enabling Ultra-low Power Edge Inference and On-device Learning
BrainChip
The AIoT industry is expected to reach $1T by 2030—but that will happen only if edge devices rapidly become more intelligent. In this presentation, Nandan Nayampally, Chief Marketing Officer at BrainChip, shows how BrainChip’s Akida IP solution enables improved edge ML accuracy and on-device learning with extreme energy efficiency. Akida is a fully digital, neuromorphic, event-based AI engine that offers unique on-device learning abilities, minimizing the need for cloud retraining. Nayampally demonstrates Akida’s compelling performance and extreme energy efficiency on complex models and explains how Akida executes spatial-temporal convolutions using innovative handling of 3D and 1D data. He also shows how Akida supports low-power implementations of vision transformers and introduces the Akida developer ecosystem, which enables both AI experts and newcomers to quickly deploy disruptive edge AI applications that weren’t possible before.


Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events


Intel Joins the PyTorch Foundation to Drive AI Everywhere

An Upcoming Webinar from e-Con Systems Explores How to Solve Imaging Challenges in Microscopy, while BrainChip and Edge Impulse’s Upcoming Webinar Offers a Neuromorphic Deep Dive into Next-gen Edge AI Solutions

CEVA Doubles Down on Generative AI with Its Enhanced NeuPro-M NPU IP Family

New AMD Radeon PRO W7000 Series Workstation Graphics Cards Deliver Advanced Technologies and High Performance for Mainstream Professional Developer Workflows

Hailo’s New SoC and PCIe Cards Expand the Hailo-8 AI Accelerator Portfolio

More News


Conservation X Labs Sentinel Smart Camera (Best Consumer Edge AI End Product)
Conservation X Labs
Conservation X Labs’ Sentinel Smart Camera is the 2023 Edge AI and Vision Product of the Year Award winner in the Consumer Edge AI End Products category. The Sentinel Smart Camera is an AI-enabled field monitoring system that can help better understand and protect wildlife and the people who work alongside it in the field. Sentinel is the hardware and software base of a fully integrated AI camera platform for wildlife conservation and field research. Traditionally, remote-camera solutions are challenged by harsh conditions, limited access to power, and difficult data transmission, often making it hard to obtain information in an actionable timeframe. Sentinel applies AI to modern sensors and connectivity to deliver a faster, longer-running, more effective option straight out of the box. Running onboard detection algorithms, Sentinel doesn’t just passively collect visual data; it can autonomously detect and address the greatest threats on the frontlines of the biodiversity crisis, including poaching and wildlife trafficking, invasive species, and endangered species. This robust technology gives conservationists real-time information on events in the wild and the ability to respond to these threats through smart, data-driven decisions.

Please see here for more information on Conservation X Labs’ Sentinel Smart Camera. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411