Edge AI and Vision Insights: September 17, 2025

LETTER FROM THE EDITOR

Dear Colleague,

Next Tuesday, September 23, 2025 at 9 am PT, the Yole Group will present the free webinar “Infrared Imaging: Technologies, Trends, Opportunities and Forecasts” in partnership with the Edge AI and Vision Alliance. Infrared (IR) cameras are finding increasing adoption in a wide range of applications, as an adjunct or alternative to visible light cameras. Their ability to detect and measure energy below the visible light region of the electromagnetic spectrum makes them useful as thermal imagers, sensing objects (and their movements) whose temperature differs from that of the ambient environment. Infrared light can also penetrate materials that visible light cannot, such as dust, fog, smoke, and thin walls.

Combining visible light and infrared data results in a richer multispectral understanding of a scene. The infrared spectrum subdivides into short-wave (SWIR), mid-wave (MWIR) and long-wave (LWIR) regions, each with unique sensor technologies. Both uncooled and cooled sensor subsystems, each with its own tradeoffs, contend for market acceptance.

This webinar, presented by Axel Clouet, Ph.D., senior market and technology analyst for imaging at the Yole Group, will discuss the history and current status of the IR imaging ecosystem, along with technology and market trends, including comparisons with alternative imaging approaches. Clouet will cover both high-end applications such as military and surveillance systems and high-volume opportunities such as automotive and consumer electronics, and will share Yole’s latest market forecasts. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

ADVANCING OBJECT RECOGNITION AT THE EDGE

Computer Vision at Sea: Automated Fish Tracking for Sustainable Fishing

What occurs between the moment a commercial fishing vessel departs from shore and its return? How sustainable is its catch? Current frameworks often rely on self-reporting, which can result in errors or misrepresentation, particularly regarding bycatch. By equipping fishing vessels with on-deck cameras, leveraging edge devices for tracking and counting, and transmitting predictions to the cloud, we can create a daily risk index that promptly alerts stakeholders to suspicious activities. The index integrates data from the computer vision system with metadata such as GPS coordinates, vessel speed and the captain’s log. In this 2025 Embedded Vision Summit talk, Alicia Schandy Wood, Machine Learning Engineer at Tryolabs, and Vienna Saccomanno, Senior Scientist at The Nature Conservancy, outline the development and deployment of an AI- and vision-based solution for monitoring commercial fishing, including challenges encountered and lessons learned from the initial deployment.
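
The speakers’ scoring method isn’t detailed in this summary, but the general shape of such a risk index can be sketched in a few lines. In the hypothetical Python below, a day’s vision-system counts are combined with vessel metadata into a 0-to-1 score; all field names, weights and thresholds are invented for illustration.

    # Hypothetical sketch of a daily fishing-risk index. The inputs, field
    # names and weights are invented for illustration; they are not the
    # scoring method described in the talk.
    from dataclasses import dataclass

    @dataclass
    class DailySummary:
        fish_counted: int            # fish counted by the on-deck vision system
        catch_reported: int          # catch reported in the captain's log
        bycatch_detections: int      # detections of protected / non-target species
        hours_in_closed_area: float  # from GPS track vs. closed-area polygons

    def risk_index(day: DailySummary) -> float:
        """Return a 0-1 risk score; higher means more suspicious."""
        # Discrepancy between what the cameras saw and what was reported.
        reported = max(day.catch_reported, 1)
        discrepancy = abs(day.fish_counted - day.catch_reported) / reported
        score = 0.5 * min(discrepancy, 1.0)
        # Bycatch detections and time spent in a closed area raise the score.
        score += 0.3 * min(day.bycatch_detections / 5.0, 1.0)
        score += 0.2 * min(day.hours_in_closed_area / 2.0, 1.0)
        return min(score, 1.0)

    print(risk_index(DailySummary(fish_counted=480, catch_reported=300,
                                  bycatch_detections=2, hours_in_closed_area=0.0)))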

Visual Search: Fine-grained Recognition with Embedding Models for the Edge

In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of various types, from people to license plates. While these models are impressive, in real-world applications we often need to differentiate among a large number of custom items. For example, in addition to knowing that there is a car, you may want to know the exact make and model of that car. For these sorts of tasks, what you really want is a visual search capability that can identify an object from a catalog without requiring a new model to be trained when categories are added. In this 2025 Embedded Vision Summit presentation, Omid Azizi, Co-Founder of Gimlet Labs, describes how embedding models can be used to perform visual search in such applications. He explains how to use and fine-tune these models, including tips on how to train an embedding model such that new objects can be added without retraining.
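
As a minimal sketch of the general pattern (not Azizi’s specific pipeline), the Python below keeps a catalog of embedding vectors and matches queries by cosine similarity; embed() is a placeholder for a real pretrained or fine-tuned embedding model. The key property is that adding a catalog item just adds a vector, with no retraining.

    # Minimal embedding-based visual search sketch. embed() is a placeholder
    # for a real embedding model (e.g., a CNN or ViT backbone with the
    # classifier head removed); the rest is the catalog-lookup logic.
    import numpy as np

    def embed(image: np.ndarray) -> np.ndarray:
        # Placeholder: a real system would run a pretrained/fine-tuned model.
        rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
        v = rng.standard_normal(128)
        return v / np.linalg.norm(v)

    class VisualSearchIndex:
        def __init__(self):
            self.labels: list[str] = []
            self.vectors: list[np.ndarray] = []

        def add_item(self, label: str, image: np.ndarray) -> None:
            """New catalog items require only an embedding -- no retraining."""
            self.labels.append(label)
            self.vectors.append(embed(image))

        def query(self, image: np.ndarray, top_k: int = 3):
            q = embed(image)
            sims = np.stack(self.vectors) @ q  # cosine similarity (unit vectors)
            best = np.argsort(-sims)[:top_k]
            return [(self.labels[i], float(sims[i])) for i in best]

    index = VisualSearchIndex()
    index.add_item("2021 Model X", np.zeros((8, 8), dtype=np.uint8))
    index.add_item("2019 Model Y", np.ones((8, 8), dtype=np.uint8))
    print(index.query(np.zeros((8, 8), dtype=np.uint8)))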

INCREASING VISION MODEL ACCURACY

Squinting Vision Pipelines: Detecting and Correcting Errors in Vision Models at Runtime

As humans, when we look at a scene our first impressions are sometimes wrong; we need to take a second look, to squint and reassess. Squinting enables us to focus our attention on the subject we are investigating and often clarifies and corrects our initial assumptions. Computer vision algorithms, including those enabled through artificial intelligence and machine learning pipelines, can also be made to squint. As the role of AI/ML algorithms in automation matures, the cost of mistakes will increase dramatically. Consider use cases in the healthcare industry, such as cancer diagnosis and research, and in autonomous systems. In this 2025 Embedded Vision Summit presentation, Ken Wenger, Chief Technology Officer at Squint AI, shows how the Squint Insights Studio platform uses explainable AI to add context and reasoning to vision model decisions, enabling the user to build vision pipelines that continuously assess and revise vision model predictions in production environments.
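
The abstract doesn’t detail how the platform implements this, but one generic way to make a pipeline “squint” is test-time augmentation: re-run the model on perturbed copies of the input and escalate for review when the predictions disagree. The toy Python below illustrates the idea with a placeholder classifier; it is not Squint AI’s method.

    # Generic "second look" check via test-time augmentation agreement.
    # classify() is a placeholder for a real vision model; the escalation
    # logic is the point of the sketch.
    import numpy as np

    def classify(image: np.ndarray) -> tuple[str, float]:
        # Placeholder model: mean brightness decides the label in this toy.
        mean = float(image.mean())
        return ("bright", mean) if mean > 0.5 else ("dark", 1.0 - mean)

    def augment(image: np.ndarray, seed: int) -> np.ndarray:
        rng = np.random.default_rng(seed)
        noisy = image + rng.normal(0.0, 0.05, image.shape)  # mild noise
        return np.clip(noisy, 0.0, 1.0)

    def squinting_predict(image: np.ndarray, n_views: int = 5):
        """Re-run the model on perturbed views; flag disagreement for review."""
        labels = [classify(augment(image, seed))[0] for seed in range(n_views)]
        top = max(set(labels), key=labels.count)
        agreement = labels.count(top) / n_views
        needs_review = agreement < 0.8  # low agreement -> take a second look
        return top, agreement, needs_review

    print(squinting_predict(np.full((16, 16), 0.50)))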

Strategies for Image Dataset Curation from High-volume Industrial IoT Data

In industrial supply chain and logistics applications, edge IoT devices continuously capture images and metadata, generating massive volumes of data. For embedded vision systems, managing this volume can be challenging, and selecting a diverse subset of high-quality data is crucial for effective modeling and analysis. In this 2025 Embedded Vision Summit talk, Dan Bricarello, Computer Vision Lead, and Apurva Godghase, Senior Computer Vision Engineer, both of Brambles, share a comprehensive method for selecting relevant images from an extensive dataset to create a high-quality image database that enables building and monitoring computer vision and machine learning models. This systematic approach not only enhances the efficiency of data management in industrial IoT applications but also improves the generalizability and accuracy of computer vision models.
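
The talk’s specific curation method isn’t spelled out here, but a common building block for diversity-driven selection is greedy farthest-point (k-center) sampling over image embeddings, sketched below in Python with random stand-in embeddings.

    # Greedy farthest-point (k-center) selection over image embeddings:
    # repeatedly pick the image farthest from everything chosen so far.
    # A common diversity-sampling building block, shown here on random
    # stand-in embeddings rather than a real industrial dataset.
    import numpy as np

    def select_diverse_subset(embeddings: np.ndarray, k: int) -> list[int]:
        chosen = [0]  # seed with an arbitrary first image
        # Distance from every image to its nearest already-chosen image.
        dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
        for _ in range(k - 1):
            nxt = int(np.argmax(dist))          # farthest from the chosen set
            chosen.append(nxt)
            new_dist = np.linalg.norm(embeddings - embeddings[nxt], axis=1)
            dist = np.minimum(dist, new_dist)   # update nearest-chosen distance
        return chosen

    rng = np.random.default_rng(0)
    embeddings = rng.standard_normal((10_000, 64))  # stand-in for real embeddings
    subset = select_diverse_subset(embeddings, k=100)
    print(len(subset), subset[:5])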

UPCOMING INDUSTRY EVENTS

Infrared Imaging: Technologies, Trends, Opportunities and Forecasts – Yole Group Webinar: September 23, 2025, 9:00 am PT

Embedded Vision Summit: May 11-13, 2026, Santa Clara, California

More Events

FEATURED NEWS

Axelera AI Boosts LLMs at the Edge by 2x with Metis M.2 Max Introduction

Smarter, Faster, More Personal AI Delivered on Consumer Devices with Arm’s New Lumex CSS Platform

Qualcomm and BMW Group Unveil Groundbreaking Automated Driving System with Jointly Developed Software Stack

Andes Technology Announces D23-SE, a Functional Safety RISC-V Core with DCLS and Split-lock for ASIL-B/D Automotive Applications

CLIKA Raises Seed Round to Accelerate AI Deployment Everywhere

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

ENERZAi Optimium (Best Edge AI Development Platform)

ENERZAi’s Optimium is the 2025 Edge AI and Vision Product of the Year Award Winner in the Edge AI Development Platform category. Optimium is a software development platform designed to overcome the functional limitations of existing engines and facilitate the convenient deployment of edge AI models with optimal performance. It enhances the inference speed of AI models on target hardware without sacrificing accuracy and simplifies deployment across various hardware using a single tool. Utilizing ENERZAi’s proprietary optimization techniques, Optimium has shown superior performance compared to existing engines: AI models deployed with Optimium achieved significantly faster inference speeds than those deployed with traditional engines on a variety of hardware platforms, including Arm, Intel and AMD. According to ENERZAi’s benchmarks, it is the fastest inference engine for deploying computer vision models on CPUs. By performing hardware-aware inference optimization tailored to each device, Optimium is well suited to implementing high-performing, power-efficient edge AI applications on resource-constrained edge devices.

Optimium aims to accelerate AI model inference on target hardware while maintaining accuracy and enabling seamless deployment across different hardware platforms with a single tool. To achieve this performance and flexibility, ENERZAi developed Nadya, its proprietary metaprogramming language. Nadya plays a crucial role in model tuning, which involves finding the optimal parameter combinations for each layer of an AI model: it generates code for various parameter combinations through metaprogramming and compiles that code for optimized execution on the target hardware. Unlike programming languages commonly used for high-performance computing, such as C, C++ and Rust, which require manual coding tailored to the target hardware, Nadya allows programmers to automatically generate compatible code for various types of hardware from a single implementation. This metaprogramming capability enables convenient deployment across diverse hardware platforms with one tool, whereas existing inference engines often require a different tool for each target, complicating and slowing AI model deployment. With Optimium, the time and costs associated with AI model deployment can be significantly reduced.
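
Nadya itself isn’t publicly documented in this summary, but the tuning loop it automates can be illustrated generically: generate candidate implementations from parameter combinations, time each on the target hardware and keep the fastest. The toy Python below varies only a chunk size rather than generating compiled code, purely to show the search structure.

    # Toy illustration of hardware-aware parameter tuning: benchmark each
    # candidate parameter value on the actual machine and keep the fastest.
    # Real systems (like the metaprogramming approach described above)
    # generate and compile native code per candidate; this sketch only
    # varies a chunk size in pure Python/NumPy.
    import time
    import numpy as np

    data = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)

    def chunked_sum(x: np.ndarray, chunk: int) -> float:
        return float(sum(x[i:i + chunk].sum() for i in range(0, len(x), chunk)))

    def tune(candidates, repeats: int = 5):
        best_param, best_time = None, float("inf")
        for chunk in candidates:
            t0 = time.perf_counter()
            for _ in range(repeats):
                chunked_sum(data, chunk)
            elapsed = (time.perf_counter() - t0) / repeats
            if elapsed < best_time:
                best_param, best_time = chunk, elapsed
        return best_param, best_time

    param, secs = tune([1_024, 8_192, 65_536, 262_144])
    print(f"fastest chunk size on this machine: {param} ({secs * 1e3:.2f} ms)")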

Please see here for more information on ENERZAi’s Optimium. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411