Edge AI and Vision Insights: January 10, 2024 Edition

Dear Colleague,

Vision Tank

The 9th Annual Vision Tank Start-Up Competition is now open for submissions!

  • Do you know an early-stage start-up?
  • Are they developing a new product or service that uses or enables computer vision or visual AI?
  • Do you think they’re doing something innovative and impactful?

If so, it’s easy to nominate them.

If your nominee is one of the five finalists, you’ll receive up to three Embedded Vision Summit passes for you and your teammates (a $3,585 value). Or, if you’re an early-stage start-up developing a new product or service that uses or enables computer vision or visual AI, we invite you to submit an application yourself.

Entries close February 8. Don’t miss out on this great opportunity. Nominate now!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Fundamentals of Training AI Models for Computer Vision Applications
GMAC Intelligence
In this presentation, Amit Mate, the founder and CEO of GMAC Intelligence, introduces the essential aspects of training convolutional neural networks (CNNs). He discusses the prerequisites for training, including models, data and training frameworks, with an emphasis on the characteristics of data needed for effective training. He also explores the model training process using visuals to explain the error surface and gradient-based learning techniques. Mate’s discussion covers key hyperparameters, loss functions and how to monitor the health of the training process. He also addresses the common training problems of overfitting and underfitting, and offers practical rules of thumb for mitigating these issues. Finally, he introduces popular training frameworks and provides resources for further learning.
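The gradient-based learning that Mate illustrates with error-surface visuals can be sketched in a few lines. This is a toy example of our own (not from the talk), using a one-dimensional convex loss to show how the learning-rate hyperparameter drives descent toward the minimum:

```python
# Minimal sketch of gradient-based learning on a toy error surface.
# The loss, gradient, and values are illustrative, not from the presentation.

def loss(w):
    # A simple convex "error surface" with its minimum at w = 3.0
    return (w - 3.0) ** 2

def grad(w):
    # Analytic gradient of the loss above
    return 2.0 * (w - 3.0)

def train(w0, lr, steps):
    """Plain gradient descent: repeatedly step against the gradient.
    The learning rate lr is the key hyperparameter here: too large and
    the updates overshoot; too small and convergence is slow."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_final = train(w0=0.0, lr=0.1, steps=100)  # ends very close to 3.0
```

Monitoring how the loss value evolves across such steps is the simplest form of the training-health checks the talk describes.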

Deep Neural Network Training: Diagnosing Problems and Implementing Solutions
Sensor Cortek
In this talk, Fahed Hassanat, Chief Operating Officer and Head of Engineering at Sensor Cortek, delves into some of the most common problems that arise when training deep neural networks. He provides a brief overview of essential training metrics, including accuracy, precision, false positives, false negatives and F1 score. Hassanat then explores training challenges that arise from problems with hyperparameters, inappropriately sized models, inadequate models, poor-quality datasets, imbalances within training datasets and mismatches between training and testing datasets. To help detect and diagnose training problems, he also covers techniques such as understanding performance curves, recognizing overfitting and underfitting, analyzing confusion matrices and identifying class interaction issues.
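The training metrics Hassanat reviews all derive from the four cells of a binary confusion matrix. As a quick reference (our own minimal sketch, not code from the talk), they can be computed like this:

```python
# Standard classification metrics from raw confusion-matrix counts:
# tp/fp/fn/tn = true positives, false positives, false negatives, true negatives.
# This is an illustrative sketch; the variable names are our own.

def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    # Precision: of everything predicted positive, how much was right?
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Recall: of all real positives, how many did we find? (penalizes false negatives)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

acc, p, r, f1 = metrics(tp=80, fp=10, fn=20, tn=90)
```

A gap between high accuracy and low recall on a minority class is one of the dataset-imbalance symptoms the talk teaches you to spot in a confusion matrix.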


Next-generation Computer Vision Methods for Automated Navigation of Unmanned Aircraft
Immervision
Unmanned aircraft systems (UASs) need to perform accurate autonomous navigation using sense-and-avoid algorithms under varying illumination conditions. This requires robust algorithms able to perform consistently, even when image quality is poor. In this presentation, Julie Buquet, Applied Researcher for Imaging and AI at Immervision, shares the results of Immervision’s research on the impact of noise and blur on corner detection algorithms and CNN-based 2D object detectors used for drone navigation. Specifically, she shows how to fine-tune these algorithms to make them effective in extreme low light (0.5 lux) and on images with high levels of noise or blur. She also highlights the main benefits of using such computer vision methods for drone navigation.
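One common way to fine-tune detectors for the degraded conditions Buquet studies is to apply noise and blur to training images. The sketch below is a hypothetical augmentation pipeline of our own, not Immervision's actual method:

```python
import numpy as np

# Hypothetical augmentation sketch (not Immervision's pipeline): degrade
# clean training images with sensor-like noise and blur so a detector can
# be fine-tuned for low-light (e.g. 0.5 lux) captures.

def add_noise(img, sigma, rng):
    """Additive Gaussian noise, a rough stand-in for low-light sensor noise."""
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def box_blur(img, k):
    """Separable k x k box blur via shifted sums (edges wrap, which is
    acceptable for an illustrative augmentation)."""
    out = img.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for shift in range(-(k // 2), k // 2 + 1):
            acc += np.roll(out, shift, axis=axis)
        out = acc / k
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))            # stand-in for a grayscale training image
degraded = box_blur(add_noise(img, sigma=0.1, rng=rng), k=3)
```

Training on a mix of clean and degraded images like these is a standard route to the robustness under noise and blur that the presentation evaluates.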

A Computer Vision System for Autonomous Satellite Maneuvering
SCOUT Space
Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately in training datasets. Few autonomous systems experience more challenging conditions than those in orbit. In this talk, Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, describes how SCOUT Space has designed and trained satellite vision systems using dynamic and physically informed synthetic image datasets. Harris describes how his company generates synthetic data for this challenging environment and how it leverages new real-world data to improve its datasets. In particular, he explains how these synthetic datasets account for and can replicate real sources of noise and error in the orbital environment, and how his company supplements them with in-space data from the first SCOUT-Vision system, which has been in orbit since 2021.


Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision – e-con Systems Webinar: January 24, 2024, 9:00 am PT

Optimizing Camera Design for Machine Perception Via End-to-end Camera Simulation – Immervision Webinar: February 6, 2024, 9:00 am PT

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events


Imagination Technologies and MulticoreWare Collaboration Accelerates Automotive Compute Workloads on Texas Instruments SoCs

AMD Reshapes the Automotive Industry with Advanced AI Engines and Elevated In-Vehicle Experiences

BrainChip Unveils an Akida Neuromorphic Processor Enabled by Microchip Technology’s 32-bit MPU

STMicroelectronics Reveals a New Global-shutter Image Sensor That Offers High Resolution In a Compact Form Factor with Low Power Consumption

D3 Announces a mmWave Radar Kit Based on a Texas Instruments Sensor

More News


Deci Deep Learning Development Platform (Best Edge AI Developer Tool)
Deci
Deci’s Deep Learning Development Platform is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Developer Tools category. Deci’s platform gives AI developers a new way to develop production-grade computer vision models. With Deci, teams simplify and accelerate the development process using advanced tools to build, train, optimize and deploy highly accurate, efficient models to any environment, including edge devices. Models developed with Deci combine high accuracy, speed and compute efficiency, allowing teams to unlock new applications on resource-constrained edge devices and migrate workloads from cloud to edge. Deci also enables teams to shorten development time and lower operational costs by up to 80%. The platform is powered by Deci’s proprietary Automated Neural Architecture Construction (AutoNAC) technology, an algorithmic optimization engine that generates best-in-class deep learning model architectures for vision tasks. With AutoNAC, teams can easily build custom, hardware-aware, production-grade models that deliver better-than-state-of-the-art performance.

Please see here for more information on Deci’s Deep Learning Development Platform. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411