Software for Embedded Vision

Is End-to-end the Endgame for Level 4 Autonomy?
Examples of modular, end-to-end, and hybrid software architectures deployed in autonomous vehicles. Autonomous vehicle technology has evolved significantly over the past year. The two market leaders, Waymo and Apollo Go, both have fleets of over 1,000 vehicles and operate in multiple cities, and a mix of large companies such as Nvidia and Aptiv, OEMs such…

Microchip Technology Demonstration of AI-powered Face ID on the PolarFire SoC FPGA Using the VectorBlox SDK
Avery Williams, Channel Marketing Manager for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Williams demonstrates ultra-efficient AI-powered facial recognition on Microchip’s PolarFire SoC FPGA using the VectorBlox Accelerator SDK. Pre-trained neural networks are quantized to INT8 and compiled to run directly on…
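The excerpt cuts off before describing the compile step, but the pattern it names – post-training INT8 quantization of a pre-trained network – is common across edge toolchains. Below is a minimal sketch using TensorFlow Lite as a generic stand-in; the VectorBlox SDK’s actual flow is not detailed in the excerpt.

```python
import numpy as np
import tensorflow as tf

# Start from a pre-trained network (MobileNetV2 as a placeholder).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Calibration data drives the INT8 range estimation. Random tensors
# stand in here; real sample images should be used in practice.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization: weights and activations in INT8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("face_id_int8.tflite", "wb") as f:
    f.write(converter.convert())
```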

How to Think About Large Language Models on the Edge
This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. ChatGPT was released to the public on November 30th, 2022, and the world – at least, the connected world – has not been the same since. Surprisingly, almost three years later, despite massive adoption, we do not…

3LC Demonstration of Catching Synthetic Slip-ups with 3LC
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates the investigation of a curious embryo classification study from Norway, where synthetic data was supposed to help train a model – but something didn’t quite hatch right. Using 3LC to…

Software-defined Vehicles Drive Next-generation Auto Architectures
SDV Level Chart: Major OEMs compared. The automotive industry is undergoing a foundational shift toward Software-Defined Vehicles (SDVs), where vehicle functionality, user experience, and monetization opportunities are governed increasingly by software rather than hardware. This evolution, captured comprehensively in the latest IDTechEx report, “Software-Defined Vehicles, Connected Cars, and AI in Cars 2026-2036: Markets, Trends, and…

One Year of Qualcomm AI Hub: Enabling Developers and Driving the Future of AI
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The past year has been an incredible journey for Qualcomm AI Hub. We’ve seen remarkable growth, innovation and momentum — and we’re only getting started. Qualcomm AI Hub has become a key resource for developers looking to…

3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset – like unlabeled forks and phantom objects – using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on a…

VeriSilicon Demonstration of the Open Se Cura Project
Chris Wang, VP of Multimedia Technologies and a member of the CTO office at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Wang demonstrates examples from the Open Se Cura Project, a joint effort between VeriSilicon and Google. The project showcases a scalable, power-efficient, and…

R²D²: Training Generalist Robots with NVIDIA Research Workflows and World Foundation Models
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. A major challenge in robotics is training robots to perform new tasks without the massive effort of collecting and labeling datasets for every new task and environment. Recent research efforts from NVIDIA aim to solve this challenge…

Synopsys Demonstration of SiEngine’s AD1000 ADAS Chip, Powered by Synopsys NPX6 NPU IP
Gordon Cooper, Principal Product Manager at Synopsys, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Cooper demonstrates the powerful SiEngine AD1000 ADAS chip, which features Synopsys NPX6 NPU IP, along with its robust toolchain, including a debugger, profiler, and simulator. Learn how the platform supports TensorFlow, ONNX, and…
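The excerpt ends mid-list, but TensorFlow and ONNX are named as supported front ends. As a generic illustration (not the Synopsys or SiEngine toolchain itself), a trained model is typically exported to ONNX before an NPU compiler ingests it; a minimal PyTorch export sketch:

```python
import torch
import torchvision

# Export a pre-trained model to ONNX, the interchange format that
# NPU toolchains commonly accept. The model choice is illustrative.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
dummy_input = torch.randn(1, 3, 224, 224)  # fixed input shape for the compiler

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=17,
)
```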

Synopsys and Visionary.ai Demonstration of a Low-light Real-time AI Video Denoiser Tailored for NPX6 NPU IP
Gordon Cooper, Principal Product Manager at Synopsys, and David Jarmon, Senior VP of Worldwide Sales at Visionary.ai, demonstrate the companies’ latest edge AI and vision technologies and products in Synopsys’ booth at the 2025 Embedded Vision Summit. Specifically, Cooper and Jarmon demonstrate the future of low-light imaging with Visionary.ai’s cutting-edge real-time AI video denoiser. This…

SqueezeBits Demonstration of On-device LLM Inference, Running a 2.4B Parameter Model on the iPhone 14 Pro
Taesu Kim, CTO of SqueezeBits, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kim demonstrates a 2.4-billion-parameter large language model (LLM) running entirely on an iPhone 14 Pro without server connectivity. The device operates in airplane mode, highlighting on-device inference using a hybrid approach that…
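The excerpt does not specify the hybrid scheme, but simple arithmetic shows why aggressive weight quantization is a prerequisite for running 2.4 billion parameters on a phone. All figures below are illustrative assumptions, not SqueezeBits’ published numbers.

```python
# Back-of-envelope memory math for on-device LLM inference.
params = 2.4e9

fp16_gb = params * 2 / 1e9    # 16-bit weights: ~4.8 GB (too large for a phone app)
int4_gb = params * 0.5 / 1e9  # 4-bit weights:  ~1.2 GB (plausible on a 6 GB device)

# KV cache for a 2,048-token context, assuming ~32 layers, a 2,048-wide
# hidden state, and FP16 keys and values (the architecture is assumed).
kv_gb = 2 * 32 * 2048 * 2048 * 2 / 1e9  # ~0.54 GB

print(f"FP16 weights: {fp16_gb:.1f} GB")
print(f"INT4 weights: {int4_gb:.1f} GB")
print(f"KV cache (2k context): {kv_gb:.2f} GB")
```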

Synthetic Data for Computer Vision
This article was originally published at Synetic AI’s website. It is reprinted here with the permission of Synetic AI. Synthetic data is changing how computer vision models are being trained. This page will explain synthetic data and how it compares to traditional approaches. After exploring the main methods of creating synthetic data, we’ll help you…

Sony Semiconductor Demonstration of Its Open-source Edge AI Stack with the IMX500 Intelligent Sensor
JF Joly, Product Manager for the AITRIOS platform at Sony Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Joly demonstrates Sony’s fully open-source software stack that enables the creation of AI-powered cameras using the IMX500 intelligent vision sensor. In this demo, Joly illustrates how…

Edge AI Today: Real-world Use Cases for Developers
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Developers today face increasing pressure to deliver intelligent features with tighter timelines, constrained resources, and heightened expectations for privacy, performance, and accuracy. This article highlights real-world Edge AI applications already in production and mainstream use, providing actionable inspiration…

Autonomous Driving Software and AI in Automotive 2026-2046: Technologies, Markets, Players
For more information, visit https://www.idtechex.com/en/research-report/autonomous-driving-software-and-ai-in-automotive/1111. The global autonomous driving software market in 2046 will be greater than US$130 billion. This report provides an analysis of the market for ADAS and autonomous driving software. Topic coverage includes business models, hardware and software paradigms, and trends developing in the ADAS and autonomous driving market. IDTechEx…

Improving Synthetic Data Augmentation and Human Action Recognition with SynthDa
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Human action recognition is a core capability in AI systems designed for safety-critical applications, such as surveillance, eldercare, and industrial monitoring. However, many real-world datasets are limited by data imbalance, privacy constraints, or insufficient coverage of rare but…
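The excerpt cuts off before the method, but the problem it states – class imbalance and thin coverage of rare events – suggests the general shape of synthetic augmentation. The sketch below is hypothetical (the function name and balancing rule are invented for illustration), not the actual SynthDa pipeline:

```python
import random
from collections import Counter

# Top up under-represented action classes with synthetic clips until
# each class reaches a target count. Datasets are lists of
# (clip, label) pairs; this balancing rule is illustrative only.
def balance_with_synthetic(real, synthetic, target_per_class):
    counts = Counter(label for _, label in real)
    augmented = list(real)
    for label, count in counts.items():
        deficit = target_per_class - count
        pool = [pair for pair in synthetic if pair[1] == label]
        if deficit > 0 and pool:
            augmented.extend(random.choices(pool, k=deficit))
    return augmented
```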

Video Self-distillation for Single-image Encoders: Learning Temporal Priors from Unlabeled Video
This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI. The post proposes a simple next-frame prediction task that uses unlabeled video to enhance single-image encoders, injecting 3D geometric and temporal priors into image-based models without requiring optical flow or object tracking. The approach outperforms state-of-the-art self-supervised methods like DoRA…
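The excerpt only names the idea, so the snippet below is an assumed form of next-frame feature prediction: a single-image encoder embeds frame t, and a small predictor head must match the stop-gradient embedding of frame t+1, in the spirit of BYOL/SimSiam-style self-distillation. The architecture and loss are illustrative choices, not Nota AI’s exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextFrameDistiller(nn.Module):
    """Distill temporal structure into a single-image encoder."""

    def __init__(self, encoder: nn.Module, dim: int = 768):
        super().__init__()
        self.encoder = encoder  # any image encoder producing (B, dim) features
        self.predictor = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, frame_t, frame_t1):
        z_t = self.encoder(frame_t)       # embed the current frame
        with torch.no_grad():             # target branch: no gradients flow
            z_t1 = self.encoder(frame_t1)
        pred = self.predictor(z_t)        # predict next-frame features
        # Negative cosine similarity pulls predictions toward the target.
        return 1 - F.cosine_similarity(pred, z_t1, dim=-1).mean()
```

Training only requires pairs of adjacent frames sampled from unlabeled video; no optical flow or tracking labels are involved, which matches the claim in the excerpt.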