Software for Embedded Vision

Machine Vision Defect Detection: Edge AI Processing with Texas Instruments AM6xA Arm-based Processors
Texas Instruments’ portfolio of AM6xA Arm-based processors is designed to advance intelligence at the edge with high-resolution camera support, an integrated image signal processor and a deep learning accelerator. This video demonstrates using the AM62A to run a vision-based artificial intelligence model for defect detection in manufacturing applications. Watch the model test the produced units as

“Introduction to Radar and Its Use for Machine Perception,” a Presentation from Cadence
Amol Borkar, Product Marketing Director, and Vencatesh Subramanian, Design Engineering Architect, both of Cadence, co-present the “Introduction to Radar and Its Use for Machine Perception” tutorial at the May 2025 Embedded Vision Summit. Radar is a proven technology with a long history in various market segments and continues to play an increasingly important role in

Optimizing LLMs for Performance and Accuracy with Post-training Quantization
This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Quantization is a core tool for developers aiming to improve inference performance with minimal overhead. It delivers significant gains in latency, throughput, and memory efficiency by reducing model precision in a controlled way—without requiring retraining. Today, most models
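The core mechanism the article describes — mapping full-precision values to 8-bit integers with a scale and zero-point, without retraining — can be sketched in a few lines. This is a generic illustration of affine (asymmetric) post-training quantization, not NVIDIA’s TensorRT implementation; all names here are illustrative.

```python
import numpy as np

def quantize_uint8(x):
    # Affine quantization: x ≈ scale * (q - zero_point), q in [0, 255].
    x_min = min(float(x.min()), 0.0)  # keep 0.0 exactly representable
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / 255.0 or 1.0
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale, zp = quantize_uint8(weights)
restored = dequantize(q, scale, zp)
max_err = float(np.abs(weights - restored).max())
# Round-trip error stays within about one quantization step —
# this controlled precision loss is what buys the latency and memory wins.
assert max_err <= scale
```

The memory saving is the obvious part (8 bits versus 32 per weight); the latency and throughput gains come from hardware executing int8 arithmetic at far higher rates than float32.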

Alif Semiconductor Demonstration of Face Detection and Driver Monitoring On a Battery, at the Edge
Alexandra Kazerounian, Senior Product Marketing Manager at Alif Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kazerounian demonstrates how AI/ML workloads can run directly on her company’s ultra-low-power Ensemble and Balletto 32-bit microcontrollers. Watch as the AI/ML AppKit runs real-time face detection using an

Nota AI Demonstration of Nota Vision Agent, Next-generation Video Monitoring at the Edge
Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kim demonstrates Nota Vision Agent—a next-generation video monitoring solution powered by Vision Language Models (VLMs). The solution delivers real-time analytics and intelligent alerts across critical domains including industrial safety,

SiMa.ai Expands Strategic Collaboration with Synopsys to Accelerate Automotive AI Innovation
Transforming ADAS and In-Vehicle Infotainment Breakthroughs with Innovative ML IP, Chiplets, and System-on-Chip Reference Architectures SAN JOSE, Calif., July 30, 2025 /PRNewswire/ — SiMa.ai, a pioneer in ultra-efficient machine learning system-on-chip (MLSoC) platforms, today announced the next phase of its strategic collaboration with Synopsys, the leading provider of engineering solutions from silicon to systems, to

Nota AI Demonstration of NetsPresso Optimization Studio, Streamlined with Visual Insights
Tairen Piao, Research Engineer at Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Piao demonstrates NetsPresso Optimization Studio, the latest enhancement to Nota AI’s model optimization platform, NetsPresso. This intuitive interface simplifies the AI optimization process with advanced layer-wise analysis and automated quantization.

Renesas Introduces 64-bit RZ/G3E MPU for High-performance HMI Systems Requiring AI Acceleration and Edge Computing
MPU Integrates a Quad-Core CPU, an NPU, High-Speed Connectivity and Advanced Graphics to Power Next-Generation HMI Devices with Full HD Display TOKYO, Japan, July 29, 2025 ― Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, today announced the launch of its new 64-bit RZ/G3E microprocessor (MPU), a general-purpose device optimized for high-performance Human Machine

How to Run Coding Assistants for Free on RTX AI PCs and Workstations
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. AI-powered copilots deliver real-time assistance for everything from academic projects to production code — and are optimized for RTX AI PCs. Coding assistants or copilots — AI-powered assistants that can suggest, explain and debug code — are

Microchip Technology Demonstration of Real-time Object and Facial Recognition with Edge AI Platforms
Swapna Guramani, Applications Engineer for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Guramani demonstrates her company’s latest AI/ML capabilities in action: real-time object recognition using the SAMA7G54 32-bit MPU running Edge Impulse’s FOMO model, and facial recognition powered by TensorFlow Lite’s Mobile

Is End-to-end the Endgame for Level 4 Autonomy?
Examples of modular, end-to-end, and hybrid software architectures deployed in autonomous vehicles. Autonomous vehicle technology has evolved significantly over the past year. The two market leaders, Waymo and Apollo Go, both have fleets of over 1,000 vehicles and operate in multiple cities, and a mix of large companies such as Nvidia and Aptiv, OEMs such

Microchip Technology Demonstration of AI-powered Face ID on the PolarFire SoC FPGA Using the VectorBlox SDK
Avery Williams, Channel Marketing Manager for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Williams demonstrates ultra-efficient AI-powered facial recognition on Microchip’s PolarFire SoC FPGA using the VectorBlox Accelerator SDK. Pre-trained neural networks are quantized to INT8 and compiled to run directly on
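Running an INT8-compiled network means the multiply-accumulates themselves happen in integer arithmetic. The sketch below shows the usual pattern on such accelerators — int8 operands, int32 accumulation, then requantization back to int8. It is a generic, hedged illustration of the technique, not VectorBlox-specific code; the function name and scale values are invented for the example.

```python
import numpy as np

def int8_dense(x_q, w_q, s_x, s_w, s_out):
    # Accumulate int8 * int8 products exactly in int32, then rescale.
    # Real accelerators fold (s_x * s_w / s_out) into a fixed-point
    # multiply-and-shift; we keep it as a float multiplier for clarity.
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
    multiplier = (s_x * s_w) / s_out
    return np.clip(np.round(acc * multiplier), -128, 127).astype(np.int8)

rng = np.random.default_rng(1)
x_q = rng.integers(-128, 128, size=(1, 8), dtype=np.int8)   # quantized activations
w_q = rng.integers(-128, 128, size=(8, 4), dtype=np.int8)   # quantized weights
y_q = int8_dense(x_q, w_q, s_x=0.02, s_w=0.01, s_out=0.1)
assert y_q.dtype == np.int8 and y_q.shape == (1, 4)
```

Because every tensor stays in 8-bit storage and the accumulator never overflows int32 for layer sizes like these, the whole layer maps directly onto fixed-point FPGA DSP blocks.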

How to Think About Large Language Models on the Edge
This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. ChatGPT was released to the public on November 30th, 2022, and the world – at least, the connected world – has not been the same since. Surprisingly, almost three years later, despite massive adoption, we do not

3LC Demonstration of Catching Synthetic Slip-ups with 3LC
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates the investigation of a curious embryo classification study from Norway, where synthetic data was supposed to help train a model – but something didn’t quite hatch right. Using 3LC to

Software-defined Vehicles Drive Next-generation Auto Architectures
SDV Level Chart: Major OEMs compared. The automotive industry is undergoing a foundational shift toward Software-Defined Vehicles (SDVs), where vehicle functionality, user experience, and monetization opportunities are governed increasingly by software rather than hardware. This evolution, captured comprehensively in the latest IDTechEx report, “Software-Defined Vehicles, Connected Cars, and AI in Cars 2026-2036: Markets, Trends, and

One Year of Qualcomm AI Hub: Enabling Developers and Driving the Future of AI
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The past year has been an incredible journey for Qualcomm AI Hub. We’ve seen remarkable growth, innovation and momentum — and we’re only getting started. Qualcomm AI Hub has become a key resource for developers looking to

3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector
Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset – like unlabeled forks and phantom objects – using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on a
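3LC’s introspection tooling is proprietary, but the underlying idea – surfacing confident model detections that match no ground-truth box, which are candidates for unlabeled objects – can be approximated with plain IoU matching. The sketch below is a simplified stand-in under that assumption; box formats, thresholds, and function names are illustrative, not 3LC’s API.

```python
def iou(a, b):
    # Intersection-over-union for boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def possible_missing_labels(predictions, ground_truth,
                            conf_thresh=0.8, iou_thresh=0.5):
    # Confident detections overlapping no ground-truth box are flagged
    # for human review as potentially unlabeled objects.
    flagged = []
    for box, score, cls in predictions:
        if score < conf_thresh:
            continue
        if all(iou(box, gt_box) < iou_thresh for gt_box, _ in ground_truth):
            flagged.append((box, score, cls))
    return flagged

gt = [((10, 10, 50, 50), "fork")]
preds = [((12, 11, 49, 52), 0.95, "fork"),      # matches an existing label
         ((100, 100, 140, 150), 0.91, "fork")]  # confident but unlabeled
assert possible_missing_labels(preds, gt) == [((100, 100, 140, 150), 0.91, "fork")]
```

Running this across training epochs rather than once is what makes the approach “training-time”: detections that are persistently confident yet unmatched are the strongest missing-label candidates.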

VeriSilicon Demonstration of the Open Se Cura Project
Chris Wang, VP of Multimedia Technologies and a member of the CTO office at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Wang demonstrates examples from the Open Se Cura Project, a joint effort between VeriSilicon and Google. The project showcases a scalable, power-efficient, and