Edge AI and Vision Insights: July 9, 2025

NEW APPROACHES TO VISION AND MULTIMEDIA AT THE EDGE

How Qualcomm Is Powering AI-Driven Multimedia at the Edge
In this 2025 Embedded Vision Summit talk, Ning Bi, Vice President of Engineering at Qualcomm Technologies, explores the evolution of multimedia processing at the edge, from early algorithm-centric use cases such as audio and video processing to sophisticated modern capabilities such as digital human avatars transmitted over communication channels, powered by data-driven AI. He explains how Qualcomm is applying AI and generative AI technologies at the edge to enrich computer vision with new, high-quality visual solutions. He also shows how Qualcomm enables a broad range of OEMs, ODMs and third-party developers to harness these technologies via initiatives such as the Qualcomm AI Hub, which provides a library of optimized machine learning models that lets developers quickly incorporate AI into their applications.

A Re-imagination of Embedded Vision System Design
Embedded vision applications, with their demand for ever more processing power, have been driving up the size and complexity of edge SoCs. Heterogeneous architectures that combine CPUs, GPUs and specialized NPUs have become the standard approach to achieving high performance density and compelling headline specifications for edge vision. Yet this approach is not without its problems, especially given the rapid evolution of AI models and the thermal challenges anticipated at more advanced process nodes. While software is set to be the true enabler of success, with community-wide initiatives such as the UXL Foundation empowering application developers to port code seamlessly to edge devices, flexible, parallel and, most importantly, programmable hardware remains central to delivering high-performance, high-efficiency vision applications at the edge. In this 2025 Embedded Vision Summit talk, Dennis Laudick, Vice President of Product Management and Marketing at Imagination Technologies, presents a new edge AI solution from the experts in GPU design and parallel computing.

RAPID DEVELOPMENT OF VISION SOLUTIONS

Rapid Development of AI-powered Embedded Vision Solutions—Without a Team of Experts
In this 2025 Embedded Vision Summit presentation, Marcel Wouters, Senior Software Engineer at Network Optix, shows how developers new to AI can quickly and easily create embedded vision solutions that extract valuable information from camera streams. He begins by explaining how to set up the Network Optix server, client and AI manager. He then shows how to incorporate an off-the-shelf AI model and construct an AI pipeline, including post-processing to create a complete video analytics application. Wouters also explains how to create and integrate a custom post-processor block and how to integrate a web server to deliver visual results. You’ll learn how to quickly develop AI-powered embedded vision solutions without hiring your own team of experts.
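The pipeline shape Wouters describes (camera stream in, off-the-shelf model, one or more post-processing blocks, results out) can be sketched generically. The code below is an illustrative sketch only, not the Network Optix API; the `dummy_model`, `Detection` type and `threshold_postprocessor` names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    label: str
    score: float


def dummy_model(frame) -> List[Detection]:
    # Stand-in for an off-the-shelf AI model (e.g. an object detector).
    return [Detection("person", 0.91), Detection("car", 0.42)]


def threshold_postprocessor(dets: List[Detection]) -> List[Detection]:
    # A custom post-processor block: keep only confident detections.
    return [d for d in dets if d.score >= 0.5]


def run_pipeline(frames, model, postprocessors: List[Callable]):
    # Run each frame through the model, then through each post-processor
    # in order; collect per-frame results for downstream delivery.
    results = []
    for frame in frames:
        dets = model(frame)
        for pp in postprocessors:
            dets = pp(dets)
        results.append(dets)
    return results


frames = [object(), object()]  # placeholders for decoded video frames
results = run_pipeline(frames, dummy_model, [threshold_postprocessor])
print(results)
```

In a real deployment the results list would be fed to a delivery layer such as a web server, as the talk describes; here it is simply printed.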

Beyond the Demo: Turning Computer Vision Prototypes into Scalable, Cost-effective Solutions
Many computer vision projects reach proof of concept but stall before production due to high costs, deployment challenges and infrastructure complexity. This 2025 Embedded Vision Summit presentation from Kit Merker, CEO of Plainsight Technologies, explores the path from prototype to production, focusing on how to reduce the cost of vision workloads while ensuring scalability, security and efficiency. Plainsight’s new Pixel Prompts enables AI at the edge—running vision workloads on CPUs instead of GPUs, cutting costs and reducing GPU dependency. Merker covers architecture choices, key cost drivers and strategies for building and deploying efficient vision applications. A live demo showcases how to go from a simple model to a production-ready deployment while optimizing compute resources and infrastructure. Whether you’re an engineer, architect or decision-maker, this talk provides actionable insights into moving beyond proofs of concept and scaling vision AI effectively.

FEATURED NEWS

Renesas’ 1-GHz RA8P1 Devices with AI Acceleration Raise the MCU Performance Bar 

Chips&Media’s New APV CODEC Delivers Enhanced Visual Quality 

VeriSilicon’s Silicon-proven ZSP5000 Vision Core Series Expands Its DSP Portfolio for Edge Intelligence 

FRAMOS’ ImagingNext Event for Embedded Vision Takes Place in Munich, Germany in September 

Cadence’s Tensilica NeuroEdge 130 AI Co-processor Accelerates Physical AI Applications 

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE



Visidon Real-time Video Noise Reduction (Best Edge AI Software or Algorithm)

Visidon’s Real-time Video Noise Reduction is the 2025 Edge AI and Vision Product of the Year Award Winner in the Edge AI Software or Algorithm category. Visidon’s AI-powered Video Noise Reduction technology significantly enhances low-light video quality for surveillance and security applications by addressing key shortcomings of traditional ISP-based methods. Conventional noise reduction often compromises essential image details, resulting in blurred footage that can obscure critical information, particularly in extremely low-light conditions. In contrast, Visidon’s advanced convolutional neural network (CNN) technology effectively overcomes these limitations, delivering clarity and detail even in environments as dark as 0.01 lux. 

Visidon employs a purpose-built, AI-optimized algorithm specifically designed for surveillance applications. This approach enhances object detection accuracy, improving recognition rates by up to 50% compared to footage processed with traditional ISP noise reduction. With real-time performance as a priority, the technology utilizes high-efficiency neural processing units (NPUs) in smart cameras, allowing for seamless and power-efficient deployment on embedded devices. This capability ensures high-quality noise reduction without compromising energy consumption or device compatibility. Unlike other solutions that focus on producing visually appealing outputs for consumer devices, Visidon’s technology prioritizes optimization for machine vision. By preserving crucial details essential for analytics and object recognition, it delivers unmatched precision for demanding applications such as smart surveillance. With the growing integration of high-performance NPUs in smart surveillance cameras, Visidon’s CNN-based approach can be implemented effortlessly, resulting in a transformative improvement in video quality during low-light scenarios.
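To make the idea concrete, the sketch below shows the general shape of convolutional denoising: stacked convolution layers with a nonlinearity suppress noise while (in a trained network) preserving detail. This is a generic illustration, not Visidon’s algorithm; the weights here are fixed averaging kernels purely for demonstration, whereas a production CNN learns its weights from noisy/clean image pairs and runs on an NPU.

```python
import numpy as np


def conv2d(img, kernel):
    # Naive 2-D convolution with reflect padding so output size matches input.
    kh, kw = kernel.shape
    pad = kh // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kernel)
    return out


def denoise(img):
    # Two conv "layers" with a ReLU in between. The fixed 3x3 averaging
    # kernels are a stand-in for learned weights in a real denoising CNN.
    k1 = np.full((3, 3), 1.0 / 9.0)
    hidden = np.maximum(conv2d(img, k1), 0.0)  # ReLU nonlinearity
    k2 = np.full((3, 3), 1.0 / 9.0)
    return conv2d(hidden, k2)


rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)                      # flat gray test image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)   # additive Gaussian noise
out = denoise(noisy)
# The residual noise level should drop substantially after filtering.
print(np.std(noisy - clean), np.std(out - clean))
```

Plain averaging like this blurs edges, which is exactly the detail loss Visidon’s trained CNN is designed to avoid; the sketch only illustrates the convolutional structure, not the learned detail-preserving behavior.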

Please see here for more information on Visidon’s Real-time Video Noise Reduction. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411