Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

NVIDIA and Synopsys Announce Strategic Partnership to Revolutionize Engineering and Design
Key highlights: A multiyear collaboration spans NVIDIA CUDA-accelerated computing, agentic and physical AI, and Omniverse digital twins to achieve simulation speed and scale previously unattainable through traditional CPU computing — opening new market opportunities across

Now Available: AMD Spartan UltraScale+ FPGA SCU35 Evaluation Kit – An Affordable Platform for Every Developer
November 25, 2025 — The AMD Spartan™ UltraScale+™ FPGA SCU35 Evaluation Kit is now available for order. Built by AMD, this platform offers customers an accelerated path to production with Spartan UltraScale+ FPGAs. The kit features

The 8-Series Reimagined: Snapdragon 8 Gen 5 Delivers Premium Performance and Experiences
San Diego, California, November 25, 2025 — Qualcomm Technologies, Inc. today announced the Snapdragon® 8 Gen 5 Mobile Platform, a premium offering that combines outstanding performance with cutting-edge technologies, raising the bar for flagship mobile

Microcontrollers enter a new growth cycle as the market targets US$34 billion in 2030
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Yole Group releases its annual Status of the Microcontroller Industry report and expands

Why Edge AI Struggles Towards Production: The Deployment Problem
There is no shortage of articles about how to develop and train Edge AI models. The community has also written extensively about why it makes sense to run those models at the edge: to reduce

Nota AI Signs Technology Collaboration Agreement with Samsung Electronics for Exynos AI Optimization “Driving the Popularization of On-Device Generative AI”
Nota AI’s optimization technology integrated into Samsung Electronics’ Exynos AI Studio, enhancing efficiency in on-device AI model development. Seoul, South Korea, Nov. 26, 2025 — Nota AI, a company specializing in AI model compression and

WAVE-N v2: Chips&Media’s Custom NPU Retains 16-bit FP for Superior Efficiency at High TOPS
Nov. 26, 2025, Seoul, South Korea — Chips&Media announced that its next-generation customized NPU, WAVE-N v2, is ready for release, delivering higher computational power along with improved area and power efficiency. WAVE-N v2 is also

Google Announces LiteRT Qualcomm AI Engine Direct Accelerator
Google has announced a new LiteRT Qualcomm AI Engine Direct Accelerator, giving Android and embedded developers a much more direct path to Qualcomm NPUs for on-device AI and vision workloads. Built on top of Qualcomm’s

Let’s Visit the Zoo
This blog post was originally published at Quadric’s website. It is reprinted here with the permission of Quadric. The term “model zoo” first gained prominence in the world of AI / machine learning beginning in the
Technologies
Samsung to launch in-house mobile GPU by 2027
December 25, 2025, Suwon, South Korea — Samsung Electronics is accelerating plans to bring mobile graphics processing fully in-house, with multiple reports pointing to a proprietary GPU architecture arriving in an Exynos application processor as early as 2027. A report citing Cailian Press says Samsung’s System LSI Division is pushing toward a “100% proprietary technology”

How Embedded Vision Is Helping Modernize and Future-Proof Retail Operations
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Physical stores are becoming intelligent environments. Embedded vision turns every critical touchpoint into a source of real-time insight, from shelves and kiosks to checkout zones and digital signage. With cameras analyzing activity as it happens, retailers

Groq and Nvidia Enter Non-Exclusive Inference Technology Licensing Agreement to Accelerate AI Inference at Global Scale
Mountain View, CA, December 24 — Today, Groq announced that it has entered into a non-exclusive licensing agreement with Nvidia for Groq’s inference technology. The agreement reflects a shared focus on expanding access to high-performance, low-cost inference. As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of
Applications

The Coming Robotics Revolution: How AI and Macnica’s Capture, Process, Communicate Philosophy Will Define the Next Industrial Era
This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica. Just as networking and fiber-optic infrastructure quietly laid the groundwork for the internet economy, fueling the rise of Amazon, Facebook, and the digital platforms that redefined commerce and communication, today’s breakthroughs in artificial intelligence are setting the stage
Functions

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the

SAM3: A New Era for Open-Vocabulary Segmentation and Edge AI
Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual

TLens vs VCM Autofocus Technology
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications
