Edge AI and Vision Insights: November 12, 2025

 

LETTER FROM THE EDITOR

Dear Colleague,

Next Tuesday, November 18 at 9 am PT, the Yole Group will present the free webinar “How AI-enabled Microcontrollers Are Expanding Edge AI Opportunities” in partnership with the Edge AI and Vision Alliance. Running AI inference at the edge, versus in the cloud, has many compelling benefits: greater privacy, lower latency and real-time responsiveness chief among them. But implementing edge AI in highly cost-, power- or size-constrained devices has historically been impractical due to the compute, memory and storage resources it requires.

Nowadays, however, the AI accelerators and related resources included in modern microcontrollers, in combination with technology developments and toolset enhancements that shrink the size of deep learning models, are making it possible to run computer vision, speech interfaces, and other AI capabilities at the edge.
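One family of the model-shrinking techniques alluded to above is post-training quantization, which stores weights as 8-bit integers plus a small amount of scale metadata. The sketch below is illustrative only, not any particular toolchain's implementation; production toolkits apply such schemes per-tensor or per-channel with far more care.

```python
# Minimal sketch of 8-bit affine quantization, one way deep learning
# models are shrunk to fit microcontroller-class memory budgets.
# Hypothetical helper names; real toolchains do this per-tensor/per-channel.

def quantize_int8(weights):
    """Map float weights to int8 values plus (scale, zero_point) metadata."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant tensors
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.8, -0.1, 0.0, 0.35, 0.9]
q, scale, zp = quantize_int8(weights)
recovered = dequantize_int8(q, scale, zp)
# Round-trip error is bounded by one quantization step
assert all(abs(w - r) <= scale for w, r in zip(weights, recovered))
```

The payoff is a 4x reduction versus 32-bit floats, at the cost of the bounded rounding error checked above.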

In this webinar, Tom Hackenberg, Principal Analyst for Computing at the Yole Group, will explain that while scaling AI upward into massive data centers may dominate today’s headlines, scaling downward to edge applications may be even more transformative. Hackenberg will share market size and forecast data, along with supplier product and developer application case study examples, to support his contention that edge deployment is key to unlocking AI’s full potential across industries. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.

Erik Peters
Director of Ecosystem and Community Engagement, Edge AI and Vision Alliance

DESIGN CONSIDERATIONS FOR ROBOTICS APPLICATIONS

Lessons Learned Building and Deploying a Weed-killing Robot
Agriculture today faces chronic labor shortages and growing challenges around herbicide resistance, as well as consumer backlash to chemical inputs. Smarter, more sustainable approaches are needed to secure the ongoing production of fresh produce. In this session, Xiong Chang, CEO and Co-founder of Tensorfield Agriculture, introduces his company’s unique robot, which uses high-speed computer vision to enable extremely precise, pesticide-free robotic weeding. Chang highlights some of the key challenges his team faced in developing the robot. He explains Tensorfield’s business model, shows how its technology has the potential to save millions of dollars in labor and material costs, and shares how Tensorfield plans to scale its business.

Sensors and Compute Needs and Challenges for Humanoid Robots
Vlad Branzoi, Perception Sensors Team Lead at Agility Robotics, presents the “Sensors and Compute Needs and Challenges for Humanoid Robots” tutorial at the September 2025 Edge AI and Vision Innovation Forum.

AGENTIC AI AT THE EDGE

Introduction to Designing with AI Agents
Artificial intelligence agents are components in an AI system that can perform tasks autonomously, making decisions and taking actions on their own. In this talk, Frantz Lohier, Senior Worldwide Specialist for Advanced Computing, AI and Robotics at Amazon Web Services, explores the concept of AI agents, their benefits and how they can revolutionize AI development. He discusses the differences between agentic and non-agentic workflows, and how agents can improve the performance of existing models through reflection, tool use, planning and multiagent collaboration. Lohier examines the types of AI agents, such as vision agents, LLM agents, math solver agents and code generation agents, and how they can be used in various AI-based systems. He also discusses how agents are created, trained, tested and integrated into AI systems. You’ll gain a working understanding of AI agents, their benefits and how they can be used in AI-based application development.
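The reflection pattern Lohier mentions can be pictured as a draft-critique-revise loop. The sketch below is a hypothetical illustration with stubbed model calls; in a real system each function would invoke an LLM or another model.

```python
# Illustrative sketch of agentic "reflection": the agent drafts an answer,
# critiques it, and revises until the critique passes or rounds run out.
# All model calls are stubs; names are hypothetical.

def draft(task):
    return f"draft answer for: {task}"

def critique(answer):
    # Stub critic: flags the answer until it has been revised once.
    return "needs revision" if "revised" not in answer else "ok"

def revise(answer, feedback):
    return f"revised ({feedback}): {answer}"

def reflect_loop(task, max_rounds=3):
    answer = draft(task)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback == "ok":
            break
        answer = revise(answer, feedback)
    return answer

result = reflect_loop("summarize edge AI benefits")
assert result.startswith("revised")
```

Tool use, planning and multiagent collaboration follow the same shape: the loop simply routes intermediate outputs to tools or to other agents instead of a single critic.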

Building Agentic Applications for the Edge
Along with AI agents, the new generation of large language models, vision-language models and other large multimodal models are enabling powerful new capabilities that promise to transform industries. In this talk, Amit Mate, Founder and CEO of GMAC Intelligence, explores the requirements and architectures of agentic applications, including AI and non-AI requirements, and examines two main approaches to agent-based application architecture: integrating separate models and multimodal approaches. Through detailed examples, he demonstrates the pros and cons of each approach and discusses the challenges and opportunities of building practical agent-based applications on edge devices, including challenges associated with implementing large models at the edge.
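The two architectural approaches contrasted in the talk can be sketched as follows. This is an illustrative stub, not Mate's implementation: all model functions are placeholders standing in for real edge-deployed networks.

```python
# Sketch of two agent architectures: (a) chaining separate vision and
# language models, vs. (b) a single multimodal model. Stub functions only.

def vision_model(image):
    return "a person holding a coffee cup"           # stand-in captioner

def language_model(prompt):
    return f"answer based on: {prompt}"              # stand-in LLM

def multimodal_model(image, question):
    return f"joint answer to '{question}' about {image}"  # stand-in VLM

# (a) Separate models: vision output is serialized to text before the LLM
# sees it, which loses detail but lets each model be swapped, quantized
# or scheduled independently on a constrained edge device.
def pipeline_answer(image, question):
    caption = vision_model(image)
    return language_model(f"{question} | scene: {caption}")

# (b) Multimodal: one model consumes both inputs jointly, avoiding the
# lossy text bottleneck at the cost of a larger on-device footprint.
def multimodal_answer(image, question):
    return multimodal_model(image, question)
```

The trade-off visible even in this toy form, modularity versus a lossy text interface between models, is one of the pros and cons the talk works through in detail.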

UPCOMING INDUSTRY EVENTS

How AI-enabled Microcontrollers Are Expanding Edge AI Opportunities – Yole Group Webinar: November 18, 2025, 9:00 am PT

Embedded Vision Summit: May 11-13, 2026, Santa Clara, California

More Events

FEATURED NEWS

Axelera AI Announces the Europa AIPU, Setting New Industry Benchmark for AI Accelerator Performance, Power Efficiency and Affordability

STMicroelectronics Empowers Data-Hungry Industrial Transformation with a Unique Dual-Range Motion Sensor

NXP Semiconductors Completes the Acquisitions of Aviva Links and Kinara to Advance Automotive Connectivity and AI at the Intelligent Edge

Qualcomm Launches the AI200 and AI250: Redefining Rack-scale Data Center Inference Performance for the AI Era

BrainChip Unveils the Breakthrough AKD1500 Edge AI Co-Processor at Embedded World North America

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE



Qualcomm Snapdragon 8 Elite Platform (Best Edge AI Processor)
Qualcomm’s Snapdragon 8 Elite Platform is the 2025 Edge AI and Vision Product of the Year Award Winner in the Edge AI Processors category. This platform significantly enhances on-device experiences through remarkable processing power, groundbreaking AI advancements, and various mobile innovations. The Snapdragon 8 Elite includes a new custom-built Qualcomm Oryon CPU, which delivers impressive speeds and efficiency to enhance every interaction. It provides a 45% performance boost, 44% greater power efficiency, and includes the mobile industry’s largest shared data cache. Additionally, Qualcomm’s Adreno GPU, with its newly designed architecture, achieves a 40% increase in performance and a 40% improvement in efficiency. Overall, users can expect a 27% reduction in power consumption.

The platform enhances user experiences with on-device AI, showcased through the Qualcomm AI Engine, which incorporates multimodal generative AI and personalized support. This AI Engine utilizes a variety of models, including large multimodal models (LMMs), large language models (LLMs), and large vision models (LVMs), while supporting the world’s largest generative AI model ecosystem. It also features Qualcomm’s Hexagon NPU, which is 45% faster and delivers a 45% increase in performance per watt, driving AI capabilities to new levels. Moreover, Qualcomm’s new AI Image Signal Processor (ISP) works in tandem with the Hexagon NPU to enhance real-time image capture. Connectivity options include advanced AI-driven 5G and Wi-Fi 7 capabilities, facilitating seamless entertainment and productivity on the go.

Please see here for more information on Qualcomm’s Snapdragon 8 Elite Platform. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411