A NEWSLETTER FROM THE EDGE AI AND VISION ALLIANCE
LETTER FROM THE EDITOR
Dear Colleague,

The Edge AI and Vision Alliance is proud to showcase companies that will be presenting the latest computer vision and AI technologies at CES. Check out our Directory of Alliance Members at CES to see what they are showing and where to find them, and for easy ways to set up appointments for suite and demo visits.
Edge AI and Vision Technology Companies to See at CES 2026:
I’m also pleased to announce that registration for the 2026 Embedded Vision Summit is available at its lowest price, 35% off, from now through December 31! The Summit will take place May 11-13 in Santa Clara, California, and we very much hope to see all of you there. If you’d like to present at the 2026 Summit, our Call for Presentation Proposals also remains open; we’ve extended the deadline to Friday, December 19. Check out the 2026 topics list on the Call for Proposals page, and submit your proposal today.

Erik Peters
BUILDING AND DEPLOYING REAL-WORLD ROBOTS
VISION DEPLOYED CONSUMER PRODUCTS
Enabling Ego Vision Applications on Smart Eyewear Devices

Ego vision technology is revolutionizing the capabilities of smart eyewear, enabling applications that understand user actions, estimate human pose and provide spatial awareness through simultaneous localization and mapping (SLAM). This presentation dives into the latest advancements in deploying these computer vision techniques on embedded systems. Francesca Palermo, Research Principal Investigator at EssilorLuxottica, explains how her company overcomes the challenges of constrained processing power, memory and energy consumption while still achieving real-time, on-device performance for smart eyewear. In particular, she shares insights on optimizing neural networks for low-power environments, innovating in pose estimation and effectively integrating SLAM in dynamic settings, all supported by real-world examples and demonstrations. She also explores how these capabilities open new possibilities for augmented reality, assistive technologies and enhanced personal health.
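As a concrete illustration of the kind of low-power optimization such a talk covers, here is a minimal sketch of post-training dynamic quantization in PyTorch applied to a stand-in pose-estimation head. The model, layer sizes and keypoint count are illustrative assumptions, not EssilorLuxottica's actual eyewear network.

import torch
import torch.nn as nn

# Stand-in pose-estimation head: a small MLP over backbone features (hypothetical sizes).
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 34),  # e.g., 17 keypoints x (x, y)
).eval()

# Post-training dynamic quantization: Linear weights become int8, shrinking the
# model roughly 4x and speeding up CPU inference with no retraining.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

features = torch.rand(1, 512)     # stand-in feature vector from a backbone
print(quantized(features).shape)  # torch.Size([1, 34])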
AI-powered Scouting: Democratizing Talent Discovery in Sports

In this presentation, Jonathan Lee, Chief Product Officer at ai.io, shares his company’s experience using AI and computer vision to revolutionize talent identification in sports. By developing aiScout, a platform that enables athletes to upload drill videos for AI evaluation, ai.io aims to democratize access to scouting. Leveraging 3DAT, their AI-driven biomechanics analysis tool, they extract precise movement data without sensors or wearables. Lee discusses how AI-driven scouting levels the playing field, allowing athletes to get discovered based on ability, not access, as proven with elite athletes from the Tokyo Olympics to the NFL Scouting Combine. He also covers the business model, scalability and future of AI-driven scouting, highlighting its potential to redefine talent discovery and development.
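For readers curious about what markerless movement capture looks like in code, here is a minimal sketch of extracting human keypoints from a single video frame with an off-the-shelf torchvision model. The model choice and confidence threshold are assumptions chosen for illustration, not ai.io's 3DAT pipeline.

import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn

# Off-the-shelf keypoint detector (COCO-style 17 keypoints per person).
model = keypointrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)  # stand-in for a decoded, normalized video frame
with torch.no_grad():
    result = model([frame])[0]

# Keep confident person detections; each keypoint is (x, y, visibility).
keep = result["scores"] > 0.8
keypoints = result["keypoints"][keep]
print(keypoints.shape)  # (num_people, 17, 3)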
KNOWLEDGE DISTILLATION AND OBJECT DETECTORS
Introduction to Knowledge Distillation: Smaller, Smarter AI Models for the Edge

As edge computing demands smaller, more efficient models, knowledge distillation emerges as a key approach to model compression. In this presentation, David Selinger, CEO of Deep Sentinel, delves into the details of this process, exploring what knowledge distillation entails and the requirements for its implementation, including dataset size and tools. Selinger examines when to use knowledge distillation, its pros and cons, and showcases examples of successfully distilled models. Based on performance data highlighting the benefits of distillation, he concludes that knowledge distillation is a powerful tool for creating smaller, smarter models that thrive at the edge.
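To make the mechanics concrete, here is a minimal sketch of the classic soft-target distillation loss in PyTorch. The temperature, loss weighting and training-loop details are illustrative assumptions rather than Deep Sentinel's recipe.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: teacher probabilities softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # KL term is scaled by T^2 so gradient magnitudes stay comparable across temperatures.
    kd_term = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    # Hard-label term keeps the student anchored to ground truth.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Inside a training loop the teacher stays frozen:
#     with torch.no_grad():
#         teacher_logits = teacher(images)
#     loss = distillation_loss(student(images), teacher_logits, labels)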
Object Detection Models: Balancing Speed, Accuracy and Efficiency

Deep learning has transformed many aspects of computer vision, including object detection, enabling accurate and efficient identification of objects in images and videos. However, choosing the right deep neural network-based object detector for your project, particularly when deploying on lightweight hardware, requires consideration of trade-offs between accuracy, speed and computational efficiency. In this talk, Sage Elliott, AI Engineer at Union.ai, introduces the fundamental types of DNN-based object detectors. He covers models such as Faster R-CNN for high-accuracy applications and single-stage models such as YOLO and SSD for faster processing. He discusses lightweight architectures, including MobileNet, EfficientDet and vision transformers, which optimize object detection for resource-constrained environments. You will learn the trade-offs between object detection models for your computer vision applications, enabling informed choices for optimal performance and deployment.
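As a rough feel for the accuracy-versus-speed trade-off described here, the sketch below loads a two-stage and a lightweight single-stage detector from torchvision and times a single forward pass. The specific models, input size and timing approach are assumptions chosen for illustration, not the speaker's benchmark.

import time
import torch
from torchvision.models import detection

# Two-stage detector (typically higher accuracy, slower) vs. a lightweight
# single-stage detector (faster, better suited to resource-constrained hardware).
models = {
    "Faster R-CNN (ResNet-50 FPN)": detection.fasterrcnn_resnet50_fpn(weights="DEFAULT"),
    "SSDLite (MobileNetV3)": detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT"),
}

image = [torch.rand(3, 320, 320)]  # dummy input; real use would load and normalize a photo

for name, model in models.items():
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        outputs = model(image)
        elapsed = time.perf_counter() - start
    print(f"{name}: {len(outputs[0]['boxes'])} raw detections in {elapsed:.3f}s")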
UPCOMING INDUSTRY EVENTS
AI Everywhere 2025 – EE Times Virtual Event: December 10-11, 2025
Embedded Vision Summit: May 11-13, 2026, Santa Clara, California
FEATURED NEWS
Qualcomm has released the premium tier Snapdragon 8 Gen 5, driving performance and new user experiences
AMD has released its Spartan UltraScale+ FPGA SCU35 Evaluation Kit, and announced Infineon HyperRAM support on the platform
Intel has broadened support for LLMs and VLMs with the release of OpenVINO 2025.4
NVIDIA and Synopsys have announced a strategic partnership to revolutionize engineering and design through a raft of initiatives
Chips&Media’s WAVE-N v2 Custom NPU delivers higher TOPS and greater power efficiency






