LETTER FROM THE EDITOR
Dear Colleague,
The nomination submission deadline for the inaugural AI Innovation Awards, brought to you by the Edge AI and Vision Alliance, is next Sunday, December 31. The awards celebrate groundbreaking end products powered by edge AI and vision technologies. If you know of an end product introduced in the last year that fits the bill, nominate it for a chance to gain industry recognition. It’s easy and free! Find out more here!
December 31 is also the deadline for Edge AI and Vision Alliance Member companies to submit entries for the 2024 Edge AI and Vision Alliance Product of the Year awards. Entry is easy and winners get year-round promotion from the Alliance. If you released an awesome building-block technology product enabling computer vision or edge AI in 2023, submit an application today!
At CES (January 9-12 in Las Vegas), Edge AI and Vision Alliance Member companies will be showing off the latest building-block technologies that enable new capabilities for machines that perceive and understand. CES is huge, so we’ve created a handy checklist of these companies and where to find them, including how to request suite/demo access. See here for details!
The Alliance will be taking a holiday break next week. Until next time, on behalf of the Alliance, I wish you joy, health, and happiness for the holiday season, and for the New Year. Happy Holidays!
Brian Dipert
Editor-in-Chief, Edge AI and Vision Alliance
INSIGHTS FROM INDUSTRY EXPERTS
Embedded Vision in Robotics, Biotech and Education
In his 2018 keynote presentation at the Embedded Vision Summit, legendary inventor and technology visionary Dean Kamen memorably predicted that embedded vision capabilities would eventually become as common as limit switches—i.e., used universally to enable systems to understand their environments. In this 2023 conversation, Jeff Bier, Founder of the Edge AI and Vision Alliance, catches up with Kamen on how visual AI is currently being used in his projects. These projects include mobile robots (developed by his company, DEKA Research and Development), new processes for the large-scale manufacturing of engineered tissues and tissue-related technologies (led by Kamen’s Advanced Regenerative Manufacturing Institute) and the educational programs organized by FIRST (For Inspiration and Recognition of Science and Technology). Kamen and Bier explore where visual AI is paying off as Kamen envisioned, and what challenges are preventing his projects from realizing its full potential.
Lessons Learned in Developing a High-volume, Vision-enabled Coffee Maker
Why did Keurig Dr Pepper—a $12B beverage company—spend years perfecting visual AI technology to recognize the type of beverages consumers are preparing in their Keurig brewers? Does computer vision really yield better coffee? What does it take to make sophisticated AI technology invisible to consumers? What were the key challenges in delivering one of the first successful mass-market consumer products incorporating visual AI? What were the key skills required for the in-house product development team, and what could be outsourced? Jeff Bier interviews Jason Lavene, Director of Advanced Development Engineering at Keurig Dr Pepper, for insights into key lessons learned from this pioneering product development effort.
OBJECT DETECTION, DIFFERENTIATION AND IDENTIFICATION
Introduction to Semantic Segmentation
Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different type of visual understanding: semantic segmentation. Semantic segmentation classifies each pixel of an image, associating each pixel with an object class (e.g., pavement, pedestrian). This is required, for example, for separating foreground objects from background, or for identifying drivable surfaces for autonomous vehicles. Related types of functionality are instance segmentation, which associates each pixel with a specific object instance (e.g., pedestrian #4), and panoptic segmentation, which combines the functionality of semantic and instance segmentation. In this talk, Sébastien Taylor, Vice President of Research and Development at Au-Zone Technologies, introduces deep learning-based semantic, instance and panoptic segmentation. He explores the network topologies commonly used and how they are trained. He also discusses metrics for evaluating segmentation algorithm output, and considerations when selecting segmentation algorithms. Finally, he identifies resources useful for developers getting started with segmentation.
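As a concrete illustration of per-pixel classification (a minimal sketch, not taken from the talk), the snippet below runs an off-the-shelf DeepLabV3 model from torchvision on a single image; the file name street_scene.jpg is a placeholder.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a DeepLabV3 model pretrained with the 21-class PASCAL VOC label set.
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

# Standard ImageNet normalization expected by torchvision backbones.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("street_scene.jpg").convert("RGB")  # placeholder file
batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]         # shape: (1, num_classes, H, W)

# Per-pixel class assignment: argmax over the class dimension yields one
# class label per pixel, e.g. 15 = "person" in the VOC label set.
mask = logits.argmax(dim=1).squeeze(0)   # shape: (H, W)
print(mask.unique())                     # class IDs present in the scene
```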
Item Recognition in Retail
Computer vision has vast potential in the retail space. 7-Eleven is working on fast, frictionless checkout applications to better serve customers. These solutions range from faster checkout systems to fully automated cashierless stores. A key goal for such solutions is to ensure high accuracy and a consistent customer experience across thousands of stores. In this talk, Sumedh Datar, Senior Machine Learning Engineer at 7-Eleven, focuses on how his company has built scalable item recognition models and algorithms that work on tens of thousands of products. He discusses the challenges 7-Eleven faces in building practical, edge-based solutions—such as the need to recognize thousands of items with varying packaging and sizes, and the need to deploy systems on constrained hardware—and he explains the techniques his company has employed to overcome these challenges.
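One common pattern for recognizing items from a very large, frequently changing catalog (an illustrative sketch only; the talk does not disclose 7-Eleven’s specific method) is to match an embedding of the query image against a precomputed gallery of product embeddings, so a new product means adding a vector rather than retraining a classifier. Here is a minimal sketch with synthetic data:

```python
import numpy as np

# Illustrative only -- not 7-Eleven's actual system. Embeddings would come
# from a feature-extraction network run offline over product photos.

def cosine_similarity(query: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a gallery matrix."""
    query = query / np.linalg.norm(query)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery @ query

# Hypothetical data: 10,000 products, 256-dimensional embeddings.
rng = np.random.default_rng(0)
gallery_embeddings = rng.standard_normal((10_000, 256)).astype(np.float32)
product_ids = [f"sku_{i}" for i in range(10_000)]

# Embedding of the item seen at checkout (synthetic stand-in here).
query_embedding = rng.standard_normal(256).astype(np.float32)

scores = cosine_similarity(query_embedding, gallery_embeddings)
best = int(np.argmax(scores))
print(f"Best match: {product_ids[best]} (score {scores[best]:.3f})")
```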
UPCOMING INDUSTRY EVENTS
Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision – e-con Systems Webinar: January 24, 2024, 9:00 am PT
Optimizing Camera Design for Machine Perception Via End-to-end Camera Simulation – Immervision Webinar: February 6, 2024, 9:00 am PT
Embedded Vision Summit: May 21-23, 2024, Santa Clara, California
More Events
FEATURED NEWS
STMicroelectronics’ Next-generation Multizone Time-of-flight Sensor Boosts Ranging Performance and Power Savings
Intel Accelerates AI Everywhere with Launch of Powerful Next-generation Processor Products
AMD Showcases Growing Momentum for Its AI Processing Solutions from the Data Center to PCs
Ambarella Unveils Full Software Stack for Autonomous and Semi-autonomous Driving, Optimized for Its CV3-AD Central AI Domain Controller Family
Renesas Delivers New RA8 MCU Family Targeting Graphic Display Solutions and Voice/Vision Multimodal AI Applications
More News
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Qualcomm Cognitive ISP (Best Camera or Sensor)
Qualcomm’s Cognitive ISP is the 2023 Edge AI and Vision Product of the Year Award winner in the Cameras and Sensors category. The Cognitive ISP (within the Snapdragon 8 Gen 2 Mobile Platform) is the only ISP for smartphones that can apply the AI photo-editing technique called “Semantic Segmentation” in real time. Semantic Segmentation is like “Photoshop layers,” but handled completely within the ISP, and it turns great photos into spectacular photos. Because it runs in real time, it operates while you’re capturing photos and videos, or even before: you can see objects in the viewfinder being enhanced as you’re getting ready to shoot. A real-time segmentation filter is groundbreaking; it means the camera is truly contextually aware of what it’s seeing. Qualcomm achieved this by building a physical bridge between the ISP and the DSP, called Hexagon Direct Link. The DSP runs Semantic Segmentation neural networks in real time, and thanks to Hexagon Direct Link, the DSP and the ISP can operate simultaneously: the ISP captures images while the DSP assigns context to every image in real time.
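To make the “Photoshop layers” idea concrete, here is a conceptual sketch (not Qualcomm’s implementation) of how a per-pixel semantic mask can gate different enhancements for different regions of a frame; the class IDs and gain values are hypothetical.

```python
import numpy as np

# Conceptual sketch only: per-class enhancement driven by a semantic mask,
# the kind of operation the Cognitive ISP performs in hardware.

SKY, FACE, FOLIAGE = 0, 1, 2  # hypothetical class IDs

def enhance(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Boost each region according to its semantic class."""
    out = frame.astype(np.float32)
    out[mask == SKY]     *= 1.10  # deepen the sky
    out[mask == FOLIAGE] *= 1.05  # lift greens slightly
    # Faces left untouched here; a real pipeline might smooth or relight.
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy 4x4 RGB frame and matching mask, stand-ins for live camera output.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
mask = np.array([[SKY]*4, [SKY]*4, [FOLIAGE]*4, [FACE]*4])
print(enhance(frame, mask)[0, 0])  # sky pixel brightened: [140 140 140]
```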
Please see here for more information on Qualcomm’s Cognitive ISP. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.