Welcome to 2023! The Alliance thanks you for your support this past year and wishes you the very best for the coming year.
Registration is now open for the 2023 Embedded Vision Summit, coming up May 22-25 in Santa Clara, California! The Summit is the premier conference and tradeshow for innovators incorporating computer vision and visual or perceptual AI in products. The program is designed to cover the most important technical and business aspects of practical computer vision, deep learning and perceptual AI. Register now and you can save 25%: don’t delay!
The Vision Tank is the Edge AI and Vision Alliance’s annual start-up competition, showcasing the best new ventures using computer vision or visual AI in their products or services. The competition is open to early-stage companies; entrants are judged on four criteria: technology innovation, business plan, team and business opportunity. It is intended for start-ups that:
Have an initial product or prototype
Have ~15 or fewer people
Have raised less than ~$2M in capital
The Vision Tank final round takes place live on stage during the Embedded Vision Summit. Winners receive:
A $5,000 cash award
Membership in the Edge AI and Vision Alliance for one year
The opportunity to present their new products or product ideas to more than 1,400 influencers and product creators at the 2023 Embedded Vision Summit
Brand awareness and visibility through Alliance marketing channels
Advice from top industry experts
Introductions to potential investors, customers, employees and suppliers
For more information and to enter, please see the program page. The submission deadline is March 3 and the application requires detailed information, so don’t delay!
Editor-In-Chief, Edge AI and Vision Alliance
VISUAL AI IN RETAIL APPLICATIONS
The Future of Retail is Here, and It’s Powered by Embedded Computer Vision
Now that computers can see, the way people interact with things will never be the same. Grabango’s checkout-free shopping system, winner of a 2022 Edge AI and Vision Product of the Year Award, allows you to shop in grocery and convenience stores without having to drag every single item over a barcode scanner to check out. Grabango’s technology sees every product, understands what it is and knows where it is at all times. As items leave the store, the system generates receipts and charges customers appropriately. The system is deployed as a distributed embedded computing platform, running state-of-the-art neural networks, and is optimized for very high transaction volumes. In this talk, Will Glaser, Founder and Chief Executive Officer of Grabango, provides an overview of Grabango’s system and discusses how vision-based checkout systems are harder to design and build than alternative approaches but deliver compelling benefits. He then covers why an embedded approach is so critical to serving consumers’ real-world needs.
Instant Item Training: Practical AI for the Retail Industry
Mashgin makes computer vision-based checkout systems for convenience stores, airports, cafeterias and stadiums. Building AI applications for the real world presents unique challenges—both business and technical. In this presentation, Mukul Dhankhar, Co-founder and CTO of Mashgin, reviews the business and technical requirements for automated checkout, discusses the benefits delivered by Mashgin’s system and then focuses on the development of a critical feature: Instant Item Training. Instant Item Training allows new items to be added to Mashgin’s system in minutes, not hours or days, greatly increasing the system’s usability by store personnel. Dhankhar talks about the business drivers behind this feature, the related technical challenges, and how his company overcame them.
MULTI-MODAL SENSOR FUSION
How to Enhance Edge AI Vision Using Multi-Modal Sensing
Machine learning-based vision edge AI has wide applicability across a variety of segments, including consumer electronics, home security, smart buildings, smart city and factory automation. To date, most vision edge AI implementations have focused solely on vision: detecting people, objects and activities. Moreover, implementations have suffered from high power consumption, typically requiring AC power. These two factors have limited the penetration of vision edge AI. In this presentation, Shay Kamin Braun, Director of Low-power AI Marketing at Synaptics, describes a modern approach based around the Katana low-power edge AI SoC that improves performance and optimizes power consumption by fusing together inputs from a variety of sensors (including vision, sound and environmental, among others) into an AI processor running multiple ML models in parallel. He shows how this approach enables the design of more intelligent, context-aware, battery-powered edge AI inference devices, significantly broadening the usefulness and penetration of vision edge AI across multiple markets and new applications.
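To make the power-saving idea behind multi-modal sensing concrete, here is a minimal sketch of one common pattern: inexpensive always-on modalities (sound, motion) gate the power-hungry vision model, so camera inference runs only when context suggests activity. Everything here (function names, thresholds, sensor readings and the stand-in detector) is invented for illustration and is not from the Synaptics talk or the Katana SoC’s actual API.

```python
# Illustrative multi-modal gating sketch: low-cost sensors decide when
# to wake the expensive vision model. All names and values hypothetical.

def should_run_vision(sound_level_db, motion_detected, sound_threshold_db=40.0):
    # Fuse the cheap modalities first; wake the vision pipeline only
    # when either modality indicates something worth looking at.
    return motion_detected or sound_level_db > sound_threshold_db

def run_person_detector(frame):
    # Stand-in for an actual ML model; returns a dummy detection.
    return {"label": "person", "confidence": 0.87}

def process_frame(frame, sound_level_db, motion_detected):
    if not should_run_vision(sound_level_db, motion_detected):
        return None  # stay in the low-power state, skip inference
    return run_person_detector(frame)
```

A real device would run several such models in parallel and feed the fused context back into the gating logic, but the gist is the same: spend energy on vision only when the other sensors justify it.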
Strategies and Methods for Sensor Fusion
Highly autonomous machines require advanced perception capabilities. Autonomous machines are generally equipped with three main sensor types: cameras, lidar and radar. The intrinsic limitations of each sensor affect the performance of the perception task. One way to increase overall performance is to combine the information coming from different sensor types. This is the objective of sensor fusion: to combine the information from different sensors and thus improve the perceptual ability of the system. This way the system can better operate under challenging environmental conditions by relying on the sensor data that is the least impacted by the current situation (e.g. poor lighting, adverse weather). In this talk, Robert Laganiere, CEO of Sensor Cortek, presents the main sensor fusion strategies that can be used for combining heterogeneous sensor data. In particular, he explores the three primary fusion methods that can be applied in a perception system: early fusion, late fusion and mid-level fusion.
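Of the three methods the abstract names, late fusion is the easiest to sketch: each sensor pipeline finishes its own detection independently, and the system combines the finished estimates. The sketch below shows a confidence-weighted average of per-sensor position estimates; the sensor values and weights are illustrative assumptions, not material from the talk. (Early fusion would instead combine raw data or features before any detector runs; mid-level fusion sits in between.)

```python
# Late-fusion sketch: camera, lidar and radar each report an (x, y)
# position estimate with a confidence score; fusion is a
# confidence-weighted average. All numbers are illustrative.

def late_fusion(detections):
    """Combine per-sensor (x, y, confidence) estimates of one object."""
    total = sum(conf for _, _, conf in detections)
    x = sum(px * conf for px, _, conf in detections) / total
    y = sum(py * conf for _, py, conf in detections) / total
    return x, y

# In good lighting the camera is trusted most; the radar keeps a lower
# but weather-robust confidence. In fog or rain, the camera confidence
# would drop and the fused estimate would lean toward the radar.
camera = (10.0, 5.0, 0.9)
lidar = (10.2, 5.1, 0.8)
radar = (9.8, 4.9, 0.5)
fused = late_fusion([camera, lidar, radar])
```

The per-sensor confidences are exactly where the environmental robustness comes from: the method down-weights whichever sensor the current conditions degrade.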
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Blaize Pathfinder P1600 Embedded System on Module (Best Edge AI Processor)
Blaize’s Pathfinder P1600 Embedded System on Module (SoM) is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Processors category. Based on the Blaize Graph Streaming Processor (GSP) architecture, the Pathfinder P1600 Embedded SoM delivers high processing performance at low power, with the high system utilization ideal for AI inference workloads in edge applications. Smaller than a credit card, the P1600 operates with 50x lower memory bandwidth, 10x lower latency and 30x better efficiency than legacy GPUs, delivering 16 TOPS at 7 W and opening the door to previously unfeasible AI inference solutions for edge vision use cases, including in-camera and in-machine processing at the sensor edge as well as network edge equipment. The Pathfinder platform is 100% programmable via the Blaize Picasso SDK, a comprehensive software environment that accelerates AI development cycles and is uniquely based on open standards (OpenCL and OpenVX), with support for ML frameworks such as TensorFlow, PyTorch, Caffe2 and ONNX. The Picasso SDK permits building complete end-to-end applications with greater transparency, flexibility and portability.
Please see here for more information on Blaize’s Pathfinder P1600 Embedded SoM. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.
Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.