
Edge AI and Vision Insights: March 20, 2024

LETTER FROM THE EDITOR
Dear Colleague,

2024 Vision Tank

The Edge AI and Vision Alliance is proud to announce the ten semifinalists in the 2024 Vision Tank Start-up Competition. This annual event showcases the best new ventures using or enabling perceptual AI and computer vision. Check out the semifinalists’ pitch videos and descriptions here.

The finalist companies, to be selected and announced soon, will pitch their companies and products live to a panel of judges and the Embedded Vision Summit audience, both of whom will vote on an award winner. See the 2023 finalist competition video here. And, to experience the 2024 finalist competition in person, attend the upcoming Summit, taking place May 21-23 in Santa Clara, California. Register now using code SUMMIT24-NL for a 15% discount on your conference pass.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DRIVER MONITORING AND ASSISTANCE

ADAS: What’s Working and What Isn’t?
Ojo-Yoshida Report and TechInsights
This engaging onstage interview from the 2023 Embedded Vision Summit delves into the latest developments and most important trends in advanced driver-assistance systems (ADAS). Junko Yoshida, Editor-in-Chief of the Ojo-Yoshida Report and one of the foremost reporters covering ADAS, interviews Ian Riches, Vice President of the Global Automotive Practice for TechInsights and a top expert on ADAS markets and technologies. Together, they explore what’s working in ADAS and what isn’t, and why—from both the technology and market perspectives. Yoshida and Riches also share their perspectives on where the industry is heading, where the biggest opportunities lie and what important challenges remain to be overcome. Their insightful conversation offers a unique perspective on the future of ADAS and its impact on the automotive ecosystem.

Tracking and Fusing Diverse Risk Factors to Drive a SAFER Future
Nauto
Unless you’re a gang member or drug addict, driving is your top risk. But which risks can you handle and which can kill you? Since 96% of collisions are caused by human error and 75% are unconscious (the driver is not aware of the risk), Nauto used one billion miles of real-life driving data to find out, feeding it into a model fusing 26 road context, driver action/attention and vehicle dynamics factors to predict collisions, near misses and safe driving. As described by Yoav Banin, the company’s Chief Product and Business Development Officer, and Tahmida Mahmud, Nauto’s Engineering Manager for Perception, in this 2023 Embedded Vision Summit presentation, Nauto found that drivers can handle most single risks, but multiple simultaneous risks can be deadly. Some combinations are 1,000 times more dangerous than regular driving. Nauto’s “SAFER” model can be fitted to any late-model vehicle. It can be trained with labeled data from historical trips or through reinforcement learning from observations of excellent and poor drivers. A time series matrix can help you see a few seconds into the future.
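To make the idea of fusing time-windowed risk factors concrete, here is a minimal sketch in Python. The factor weighting, recency scheme and sigmoid scoring below are illustrative assumptions for this example only; they are not Nauto’s actual SAFER model.

```python
# Illustrative sketch only: fusing road-context, driver-attention and
# vehicle-dynamics signals over a short time window into a single risk score.
# The weighting scheme and scoring function are hypothetical, not Nauto's SAFER model.
import numpy as np

def risk_score(feature_window: np.ndarray, weights: np.ndarray) -> float:
    """Score a short time series of fused risk factors.

    feature_window: shape (T, F) -- T recent time steps, F fused factors
                    (e.g., road context, driver attention, vehicle dynamics).
    weights:        shape (F,)   -- learned importance of each factor.
    """
    T = feature_window.shape[0]
    # Weight recent time steps more heavily than older ones.
    recency = np.linspace(0.5, 1.0, T)[:, None]        # (T, 1)
    fused = (feature_window * recency) @ weights        # (T,)
    # Squash the summed evidence into a 0..1 risk value.
    return float(1.0 / (1.0 + np.exp(-fused.sum())))

# Example: 10 time steps of 26 factors, with random placeholder values.
rng = np.random.default_rng(0)
window = rng.random((10, 26))
w = rng.random(26)
print(f"collision risk over the next few seconds: {risk_score(window, w):.2f}")
```

The recency weighting reflects the “see a few seconds into the future” idea: the most recent observations contribute most to the predicted risk.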

VISION FOR SPORTS AND FITNESS

Developing an Embedded Vision AI-powered Fitness System
Peloton Interactive
The Guide is Peloton’s first strength-training product that runs on a physical device and also the first that uses AI technology. It turns any TV into an interactive personal training studio. Members have access to Peloton’s instructors, who lead classes that use dumbbells and body weight. The Guide uses an innovative combination of hardware, software and user interface; it is one of only a handful of consumer products in this market that use real-time embedded computer vision, and Peloton’s members love it! In this 2023 Embedded Vision Summit talk, Sanjay Nichani, Vice President for Artificial Intelligence and Computer Vision at Peloton Interactive, shares insights into how the computer vision technology behind the Guide was built. He focuses on data, models and processes. The AI and Computer Vision team at Peloton worked through numerous obstacles by using synthetic data, advanced algorithms and field-testing iteration to create an affordable AI-powered device that enables at-home fitness for everyone.

Computer Vision in Sports: Scalable Solutions for Downmarkets
Sportlogiq
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline spanning from visual input, to player and group activities, to player and team evaluation and planning. Despite major advancements in computer vision and machine learning, today’s sports analytics solutions are limited to top leagues and are not widely available for downmarket leagues and youth sports. In this 2023 Embedded Vision Summit presentation, Mehrsan Javan, Co-founder and CTO of Sportlogiq, explains how his company has developed scalable and robust computer vision solutions to democratize sports analytics and offer pro-league-level insights to leagues with modest resources, including youth leagues. He highlights key challenges, such as the requirement for low-cost, low-latency processing and the need for robustness despite variations in venues. He discusses the approaches Sportlogiq tried and how it ultimately overcame these challenges, including the use of transformers and the fusion of multiple types of data streams to maximize accuracy.
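As a rough illustration of fusing multiple data streams with a transformer, the sketch below projects two hypothetical streams (per-frame visual features and tracking features) into a shared token space and lets self-attention mix them. The stream names, dimensions and two-layer encoder are assumptions made for this example; they are not Sportlogiq’s architecture.

```python
# Hypothetical two-stream fusion with a transformer encoder, in the spirit of
# the approach described above. Stream sizes and layer counts are illustrative.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, vision_dim=256, tracking_dim=32, d_model=128, n_heads=4):
        super().__init__()
        # Project each stream into a shared token space.
        self.vision_proj = nn.Linear(vision_dim, d_model)
        self.tracking_proj = nn.Linear(tracking_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 10)  # e.g., 10 hypothetical event classes

    def forward(self, vision_tokens, tracking_tokens):
        # Concatenate per-frame tokens from both streams; self-attention mixes them.
        tokens = torch.cat(
            [self.vision_proj(vision_tokens), self.tracking_proj(tracking_tokens)],
            dim=1,
        )
        fused = self.encoder(tokens)
        return self.head(fused.mean(dim=1))  # pool over tokens, then classify

# Example: a batch of 2 clips, each with 16 vision tokens and 16 tracking tokens.
model = TwoStreamFusion()
logits = model(torch.randn(2, 16, 256), torch.randn(2, 16, 32))
print(logits.shape)  # torch.Size([2, 10])
```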

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

Arm Announces New Automotive Technologies and a Roadmap of Compute Subsystems to Enable Faster Time to Market for AI-enabled Vehicles

AMD Extends Its FPGA Portfolio with the Spartan UltraScale+ Family Built for Cost-sensitive Edge Applications

Visidon’s AI-powered Low-light Video Enhancement is Selected for the Hailo-15 AI Vision Processor

The Intel Core Ultra Extends AI PCs to the Enterprise with a New Intel vPro Platform

STMicroelectronics Expands into 3D Depth Sensing with its Latest Time-of-flight Sensors

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Piera Systems Canāree Air Quality Monitor (Best Enterprise Edge AI End Product)
Piera Systems
Piera Systems’ Canāree Air Quality Monitor is the 2023 Edge AI and Vision Product of the Year Award winner in the Enterprise Edge AI End Products category. The Canāree family of air quality monitors (AQMs) is compact, highly accurate, and easy to use. The Canāree AQMs are also highly innovative and cost-effective, owing largely to the quality of the data they produce, which in turn helps classify, and in some cases identify, specific pollutants. Identifying when someone is vaping in a school bathroom or a hotel room is a good example of this technology in action. Classification of pollutants is done by applying AI/ML techniques to the highly accurate data produced by the Canāree AQMs; this is the only low-cost AQM in the world with such a capability. The Canāree AQMs measure various environmental factors including particles, temperature, pressure, humidity, and VOCs. While many similar products exist in the market, Canāree is the only one with a highly accurate particle sensor, which sets it apart. Canāree AQMs measure particles ranging from 10 microns in size all the way down to 100 nanometers, a unique capability in this industry. This particle data is distributed into seven size “bins,” and these bins are the foundation of the monitor’s classification capabilities.
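As a toy illustration of how binned particle-size data could drive classification, the sketch below trains a small classifier on hypothetical seven-bin particle counts. The bin edges, training rows, labels and choice of model are invented for this example and do not reflect Piera Systems’ actual pipeline.

```python
# Illustrative sketch only: classifying a pollution event from seven
# particle-size bins. All values and the model choice are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical size bins spanning 0.1 um (100 nm) up to 10 um -> 7 bins.
BIN_EDGES_UM = [0.1, 0.3, 0.5, 1.0, 2.5, 5.0, 7.5, 10.0]

# Toy training data: rows are particle counts per bin, labels are event types.
X_train = np.array([
    [900, 400, 120,  30,   5,  1,  0],   # "vaping"-like: dominated by fine particles
    [850, 380, 100,  25,   4,  0,  0],
    [ 50,  80, 150, 200, 180, 90, 40],   # "dust"-like: coarser distribution
    [ 40,  70, 140, 210, 170, 85, 35],
])
y_train = ["vaping", "vaping", "dust", "dust"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Classify a new reading from the monitor.
reading = np.array([[880, 390, 110, 28, 5, 1, 0]])
print(clf.predict(reading))  # -> ['vaping']
```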

Please see here for more information on Piera Systems’ Canāree Air Quality Monitor. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

 

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411