Functions
The listing below showcases the most recently published content associated with various AI and visual intelligence functions.

“Using a Collaborative Network of Distributed Cameras for Object Tracking,” a Presentation from Invision AI
Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, presents the “Using a Collaborative Network of Distributed Cameras for Object Tracking” tutorial at the May 2023 Embedded Vision Summit. Using multiple fixed cameras to track objects requires a careful solution design. To enable scaling…

ProHawk Technology Group Overview of AI-enabled Computer Vision Restoration
Brent Willis, Chief Operating Officer of the ProHawk Technology Group, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Willis discusses the company’s AI-enabled computer vision restoration technology. ProHawk’s patented algorithms and technologies enable real-time, pixel-by-pixel video restoration, overcoming virtually all environmental…

DeGirum Demonstration of Streaming Edge AI Development and Deployment
Konstantin Kudryavtsev, Vice President of Software Development at DeGirum, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Kudryavtsev demonstrates streaming edge AI development and deployment using the company’s JavaScript and Python SDKs and its cloud platform. On the software front, DeGirum…

Cadence Demonstrations of Generative AI and People Tracking at the Edge
Amol Borkar, Director of Product and Marketing for Vision and AI DSPs at Cadence Tensilica, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Borkar demonstrates two applications running on customers’ SoCs, showcasing Cadence’s pervasiveness in AI. The first demonstration is of…

“Reinventing Smart Cities with Computer Vision,” a Presentation from Hayden AI
Vaibhav Ghadiok, Co-founder and CTO of Hayden AI, presents the “Reinventing Smart Cities with Computer Vision” tutorial at the May 2023 Embedded Vision Summit. Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic enforcement, parking and asset management. In this talk,…

“Item Recognition in Retail,” a Presentation from 7-Eleven
Sumedh Datar, Senior Machine Learning Engineer at 7-Eleven, presents the “Item Recognition in Retail” tutorial at the May 2023 Embedded Vision Summit. Computer vision has vast potential in the retail space. 7-Eleven is working on fast frictionless checkout applications to better serve customers. These solutions range from faster checkout systems…

“Lessons Learned in Developing a High-volume, Vision-enabled Coffee Maker,” an Interview with Keurig Dr Pepper
Jason Lavene, Director of Advanced Development Engineering at Keurig Dr Pepper, talks with Jeff Bier, Founder of the Edge AI and Vision Alliance, for the “Lessons Learned in Developing a High-volume, Vision-enabled Coffee Maker” interview at the May 2023 Embedded Vision Summit. Why did Keurig Dr Pepper—a $12B beverage company—spend…

“Multiple Object Tracking Systems,” a Presentation from Tryolabs
Javier Berneche, Senior Machine Learning Engineer at Tryolabs, presents the “Multiple Object Tracking Systems” tutorial at the May 2023 Embedded Vision Summit. Multiple object tracking (MOT) is an essential capability in many computer vision systems, including applications in fields such as traffic control, self-driving vehicles, sports and more. In this…

Reimagining Indoor Localization with Dragonfly: A Glimpse into Uncharted Precision
This blog post was originally published by Onit. It is reprinted here with the permission of Onit. Hello tech enthusiasts! Today, we’re diving into the dynamic world of indoor localization once again, this time with a closer look at the ingenious technology driving Dragonfly. As many of you are already aware, Dragonfly stands as a…

“Introduction to Semantic Segmentation,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development at Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2023 Embedded Vision Summit. Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different…