Technologies
The listing below showcases the most recently published content on various AI and visual intelligence functions.

“Reinventing Smart Cities with Computer Vision,” a Presentation from Hayden AI
Vaibhav Ghadiok, Co-founder and CTO of Hayden AI, presents the “Reinventing Smart Cities with Computer Vision” tutorial at the May 2023 Embedded Vision Summit. Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic enforcement, parking and asset management. In this talk,…

“Item Recognition in Retail,” a Presentation from 7-Eleven
Sumedh Datar, Senior Machine Learning Engineer at 7-Eleven, presents the “Item Recognition in Retail” tutorial at the May 2023 Embedded Vision Summit. Computer vision has vast potential in the retail space. 7-Eleven is working on fast, frictionless checkout applications to better serve customers. These solutions range from faster checkout systems…

“Lessons Learned in Developing a High-volume, Vision-enabled Coffee Maker,” an Interview with Keurig Dr Pepper
Jason Lavene, Director of Advanced Development Engineering at Keurig Dr Pepper, talks with Jeff Bier, Founder of the Edge AI and Vision Alliance, for the “Lessons Learned in Developing a High-volume, Vision-enabled Coffee Maker” interview at the May 2023 Embedded Vision Summit. Why did Keurig Dr Pepper—a $12B beverage company—spend…

“Multiple Object Tracking Systems,” a Presentation from Tryolabs
Javier Berneche, Senior Machine Learning Engineer at Tryolabs, presents the “Multiple Object Tracking Systems” tutorial at the May 2023 Embedded Vision Summit. Multiple object tracking (MOT) is an essential capability in many computer vision systems, including applications in fields such as traffic control, self-driving vehicles, sports and more. In this…
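The core idea behind the MOT systems this talk covers can be sketched in a few lines: match each new detection to an existing track by bounding-box overlap (IoU), and assign a fresh ID when nothing matches. This is a hypothetical minimal illustration, not the approach or library presented in the talk.

```python
# Minimal sketch of an IoU-based greedy tracker (illustrative only).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class GreedyTracker:
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track ID -> last seen box
        self.next_id = 0

    def update(self, detections):
        """Assign a track ID to each detected box in the current frame."""
        assigned = {}
        unmatched = dict(self.tracks)
        for box in detections:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in unmatched.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:          # no overlap: a new object enters
                best_id = self.next_id
                self.next_id += 1
            else:
                del unmatched[best_id]   # greedy: one detection per track
            assigned[best_id] = box
        self.tracks = assigned
        return assigned
```

Real MOT systems add motion models, appearance embeddings, and track lifecycle management on top of this matching step, but the match-or-spawn loop is the common skeleton.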

Reimagining Indoor Localization with Dragonfly: A Glimpse into Uncharted Precision
This blog post was originally published by Onit. It is reprinted here with the permission of Onit. Hello tech enthusiasts! Today, we’re diving into the dynamic world of indoor localization once again, this time with a closer look at the ingenious technology driving Dragonfly. As many of you are already aware, Dragonfly stands as a…

“Introduction to Semantic Segmentation,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development at Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2023 Embedded Vision Summit. Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different…
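The distinction this summary hints at can be shown with a toy example (hypothetical, not from the talk): an object detector yields a bounding box, while semantic segmentation yields a class label for every pixel. Reducing a per-pixel mask to its enclosing box shows how much information the box representation discards.

```python
# Toy illustration: per-pixel segmentation mask vs. detector-style box.

def mask_to_box(mask, class_id):
    """Tightest (row1, col1, row2, col2) box around pixels of class_id."""
    rows = [r for r, row in enumerate(mask) for v in row if v == class_id]
    cols = [c for row in mask for c, v in enumerate(row) if v == class_id]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))

# 0 = background, 1 = object: the mask captures the exact pixel footprint,
# while the box only captures its extent.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
```

Here the object occupies 3 pixels, but its bounding box covers 4, which is why tasks needing exact shape or free-space estimates use segmentation rather than detection.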

Flex Logix Demonstration of Its InferX IP for AI Inference Implementing Object Detection at the Edge
Jeremy Roberson, Technical Director and Software Architect for AI and Machine Learning at Flex Logix, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Roberson demonstrates the company’s InferX IP for AI inference at the edge, implementing object detection.

Object Detection and Tracking Step by Step Guide: A Hands-on Exercise
This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. This is the third entry in our Road to Machine Learning series; if you read the first and second entries and did your homework, congratulations! The force is growing on you! By now, you should feel familiar…

Enhancing Object Detection: The Impact of Visidon CNN-based Noise Reduction
This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. In the realm of computer vision, object detection plays a vital role in various applications, including surveillance systems, autonomous driving, and image recognition. However, accurate object detection can be challenging in real-world scenarios due to the presence…
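The post’s theme, denoising as a preprocessing step for detection, can be sketched with a plain 3×3 mean filter (a simple stand-in for Visidon’s CNN, which is not reproduced here) followed by a naive threshold “detector”: an isolated noise spike triggers a false detection on the raw image but is suppressed after filtering.

```python
# Illustrative denoise-then-detect pipeline (not Visidon's actual method).

def mean_filter3(img):
    """3x3 box blur over a 2D list of floats; borders left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = sum(img[r + dr][c + dc]
                            for dr in (-1, 0, 1)
                            for dc in (-1, 0, 1)) / 9.0
    return out

def detect(img, threshold=0.5):
    """Naive detector: any pixel above threshold counts as an object hit."""
    return [(r, c) for r, row in enumerate(img)
            for c, v in enumerate(row) if v > threshold]
```

A single bright noise pixel averages down to about 0.11 after the blur and falls below the threshold, while the interior of a genuine bright region keeps its value, which is the basic reason denoising improves detection robustness.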

NXP Demonstration of AI Functions Implemented on Low-power MCX and RT Microcontrollers
Anthony Huereca, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Huereca demonstrates AI functions running on NXP’s low-power microcontrollers (MCUs). NXP’s new MCX N MCU, which includes the NXP-developed eIQ Neutron NPU, delivers an approximately 40x inference performance improvement when…