Object Tracking Functions
Case History: Florim
This blog post was originally published by Onit. It is reprinted here with the permission of Onit. 80 Forklifts for Indoor and Outdoor Logistics: Florim is a multinational company recognized for its production of ceramic surfaces. With an innate passion for beauty and design, Florim has been producing ceramic surfaces for every building, architecture and
The Future of Automotive Radar: Miniaturizing Size and Maximizing Performance
Radar has been one of the most significant additions to vehicles in the past two decades. It provides luxury advanced driver assistance system (ADAS) features like adaptive cruise control (ACC), as well as critical safety features like automatic emergency braking and blind spot detection. It has grown from an expensive accessory feature on the most
Embodied AI: How Do AI-powered Robots Perceive the World?
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. While robots have proliferated in recent years in smart cities, factories and homes, we are mostly interacting with robots controlled by classical handcrafted algorithms. These are robots that have a narrow goal and don’t learn from their
Using Synthetic Data to Address Novel Viewpoints for Autonomous Vehicle Perception
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Autonomous vehicles (AV) come in all shapes and sizes, ranging from small passenger cars to multi-axle semi-trucks. However, a perception algorithm deployed on these vehicles must be trained to handle similar situations, like avoiding an obstacle or
The Role of the Inertial Measurement Unit in 3D Time-of-flight Cameras
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. An Inertial Measurement Unit (IMU) detects movements and rotations across six degrees of freedom, representing the types of motion a system can experience. When paired with Time-of-flight (ToF) cameras, it ensures accurate spatial understanding and
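The six degrees of freedom an IMU reports (three axes of linear acceleration, three of angular rate) are commonly fused into an orientation estimate before being combined with depth data. As a rough illustration of that idea (not e-con Systems' implementation), here is a minimal complementary-filter sketch in Python that blends a drifting gyro integration with a noisy but drift-free accelerometer reading, using made-up sample values:

```python
import math

def complementary_pitch(accel, gyro_rate, dt, pitch, alpha=0.98):
    """One complementary-filter step for the pitch angle (degrees).

    accel: (ax, ay, az) in units of g; gyro_rate: pitch rate in deg/s.
    alpha weights the smooth-but-drifting gyro path against the
    noisy-but-drift-free accelerometer path.
    """
    ax, ay, az = accel
    accel_pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular rate
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Synthetic stationary IMU: gravity on z, small constant gyro bias of 0.5 deg/s.
pitch = 0.0
for _ in range(100):
    pitch = complementary_pitch((0.0, 0.0, 1.0), gyro_rate=0.5, dt=0.01, pitch=pitch)
print(round(pitch, 2))
```

Pure gyro integration of this bias would drift by 0.5 deg over the simulated second; the accelerometer term keeps the estimate bounded near zero instead.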
Free Webinar Explores How ISPs are Key to Optimizing Image Quality and Computer Vision Accuracy
On January 24, 2024 at 9 am PT (noon ET), Suresh Madhu, Head of Product Marketing, and Arun Asokan, Head of the ISP Division, both of e-con Systems, will present the free one-hour webinar “Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision,” organized by the Edge AI and Vision Alliance. Here’s
Modern Vehicles See More with Computer Vision
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The Snapdragon Ride Vision System is designed to enhance vehicle perception for safer driving experiences Today’s drivers reap the benefits of active safety features in their vehicles. Automatic emergency braking, lane departure warnings, blind spot detection and
Heart Rate Detection with OpenCV
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. It’s probably no surprise to you that heart rate can be measured using different gadgets like smartphones or smartwatches. But did you know you can measure it using just the camera
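Camera-based heart rate estimation typically works by tracking tiny periodic color changes in skin pixels (remote photoplethysmography). As a hedged sketch of the signal-processing step only, and not Digica's actual method, the snippet below recovers a pulse frequency from a synthetic per-frame green-channel trace standing in for real camera frames:

```python
import numpy as np

def estimate_bpm(green_means, fps, lo=0.7, hi=4.0):
    """Estimate heart rate from a per-frame mean green-channel signal.

    The pulse appears as the dominant frequency between lo and hi Hz
    (42-240 BPM): remove the DC level, take an FFT, and pick the
    strongest in-band spectral peak.
    """
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()                       # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                            # Hz -> beats per minute

# Synthetic 10 s clip at 30 fps with a 1.2 Hz (72 BPM) pulse plus noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)
trace = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
print(round(estimate_bpm(trace, fps)))  # 72
```

In a real pipeline the `green_means` series would come from averaging a face region in each frame (e.g. located with an OpenCV face detector), with bandpass filtering and motion compensation added for robustness.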
“Understanding, Selecting and Optimizing Object Detectors for Edge Applications,” a Presentation from Walmart Global Tech
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit. Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays…
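Detectors of this kind usually emit many overlapping candidate boxes per object, so a standard post-processing step, non-maximum suppression (NMS), keeps only the highest-scoring box among heavily overlapping candidates. This is a generic textbook sketch, not code from the talk:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: visit boxes by descending score, drop heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate boxes on one object plus one distinct box elsewhere.
boxes = [(10, 10, 50, 80), (12, 12, 52, 82), (100, 30, 140, 90)]
scores = [0.9, 0.75, 0.8]
print(nms(boxes, scores))  # [0, 2]
```

The IoU threshold trades duplicate suppression against recall for closely spaced objects, one of the tuning knobs relevant when optimizing detectors for edge deployment.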
“Vision-language Representations for Robotics,” a Presentation from the University of Pennsylvania
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit. In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing…
Dragonfly Base: Enhancing Indoor Localization in Challenging Environments with Visual Markers
We’re thrilled to share the second episode of our Dragonfly video series. In this video, we delve deeper into Dragonfly’s capabilities, specifically focusing on how we enhance indoor localization under challenging conditions. Key Takeaways: The significance of visual markers in Computer Vision and Visual SLAM. Our ingenious solution—visual markers on the ceiling—ensuring consistent and accurate
“Introduction to Modern LiDAR for Machine Perception,” a Presentation from the University of Ottawa
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit. In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work…
Service Robots – Our New and Efficient Coworkers
When picturing a robot, an intimidating human-like figure might come to mind, but what if robots could help businesses gain maximum benefit from their unmatched efficiency and power? Service robots are technological marvels that combine the capabilities of human labor with fantastic flexibility and tireless efficiency. The most common
“Computer Vision in Sports: Scalable Solutions for Downmarket Leagues,” a Presentation from Sportlogiq
Mehrsan Javan, Co-founder and CTO of Sportlogiq, presents the “Computer Vision in Sports: Scalable Solutions for Downmarket Leagues” tutorial at the May 2023 Embedded Vision Summit. Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline…
“Developing a Computer Vision System for Autonomous Satellite Maneuvering,” a Presentation from SCOUT Space
Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, presents the “Developing a Computer Vision System for Autonomous Satellite Maneuvering” tutorial at the May 2023 Embedded Vision Summit. Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately…
“Sensor Fusion Techniques for Accurate Perception of Objects in the Environment,” a Presentation from the Sanborn Map Company
Baharak Soltanian, Vice President of Research and Development for the Sanborn Map Company, presents the “Sensor Fusion Techniques for Accurate Perception of Objects in the Environment” tutorial at the May 2023 Embedded Vision Summit. Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of…
“Developing an Embedded Vision AI-powered Fitness System,” a Presentation from Peloton Interactive
Sanjay Nichani, Vice President for Artificial Intelligence and Computer Vision at Peloton Interactive, presents the “Developing an Embedded Vision AI-powered Fitness System” tutorial at the May 2023 Embedded Vision Summit. The Guide is Peloton’s first strength-training product that runs on a physical device and also the first that uses AI…
How Low-light Camera Modules Power Critical Embedded Vision Applications: A Detailed Look
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Surveillance, inspection, and monitoring applications inevitably deal with poor lighting conditions, which cause noise and loss of detail in captured images. See what applications need low-light cameras and get a full understanding of how to