Object Identification Functions
Deep Learning Models Which Pay Attention (Part II): Attention (Special Focus) in Computer Vision
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. In the previous article, I described attention mechanisms using an example from natural language processing. Although this method originated in language processing, that is not its only use. We can also apply attention mechanisms…
A Detailed Look at Using AI in Embedded Smart Cameras
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Combining artificial intelligence (AI) with embedded cameras has paved the way for visual systems that can intelligently see and interact with their surroundings. Discover the role of AI in embedded camera applications, the benefits, and…
Case History: Florim
This blog post was originally published by Onit. It is reprinted here with the permission of Onit. 80 Forklifts for Indoor and Outdoor Logistics Florim is a multinational company recognized for its production of ceramic surfaces. With an innate passion for beauty and design, Florim has been producing ceramic surfaces for every building, architecture and…
The Future of Automotive Radar: Miniaturizing Size and Maximizing Performance
Radar has been one of the most significant additions to vehicles in the past two decades. It provides luxury advanced driver assistance system (ADAS) features like adaptive cruise control (ACC), as well as critical safety features like automatic emergency braking and blind spot detection. It has grown from an expensive accessory feature on the most…
The Foundation Models Reshaping Computer Vision
This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Learn about the foundation models — for object classification, object detection, and segmentation — that are redefining computer vision. Foundation models have come to computer vision! Initially limited to language tasks, foundation models can now serve as the backbone of computer…
Deep Learning Models Which Pay Attention (Part I)
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. The attention mechanism brought major changes to deep learning, enabling models to achieve better results. It also inspired perceivers and transformer neural networks, and transformers led to the development…
Embodied AI: How Do AI-powered Robots Perceive the World?
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. While robots have proliferated in recent years in smart cities, factories and homes, we are mostly interacting with robots controlled by classical handcrafted algorithms. These are robots that have a narrow goal and don’t learn from their…
Using Synthetic Data to Address Novel Viewpoints for Autonomous Vehicle Perception
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Autonomous vehicles (AV) come in all shapes and sizes, ranging from small passenger cars to multi-axle semi-trucks. However, a perception algorithm deployed on these vehicles must be trained to handle similar situations, like avoiding an obstacle or…
NVIDIA TAO Toolkit “Zero to Hero”: A Simple Guide for Model Comparison in Object Detection
This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 2 of our NVIDIA TAO Toolkit series, we describe and address the common challenges of model deployment, in particular edge deployment. We explore practical solutions to these challenges, especially the issues surrounding model comparison. Here…
Why is Explaining Machine Learning Models Important?
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. Why is explaining machine learning models important? The main focus in machine learning projects is to optimize metrics like accuracy, precision, recall, etc. We put effort into hyper-parameter tuning or designing good data pre-processing. What if these…
Free Webinar Explores How ISPs are Key to Optimizing Image Quality and Computer Vision Accuracy
On January 24, 2024 at 9 am PT (noon ET), Suresh Madhu, Head of Product Marketing, and Arun Asokan, Head of the ISP Division, both of e-con Systems, will present the free one-hour webinar “Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision,” organized by the Edge AI and Vision Alliance. Here’s…
Modern Vehicles See More with Computer Vision
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The Snapdragon Ride Vision System is designed to enhance vehicle perception for safer driving experiences. Today’s drivers reap the benefits of active safety features in their vehicles. Automatic emergency braking, lane departure warnings, blind spot detection and…
Heart Rate Detection with OpenCV
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. It’s probably no surprise to you that heart rate can be measured using different gadgets like smartphones or smartwatches. But did you know you can measure it using just the camera…
“Understanding, Selecting and Optimizing Object Detectors for Edge Applications,” a Presentation from Walmart Global Tech
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit. Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays…
Why is Air Quality Monitoring Going Mobile?
In recent years, there has been significant interest in the use of low-cost gas sensors affixed to lampposts, trees, and traffic lights within smart cities to monitor outdoor air quality. Yet before this industry has really taken off, there are already signs of a trend away from the adoption of expansive sensor networks towards mobile…
“Vision-language Representations for Robotics,” a Presentation from the University of Pennsylvania
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit. In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing…
Dragonfly Base: Enhancing Indoor Localization in Challenging Environments with Visual Markers
We’re thrilled to share the second episode of our Dragonfly video series. In this video, we delve deeper into Dragonfly’s capabilities, specifically focusing on how we enhance indoor localization under challenging conditions. Key takeaways: the significance of visual markers in computer vision and visual SLAM, and our solution, placing visual markers on the ceiling, which ensures consistent and accurate…
“Introduction to Modern LiDAR for Machine Perception,” a Presentation from the University of Ottawa
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit. In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work…