Object Identification Functions
The Impact of AI on the Automotive Industry
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. The automotive industry has undergone a profound transformation over the years, with technological advancements driving innovation at an unprecedented pace. One of the most influential technologies shaping the future of vehicles is Artificial Intelligence. AI’s integration into
“Vision Language Models for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio
Carter Maslan, CEO of Camio, presents the “Vision Language Models for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the December 2024 Edge AI and Vision Innovation Forum. In this presentation, you’ll learn how vision language models interpret policy text to enable much more sophisticated understanding of scenes and human behavior compared with current-generation
LG and Ambarella Join Forces to Advance AI-driven In-cabin Vehicle Safety Solutions
LG Sets New Standard in Accuracy and Reliability for In-Cabin Solutions with Ambarella-Powered Driver Monitoring System; Plans Demo During CES 2025 SEOUL, Korea and SANTA CLARA, Calif., Dec. 4, 2024 — LG Electronics (LG), a mobility sector technology leader, and Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced that LG will showcase
Passenger Detection and Nighttime Safety – Exploring Infrared Technology
ADAS braking systems, night vision, and driver monitoring can all be enhanced using the infrared spectrum. IDTechEx’s latest report, “Infrared (IR) Cameras for Automotive 2025-2035: Technologies, Opportunities, Forecasts”, explores long-wave infrared (LWIR), short-wave infrared (SWIR), and near-infrared (NIR) sensors as means to increase the safety and protection of road users. Long-wave infrared and autonomous possibilities
The Unseen Cost of Low Quality Large Datasets
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Your current data selection process may be limiting your models. Massive datasets come with obvious storage and compute costs. But the two biggest challenges are often hidden: Money and Time. With increasing data volumes, companies have a
Why You Don’t Need Two Separate Cameras for RGB and IR Imaging in Remote Patient Monitoring
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. This blog post explores how RGB-IR cameras simplify remote patient monitoring (RPM) by eliminating the need for separate day and night cameras. It highlights their benefits, including compact design, reduced power consumption, and simultaneous RGB
Top 4 Computer Vision Problems & Solutions in Agriculture — Part 2
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 1 of this series we introduced the top 4 issues you are likely to encounter in agriculture-related datasets for object detection: occlusion, label quality, data imbalance and scale variation. In Part 2
Synthetic Data is Revolutionizing Sensor Tech: Real Results from Virtual Worlds
This blog post was originally published at Geisel Software’s website. It is reprinted here with the permission of Geisel Software. Imagine you’re a developer on your first day at a new job. You’re handed a state-of-the-art sensor designed to capture data for an autonomous vehicle. The excitement quickly turns to anxiety as you realize the
Top 4 Computer Vision Problems & Solutions in Agriculture — Part 1
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 1 of this series, we highlight the 4 main issues you are likely to encounter in object detection datasets in agriculture. We begin by summarizing the challenges of applying AI to crop monitoring and yield
What is Depth of Field and Its Relevance in Embedded Vision?
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Depth of Field (DoF) is crucial for embedded vision since it can improve the ability to process and analyze visual data. It is impacted by factors such as aperture size, focal length, and more. Get
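As the excerpt notes, depth of field depends on aperture and focal length. A minimal sketch of the standard thin-lens DoF calculation is shown below; the circle-of-confusion value and the example parameters are illustrative assumptions, not figures from e-con Systems’ post.

```python
def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance: focusing here makes everything from H/2 to
    infinity acceptably sharp. coc_mm is the circle of confusion."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm: float, f_number: float, subject_mm: float,
                  coc_mm: float = 0.03) -> tuple[float, float]:
    """Near and far limits of acceptable sharpness for a subject distance."""
    H = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = (subject_mm * (H - focal_mm) / (H - subject_mm)
           if subject_mm < H else float("inf"))
    return near, far
```

Running this for a hypothetical 50 mm lens focused at 5 m shows the expected behavior: stopping down from f/2 to f/8 widens the in-focus zone considerably, which is why aperture choice matters for embedded vision systems that must keep a whole scene sharp.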
Give AI a Look: Any Industry Can Now Search and Summarize Vast Volumes of Visual Data
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Accenture, Dell Technologies and Lenovo are among the companies tapping a new NVIDIA AI Blueprint to develop visual AI agents that can boost productivity, optimize processes and create safer spaces. Enterprises and public sector organizations around the
“Embedded Vision Opportunities and Challenges in Retail Checkout,” an Interview with Zebra Technologies
Anatoly Kotlarsky, Distinguished Member of the Technical Staff in R&D at Zebra Technologies, talks with Phil Lapsley, Co-Founder and Vice President of BDTI and Vice President of Business Development at the Edge AI and Vision Alliance, for the “Embedded Vision Opportunities and Challenges in Retail Checkout” interview at the May…
“Cost-efficient, High-quality AI for Consumer-grade Smart Home Cameras,” a Presentation from Wyze
Lin Chen, Chief Scientist at Wyze, presents the “Cost-efficient, High-quality AI for Consumer-grade Smart Home Cameras” tutorial at the May 2024 Embedded Vision Summit. In this talk, Chen explains how Wyze delivers robust visual AI at ultra-low cost for millions of consumer smart cameras, and how his company is rapidly…
Optimizing the CV Pipeline in Automotive Vehicle Development Using the PVA Engine
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. In the field of automotive vehicle software development, more large-scale AI models are being integrated into autonomous vehicles. The models range from vision AI models to end-to-end AI models for autonomous driving. Now the demand for computing
“Multi-object Tracking Systems,” a Presentation from Tryolabs
Javier Berneche, Senior Machine Learning Engineer at Tryolabs, presents the “Multiple Object Tracking Systems” tutorial at the May 2024 Embedded Vision Summit. Object tracking is an essential capability in many computer vision systems, including applications in fields such as traffic control, self-driving vehicles, sports and more. In this talk, Berneche…
“Improved Navigation Assistance for the Blind via Real-time Edge AI,” a Presentation from Tesla
Aishwarya Jadhav, Software Engineer in the Autopilot AI Team at Tesla, presents the “Improved Navigation Assistance for the Blind via Real-time Edge AI” tutorial at the May 2024 Embedded Vision Summit. In this talk, Jadhav presents recent work on AI Guide Dog, a groundbreaking research project aimed at providing navigation…
“Introduction to Modern Radar for Machine Perception,” a Presentation from Sensor Cortek
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern Radar for Machine Perception” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Laganière provides an introduction to radar (short for radio detection and ranging) for machine perception. Radar is…
2023 Milestone: More than 760,000 LiDAR Systems in Passenger Cars. Which Technology Leads the Market?
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Hesai, Seyond, RoboSense, and Valeo: Yole Group’s analysts invite you to dive deep into leading LiDAR tech. OUTLINE: LiDAR can achieve an angular resolution of 0.05°, offering more precise object detection and