Automotive Applications for Embedded Vision
Vision products in automotive applications can make us better and safer drivers
Vision products in automotive applications can enhance the driving experience by making us better, safer drivers through both driver and road monitoring.
Driver monitoring applications use computer vision to ensure that the driver remains alert and awake while operating the vehicle. These systems can monitor head movement and body language for signs that the driver is drowsy and thus poses a threat to others on the road. They can also watch for distracted-driving behaviors such as texting or eating, responding with a friendly reminder that encourages the driver to focus on the road instead.
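As an illustrative sketch (not a method described in this article), one widely used drowsiness cue in driver monitoring systems is the eye aspect ratio (EAR) computed from facial landmarks: the ratio collapses toward zero when the eye closes, and a run of consecutive "closed" frames triggers an alert. The landmark ordering, threshold, and frame count below are assumptions for illustration.

```python
import math

def _dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six eye landmarks p1..p6, ordered
    clockwise starting at the outer corner (a common convention for
    facial-landmark detectors). Open eye -> larger ratio."""
    v1 = _dist(eye[1], eye[5])  # vertical distance p2-p6
    v2 = _dist(eye[2], eye[4])  # vertical distance p3-p5
    h = _dist(eye[0], eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.2   # below this, treat the eye as closed (tunable)
CLOSED_FRAMES = 48    # consecutive closed frames before alerting (tunable)

def update_drowsiness(ear, closed_count):
    """Per-frame state update: returns (new_count, alert_flag)."""
    closed_count = closed_count + 1 if ear < EAR_THRESHOLD else 0
    return closed_count, closed_count >= CLOSED_FRAMES

# Example: a wide-open eye vs. a nearly closed one.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

In a real system the landmarks would come from a face/landmark detector running on each camera frame; this sketch only shows the per-frame decision logic.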
In addition to monitoring activities occurring inside the vehicle, exterior applications such as lane departure warning systems can apply lane detection algorithms to video in order to recognize lane markings and road edges and estimate the position of the car within the lane. The driver can then be warned in cases of unintentional lane departure. Solutions also exist to read roadside warning signs and alert the driver if they are not heeded, as well as for collision mitigation, blind spot detection, park and reverse assist, self-parking vehicles and event-data recording.
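The position-in-lane step above can be sketched as follows. Assuming the lane boundaries have already been detected (for example with a Hough-transform or sliding-window detector) and fitted as polynomials x = f(y) in image coordinates, the lateral offset of the car from the lane center follows from simple geometry. The lane width, warning threshold, and pixel-to-meter scale below are illustrative assumptions, not values from the article.

```python
# Illustrative parameters (assumptions for this sketch):
LANE_WIDTH_M = 3.7           # typical highway lane width, meters
DEPARTURE_THRESHOLD_M = 0.6  # warn when the car drifts this far off center

def _polyval(coeffs, y):
    """Evaluate a polynomial with highest-order coefficient first."""
    x = 0.0
    for c in coeffs:
        x = x * y + c
    return x

def lane_offset(left_fit, right_fit, y_eval, img_width, m_per_px):
    """Lateral offset of the camera (assumed mounted at the vehicle
    centerline, i.e. image center) from the lane center, in meters.
    left_fit/right_fit are polynomial coefficients x = f(y) for the
    detected lane boundaries; y_eval is a row near the image bottom."""
    x_left = _polyval(left_fit, y_eval)
    x_right = _polyval(right_fit, y_eval)
    lane_center_px = (x_left + x_right) / 2.0
    car_center_px = img_width / 2.0
    return (car_center_px - lane_center_px) * m_per_px

def departure_warning(offset_m):
    """Flag an unintentional lane departure once the offset exceeds the threshold."""
    return abs(offset_m) > DEPARTURE_THRESHOLD_M

# Example: straight vertical lane lines at x=200 and x=1000 in a
# 1280-pixel-wide image, with the 3.7 m lane spanning 800 pixels.
offset = lane_offset([200.0], [1000.0], 700, 1280, LANE_WIDTH_M / 800.0)
```

A production system would combine this with turn-signal state, so that deliberate lane changes do not trigger the warning.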
Eventually, this technology will lead to cars with self-driving capability; Google, for example, is already testing prototypes. However, many automotive industry experts believe that the goal of vision in vehicles is not so much to eliminate the driving experience as to make it safer, at least in the near term.
Speeding Up Deep Learning Inference
This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Starting with TensorRT 7.0, the Universal Framework Format (UFF) is being deprecated. In this post, you learn how to deploy TensorFlow-trained deep learning models using the new TensorFlow-ONNX-TensorRT workflow. Figure 1 shows…
“Market Analysis on SoCs for Imaging, Vision and Deep Learning in Automotive and Mobile Markets,” a Presentation from Yole Développement
John Lorenz, Market and Technology Analyst for Computing and Software at Yole Développement, delivers the presentation “Market Analysis on SoCs for Imaging, Vision and Deep Learning in Automotive and Mobile Markets” at the Edge AI and Vision Alliance’s March 2020 Vision Industry and Technology Forum. Lorenz presents Yole Développement’s latest…
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. A deep neural network takes a two-stage approach to address lidar processing challenges. Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how
What’s New: The Institute of Electrical and Electronics Engineers (IEEE) has approved a proposal to develop a standard for safety considerations in automated vehicle (AV) decision-making and named Intel Senior Principal Engineer Jack Weast to lead the workgroup. Participation in the workgroup is open to companies across the AV industry, and Weast hopes for broad
“Improving the Safety and Performance of Automated Vehicles Through Precision Localization,” a Presentation from VSI Labs
Phil Magney, founder of VSI Labs, presents the “Improving the Safety and Performance of Automated Vehicles Through Precision Localization” tutorial at the May 2019 Embedded Vision Summit. How does a self-driving car know where it is? Magney explains how autonomous vehicles localize themselves against their surroundings through the use of a variety of sensors along
Gergely Debreczeni, Chief Scientist at AImotive, presents the “Distance Estimation Solutions for ADAS and Automated Driving” tutorial at the May 2019 Embedded Vision Summit. Distance estimation is at the heart of automotive driver assistance systems (ADAS) and automated driving (AD). Simply stated, safe operation of vehicles requires robust distance estimation. Many different types of sensors
“Can We Have Both Safety and Performance in AI for Autonomous Vehicles?,” a Presentation from Codeplay Software
Andrew Richards, CEO and Co-founder of Codeplay Software, presents the “Can We Have Both Safety and Performance in AI for Autonomous Vehicles?” tutorial at the May 2019 Embedded Vision Summit. The need for ensuring safety in AI subsystems within autonomous vehicles is obvious. How to achieve it is not. Standard safety engineering tools are designed
Tom Wilson, Vice President of Automotive at Graphcore, presents the “DNN Challenges and Approaches for L4/L5 Autonomous Vehicles” tutorial at the May 2019 Embedded Vision Summit. The industry has made great strides in development of L4/L5 autonomous vehicles, but what’s available today falls far short of expectations set as recently as two to three years
David Julian, CTO and Founder of Netradyne, presents the “Addressing Corner Cases in Embedded Computer Vision Applications” tutorial at the May 2019 Embedded Vision Summit. Many embedded vision applications require solutions that are robust in the face of very diverse real-world inputs. For example, in automotive applications, vision-based safety systems may encounter unusual configurations of
“What’s Changing in Autonomous Vehicle Investments Worldwide — and Why?,” a Presentation from Woodside Capital Partners
Rudy Burger, Managing Partner at Woodside Capital Partners, presents the "What’s Changing in Autonomous Vehicle Investments Worldwide—and Why?" tutorial at the May 2019 Embedded Vision Summit. So far, over $100B has been invested by industry into the development of autonomous vehicles (AVs), and the pace of investment has recently accelerated. In this talk, Burger presents