Automotive Applications for Embedded Vision
Vision products in automotive applications can serve to enhance the driving experience by making us better and safer drivers through both driver and road monitoring.
Driver monitoring applications use computer vision to ensure that the driver remains alert and awake while operating the vehicle. These systems can monitor head movement and body language for indications that the driver is drowsy and thus poses a threat to others on the road. They can also watch for distracted behaviors such as texting or eating, responding with a friendly reminder that encourages the driver to refocus on the road.
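One widely used heuristic for the drowsiness monitoring described above is the eye aspect ratio (EAR) computed from facial landmarks: when the eyes stay closed for a sustained run of frames, the system raises an alert. The sketch below is a minimal illustration, assuming a separate face-landmark detector supplies six (x, y) eye points per frame; the function names and thresholds are hypothetical, not a production system.

```python
import math

def ear(eye):
    """Eye aspect ratio from six (x, y) landmarks ordered around the eye,
    p1..p6, with p1 and p4 the horizontal corners. A low EAR indicates
    the eye is closed."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def drowsy(ear_values, closed_thresh=0.2, min_frames=15):
    """Flag drowsiness when EAR stays below the threshold for at least
    min_frames consecutive frames (eyes closed for a sustained interval)."""
    run = 0
    for v in ear_values:
        run = run + 1 if v < closed_thresh else 0
        if run >= min_frames:
            return True
    return False

# Example: an open eye yields a high EAR; a long run of low EAR values
# (eyes closed across many frames) triggers the drowsiness flag.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
print(round(ear(open_eye), 3))
print(drowsy([0.3] * 10 + [0.1] * 20))
```

In practice the per-frame landmarks would come from a face-tracking model, and the thresholds would be tuned per camera placement and driver population.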
In addition to monitoring activities occurring inside the vehicle, exterior applications such as lane departure warning systems can use video with lane detection algorithms to recognize the lane markings and road edges and estimate the position of the car within the lane. The driver can then be warned in cases of unintentional lane departure. Solutions exist to read roadside warning signs and to alert the driver if they are not heeded, as well as for collision mitigation, blind spot detection, park and reverse assist, self-parking vehicles and event-data recording.
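The lane-position estimate described above can be sketched in a few lines: once the lane markings have been detected (for example, via edge detection plus a Hough transform), the car's lateral offset within the lane follows from where the two lane lines cross the bottom of the frame. This is a minimal illustration under the assumption of a centerline-mounted camera; the function names and warning threshold are hypothetical, and real systems also account for camera calibration and filter detections over time.

```python
def lane_offset(left_x: float, right_x: float, frame_width: int) -> float:
    """Return the car's offset from the lane center as a fraction of lane
    width: 0.0 means centered, +/-0.5 means on a lane boundary. Assumes the
    camera sits on the vehicle centerline, so the frame center corresponds
    to the car's position."""
    lane_center = (left_x + right_x) / 2.0
    lane_width = right_x - left_x
    car_center = frame_width / 2.0
    return (car_center - lane_center) / lane_width

def departure_warning(offset: float, threshold: float = 0.35) -> bool:
    """Flag an unintentional lane departure when the car drifts past the
    threshold fraction of the lane width toward either boundary."""
    return abs(offset) > threshold

# Example: a 1280-pixel-wide frame with lane lines detected at x=340 and
# x=940 at the bottom of the image -- the car is perfectly centered.
offset = lane_offset(340.0, 940.0, 1280)
print(round(offset, 3))            # 0.0
print(departure_warning(offset))   # False
```

A deployed system would suppress the warning when the turn signal is active, since a signaled lane change is intentional.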
Eventually, this technology will lead to cars with self-driving capability; Google, for example, is already testing prototypes. However, many automotive industry experts believe that the goal of vision in vehicles is not so much to eliminate the driving experience as simply to make it safer, at least in the near term.
The ability to "stitch" together (offline or in real time) multiple images taken simultaneously by multiple cameras and/or sequentially by a single camera, in both cases capturing varying viewpoints of a scene, is becoming an increasingly appealing (if not necessary) capability in an expanding variety of applications. High quality of results is a critical requirement.
This technical article was originally published on Texas Instruments' website (PDF). It is reprinted here with the permission of Texas Instruments. Cameras are the most precise mechanisms used to capture accurate data at high resolution. Like human eyes, cameras capture the resolution, minutiae and vividness of a scene in beautiful detail.
Minyoung Kim, Senior Research Engineer at Panasonic Silicon Valley Laboratory, presents the "A Fast Object Detector for ADAS using Deep Learning" tutorial at the May 2017 Embedded Vision Summit. Object detection has been one of the most important research areas in computer vision for decades. Recently, deep neural networks (DNNs) have led to significant improvements.
Luca Rigazio, Director of Engineering for the Panasonic Silicon Valley Laboratory, presents the "Unsupervised Everything" tutorial at the May 2017 Embedded Vision Summit. The amount of multi-sensory data available for autonomous intelligent systems is astounding. The power of deep architectures to model these practically unlimited datasets is limited by only two factors.
“Designing a Vision-based, Solar-powered Rear Collision Warning System,” a Presentation from Pearl Automation
Aman Sikka, Vision System Architect at Pearl Automation, presents the "Designing a Vision-based, Solar-powered Rear Collision Warning System" tutorial at the May 2017 Embedded Vision Summit. Bringing vision algorithms into mass production requires carefully balancing trade-offs between accuracy, performance, usability, and system resources.
“Collaboratively Benchmarking and Optimizing Deep Learning Implementations,” a Presentation from General Motors
Unmesh Bordoloi, Senior Researcher at General Motors, presents the "Collaboratively Benchmarking and Optimizing Deep Learning Implementations" tutorial at the May 2017 Embedded Vision Summit. For car manufacturers and other OEMs, selecting the right processors to run deep learning inference for embedded vision applications is a critical but daunting task.
Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the "Approaches for Vision-based Driver Monitoring" tutorial at the May 2017 Embedded Vision Summit. Since many road accidents are caused by driver inattention, assessing driver attention is important for preventing accidents. Distraction caused by other activities and sleepiness due to fatigue are the main causes of driver inattention.
“Automakers at a Crossroads: How Embedded Vision and Autonomy Will Reshape the Industry,” a Presentation from Lux Research
Mark Bünger, VP of Research at Lux Research, presents the "Automakers at a Crossroads: How Embedded Vision and Autonomy Will Reshape the Industry" tutorial at the May 2017 Embedded Vision Summit. The auto and telecom industries have been dreaming of connected cars for twenty years, but their results have been mediocre and mixed.
“Computer-vision-based 360-degree Video Systems: Architectures, Algorithms and Trade-offs,” a Presentation from videantis
Marco Jacobs, VP of Marketing at videantis, presents the "Computer-vision-based 360-degree Video Systems: Architectures, Algorithms and Trade-offs" tutorial at the May 2017 Embedded Vision Summit. 360-degree video systems use multiple cameras to capture a complete view of their surroundings. These systems are being adopted in cars, drones, virtual reality, and online streaming systems.
“Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles,” a Presentation from NXP Semiconductors
Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the "Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles" tutorial at the May 2017 Embedded Vision Summit. A diverse set of sensor technologies is available and emerging to provide vehicle autonomy or driver assistance.