
Automotive Applications for Embedded Vision

Vision products in automotive applications can make us better and safer drivers

Vision products in automotive applications can enhance the driving experience by making us better and safer drivers through both driver and road monitoring.

Driver monitoring applications use computer vision to ensure that the driver remains alert and awake while operating the vehicle. These systems can monitor head movement and body language for indications that the driver is drowsy and therefore poses a threat to others on the road. They can also watch for distracting behaviors such as texting and eating, responding with a friendly reminder that encourages the driver to focus on the road instead.
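To make the drowsiness idea concrete, here is a minimal sketch of one widely used cue, the eye aspect ratio (EAR), which falls toward zero as the eyelids close. The landmark source (a face landmark detector) is assumed and not shown, and the threshold and frame-count values are illustrative, not calibrated:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered corner-to-corner.
    EAR drops toward 0 as the eyelid closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

class DrowsinessMonitor:
    """Flags drowsiness when EAR stays below a threshold for many
    consecutive frames. Values are illustrative, not calibrated."""
    def __init__(self, ear_threshold=0.21, closed_frames_limit=48):
        self.ear_threshold = ear_threshold
        self.closed_frames_limit = closed_frames_limit
        self.closed_frames = 0

    def update(self, left_eye, right_eye):
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        self.closed_frames = self.closed_frames + 1 if ear < self.ear_threshold else 0
        return self.closed_frames >= self.closed_frames_limit  # True -> alert driver
```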

In addition to monitoring activities inside the vehicle, exterior applications such as lane departure warning systems can apply lane detection algorithms to video in order to recognize lane markings and road edges and estimate the car's position within the lane. The driver can then be warned of unintentional lane departure. Solutions also exist to read roadside warning signs and alert the driver if they are not heeded, as well as for collision mitigation, blind spot detection, park and reverse assist, self-parking vehicles and event data recording.
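A minimal sketch of the lane-position estimation described above, assuming OpenCV, a forward-facing camera frame, and straight lane markings; production systems add camera calibration, temporal smoothing and curved-lane models:

```python
import cv2
import numpy as np

def detect_lane_offset(frame):
    """Estimate the car's lateral offset from lane center in one frame.
    Returns the offset in pixels, or None if no lane is found."""
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Keep only the road region in front of the car (lower trapezoid).
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 40),
                     (w // 2 + 50, h // 2 + 40), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    if lines is None:
        return None

    # Split line segments into left/right lane candidates by slope.
    left_x, right_x = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x2 == x1:
            continue                        # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.3:                    # left lane marking
            left_x += [x1, x2]
        elif slope > 0.3:                   # right lane marking
            right_x += [x1, x2]
    if not left_x or not right_x:
        return None

    lane_center = (np.mean(left_x) + np.mean(right_x)) / 2.0
    return (w / 2.0) - lane_center          # deviation from lane center
```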

Eventually, this technology will lead to cars with self-driving capability; Google, for example, is already testing prototypes. However, many automotive industry experts believe that the goal of vision in vehicles is not so much to eliminate the driving experience as to make it safer, at least in the near term.

Computer Vision in Surround View Applications

The ability to "stitch" together (offline or in real-time) multiple images taken simultaneously by multiple cameras and/or sequentially by a single camera, in both cases capturing varying viewpoints of a scene, is becoming an increasingly appealing (if not necessary) capability in an expanding variety of applications. High quality of results is a critical requirement, one

Read More »
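For the offline case the teaser describes, OpenCV's high-level Stitcher class is an easy way to experiment; a minimal sketch, assuming the input images overlap enough for feature matching (file names are placeholders):

```python
import cv2

# Load overlapping views of the same scene (file names are placeholders).
images = [cv2.imread(f) for f in ("view1.jpg", "view2.jpg", "view3.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed, status code:", status)  # e.g., too few matches
```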

“Unsupervised Everything,” a Presentation from Panasonic

Luca Rigazio, Director of Engineering for the Panasonic Silicon Valley Laboratory, presents the "Unsupervised Everything" tutorial at the May 2017 Embedded Vision Summit. The large amount of multi-sensory data available for autonomous intelligent systems is just astounding. The power of deep architectures to model these practically unlimited datasets is limited by only two factors: computational…

Read More »
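As a toy illustration of the unsupervised theme, the sketch below trains a one-hidden-layer autoencoder in NumPy to reconstruct unlabeled vectors; it is a stand-in for the much larger deep architectures the talk covers, not code from the presentation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "sensor" vectors (synthetic stand-in for real sensor data).
X = rng.normal(size=(256, 16))

d_in, d_hidden = X.shape[1], 4          # compress 16 -> 4 dimensions
W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
W2 = rng.normal(scale=0.1, size=(d_hidden, d_in))
lr = 0.01

for step in range(2000):
    H = np.tanh(X @ W1)                 # encoder
    X_hat = H @ W2                      # linear decoder
    err = X_hat - X                     # reconstruction error
    # Backpropagate through the two weight matrices.
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H ** 2)      # tanh derivative
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

print("reconstruction MSE:", float(np.mean(err ** 2)))
```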

“Designing a Vision-based, Solar-powered Rear Collision Warning System,” a Presentation from Pearl Automation

Aman Sikka, Vision System Architect at Pearl Automation, presents the "Designing a Vision-based, Solar-powered Rear Collision Warning System" tutorial at the May 2017 Embedded Vision Summit. Bringing vision algorithms into mass production requires carefully balancing trade-offs between accuracy, performance, usability, and system resources. In this talk, Sikka describes the vision algorithms along with the system…

Read More »
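One building block a vision-based rear collision warning system might use is time-to-contact estimated from how quickly a trailing vehicle grows in the image; under a pinhole model the expansion rate alone gives TTC, with no need for the true distance. The sketch below assumes the apparent widths come from a vehicle detector and tracker, which are not shown:

```python
def time_to_contact(width_prev, width_now, dt):
    """Estimate time to contact (seconds) from the apparent width of a
    trailing vehicle in two frames taken dt seconds apart.
    Apparent width scales as 1/distance, so for scale s = w_now/w_prev,
    TTC = dt / (s - 1) without knowing the true distance."""
    scale = width_now / width_prev
    if scale <= 1.0:
        return float("inf")              # not closing in
    return dt / (scale - 1.0)

# Example: bounding box grew from 80 to 84 px over 0.1 s.
ttc = time_to_contact(80, 84, 0.1)
print(f"time to contact: {ttc:.1f} s")   # 2.0 s -> worth warning
```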

“Collaboratively Benchmarking and Optimizing Deep Learning Implementations,” a Presentation from General Motors

Unmesh Bordoloi, Senior Researcher at General Motors, presents the "Collaboratively Benchmarking and Optimizing Deep Learning Implementations" tutorial at the May 2017 Embedded Vision Summit. For car manufacturers and other OEMs, selecting the right processors to run deep learning inference for embedded vision applications is a critical but daunting task. One challenge is the vast number…

Read More »
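Much of the benchmarking challenge is methodology rather than code; the sketch below shows the skeleton of a latency harness (warm-up runs, many timed iterations, percentile reporting), with a dummy workload standing in for a real inference call:

```python
import time
import statistics

def benchmark(infer, n_warmup=10, n_runs=100):
    """Time a single-inference callable and report latency statistics.
    `infer` is a placeholder for the model under test."""
    for _ in range(n_warmup):            # let caches and JITs settle
        infer()
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    samples.sort()
    return {
        "mean_ms": statistics.mean(samples),
        "p50_ms": samples[len(samples) // 2],
        "p99_ms": samples[int(len(samples) * 0.99) - 1],
    }

# Example with a dummy workload standing in for a real network.
stats = benchmark(lambda: sum(i * i for i in range(100_000)))
print(stats)
```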

“Approaches for Vision-based Driver Monitoring,” a Presentation from PathPartner Technology

Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the "Approaches for Vision-based Driver Monitoring" tutorial at the May 2017 Embedded Vision Summit. Since many road accidents are caused by driver inattention, assessing driver attention is important to preventing accidents. Distraction caused by other activities and sleepiness due to fatigue are the main causes of driver…

Read More »
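Head pose is one of the standard cues for the inattention assessment described in this talk's abstract; the sketch below estimates yaw with OpenCV's solvePnP and a generic 3D face model. The model points, the focal-length guess and the example landmarks are illustrative assumptions:

```python
import cv2
import numpy as np

# Generic 3D face model points (mm), a common approximation:
# nose tip, chin, left/right eye corners, left/right mouth corners.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0),
], dtype=np.float64)

def head_yaw_degrees(image_points, frame_size):
    """image_points: the six matching 2D landmarks from a face detector
    (not shown). Returns approximate yaw; large |yaw| = looking away."""
    h, w = frame_size
    focal = w  # crude focal-length guess for an uncalibrated camera
    camera = np.array([[focal, 0, w / 2], [0, focal, h / 2], [0, 0, 1]],
                      dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS,
                               np.asarray(image_points, dtype=np.float64),
                               camera, None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    # Rotation about the vertical axis, extracted from the rotation matrix.
    return np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))

# Illustrative landmark pixels for a 1280x720 frame (from a detector).
pts = [(640, 360), (640, 500), (520, 300), (760, 300), (570, 440), (710, 440)]
print("yaw (deg):", head_yaw_degrees(pts, (720, 1280)))
```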

“Computer-vision-based 360-degree Video Systems: Architectures, Algorithms and Trade-offs,” a Presentation from videantis

Marco Jacobs, VP of Marketing at videantis, presents the "Computer-vision-based 360-degree Video Systems: Architectures, Algorithms and Trade-offs" tutorial at the May 2017 Embedded Vision Summit. 360-degree video systems use multiple cameras to capture a complete view of their surroundings. These systems are being adopted in cars, drones, virtual reality, and online streaming systems. At first…

Read More »
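The individual cameras in 360-degree automotive systems are usually fisheye lenses, so a first processing stage is undistortion; a minimal sketch using OpenCV's fisheye module, assuming the calibration parameters K and D are already known (the values and file names below are placeholders):

```python
import cv2
import numpy as np

# Placeholder fisheye calibration: intrinsics K and distortion D would
# normally come from cv2.fisheye.calibrate() on checkerboard captures.
K = np.array([[420.0, 0.0, 640.0],
              [0.0, 420.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])  # k1..k4

frame = cv2.imread("fisheye_cam.jpg")          # placeholder file name
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
cv2.imwrite("rectified.jpg", undistorted)
```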

“Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the "Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles" tutorial at the May 2017 Embedded Vision Summit. A diverse set of sensor technologies is available and emerging to provide vehicle autonomy or driver assistance. These sensor technologies often…

Read More »
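To ground the sensor fusion theme, here is a tiny one-dimensional, Kalman-style fusion of two range estimates, say radar and vision, weighted by their variances; real ADAS stacks track full object states, but the core weighting step looks like this (all numbers are illustrative):

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Variance-weighted fusion of two independent estimates of the
    same quantity (the Kalman update for one scalar measurement)."""
    k = var_a / (var_a + var_b)            # gain: trust b in proportion
    mean = mean_a + k * (mean_b - mean_a)  # to how noisy a is
    var = (1.0 - k) * var_a
    return mean, var

# Radar: good range accuracy. Vision: noisier range, e.g., from bbox size.
radar = (24.8, 0.04)   # meters, variance
vision = (26.1, 1.0)
dist, var = fuse(*radar, *vision)
print(f"fused range: {dist:.2f} m (variance {var:.3f})")
```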

Embedded Vision Summit: May 18-21, Santa Clara, California

The preeminent event for practical, deployable computer vision and visual AI, for product creators who want to bring visual intelligence to their products.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
