Automotive Applications for Embedded Vision
Vision products in automotive applications can make us better and safer drivers
Vision products in automotive applications enhance the driving experience through two complementary roles: monitoring the driver and monitoring the road.
Driver monitoring applications use computer vision to ensure that the driver remains alert and awake while operating the vehicle. These systems can monitor head movement and body language for indications that the driver is drowsy and thus poses a threat to others on the road. They can also watch for distracted behaviors such as texting or eating, responding with a friendly reminder that encourages the driver to focus on the road instead.
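As a rough illustration of the principle (not any shipping system's method), the toy sketch below uses OpenCV's stock Haar cascades to flag a driver whose eyes go undetected for a run of consecutive frames. The cascade choices, camera index, and thresholds are all assumptions for demonstration; production driver-monitoring systems use far more sophisticated gaze and head-pose models.

    # Toy drowsiness monitor: if no open eyes are detected inside the face
    # region for several consecutive frames, assume the driver's eyes are
    # closed and raise an alert. All thresholds here are illustrative only.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    CLOSED_FRAMES_LIMIT = 15  # about 0.5 s at 30 fps; assumed threshold
    closed_frames = 0

    cap = cv2.VideoCapture(0)  # assumed in-cabin camera index
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        eyes_found = False
        for (x, y, w, h) in faces:
            roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper face half
            if len(eye_cascade.detectMultiScale(roi, 1.1, 10)) > 0:
                eyes_found = True
        closed_frames = 0 if eyes_found else closed_frames + 1
        if closed_frames > CLOSED_FRAMES_LIMIT:
            print("ALERT: eyes not detected, possible drowsiness")
    cap.release()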
In addition to monitoring activities occurring inside the vehicle, exterior applications such as lane departure warning systems can use video with lane detection algorithms to recognize the lane markings and road edges and estimate the position of the car within the lane. The driver can then be warned in cases of unintentional lane departure. Solutions exist to read roadside warning signs and to alert the driver if they are not heeded, as well as for collision mitigation, blind spot detection, park and reverse assist, self-parking vehicles and event-data recording.
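A minimal sketch of the lane-position estimate behind such a warning, using a standard Canny-plus-Hough pipeline in OpenCV. The region of interest, Hough parameters, and offset tolerance are illustrative assumptions; real lane-keeping systems track lanes over time with far more robust models.

    # Toy lane-departure check: find lane-line segments in one road frame,
    # estimate the lane centre from their horizontal positions, and warn if
    # the camera (assumed to be vehicle-centred) drifts off that centre.
    import cv2
    import numpy as np

    def lane_departure_warning(frame, offset_tol=0.15):
        h, w = frame.shape[:2]
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # Keep only a trapezoidal region of interest ahead of the vehicle.
        mask = np.zeros_like(edges)
        roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                         (w // 2 + 50, h // 2), (w, h)]], np.int32)
        cv2.fillPoly(mask, roi, 255)
        edges = cv2.bitwise_and(edges, mask)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                                minLineLength=40, maxLineGap=100)
        if lines is None:
            return False  # no lane markings found; no verdict
        xs = [(x1 + x2) / 2 for x1, y1, x2, y2 in lines[:, 0]]
        lane_center = sum(xs) / len(xs)
        # Departure if the lane centre sits far from the image centre.
        return abs(lane_center - w / 2) > offset_tol * w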
Eventually, this technology will lead to cars with self-driving capability; Google, for example, is already testing prototypes. However, many automotive industry experts believe that the goal of vision in vehicles, at least in the near term, is not so much to eliminate the driving experience as to make it safer.

AI-enhanced In-cabin Sensing Systems
As vehicle intelligence continues to rise, in-cabin sensing systems will be largely responsible for increased communication, sensitivity, and smart features within cars. IDTechEx’s report, “In-Cabin Sensing 2025-2035: Technologies, Opportunities, and Markets”, presents the latest technology developments within the sector, along with forecasts for their uptake over the next ten years. Where AI meets …

D3 Embedded, HTEC, Texas Instruments and Tobii Pioneer the Integration of Single-camera and Radar Interior Sensor Fusion for In-cabin Sensing
The companies joined forces to develop sensor-fusion-based interior sensing for enhanced vehicle safety, launching at the InCabin Europe conference on October 7-9. Rochester, NY – October 6, 2025 – Tobii, with its automotive interior sensing branch Tobii Autosense, together with D3 Embedded and HTEC, today announced the development of an interior sensing solution …
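The announcement does not describe the fusion math itself. As a generic illustration of how readings from two independent interior sensors can be combined, here is a textbook log-odds fusion of a per-seat camera confidence and a per-seat radar confidence; the probabilities and sensor models are entirely assumed and do not represent Tobii's, D3's, or HTEC's actual approach.

    # Toy camera+radar occupancy fusion in log-odds form: two independent
    # per-seat probabilities are combined into one fused estimate. The
    # input values are placeholders, not any vendor's actual outputs.
    import math

    def log_odds(p):
        return math.log(p / (1.0 - p))

    def fuse(p_camera, p_radar, prior=0.5):
        # Independent-sensor fusion: add evidence in log-odds space,
        # subtracting the shared prior once so it is not double counted.
        l = log_odds(p_camera) + log_odds(p_radar) - log_odds(prior)
        return 1.0 / (1.0 + math.exp(-l))

    # Camera sees a likely occupant; radar (e.g. micro-motion) agrees weakly.
    print(fuse(0.80, 0.60))  # fused confidence of roughly 0.86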

How Do Speed Cameras Make the Roads Safer?
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Speed cameras play a crucial role in promoting safer roads. They can detect, capture, and record instances of speeding. Get insights into how these cameras work and their major road safety use cases. Managing road …
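One common variant, the average-speed camera, simply times a vehicle between two reference points a known distance apart. A toy sketch of that arithmetic follows; the frame numbers, frame rate, and gate spacing are assumed for illustration, not taken from any real deployment.

    # Toy average-speed measurement: time a vehicle between two virtual
    # gate lines a known real-world distance apart, the principle behind
    # average-speed cameras.
    def estimate_speed_kmh(frame_enter, frame_exit, fps, gate_distance_m):
        elapsed_s = (frame_exit - frame_enter) / fps
        return gate_distance_m / elapsed_s * 3.6  # convert m/s to km/h

    # Vehicle crosses the gates 45 frames apart at 30 fps over a 20 m gap.
    print(estimate_speed_kmh(100, 145, 30.0, 20.0))  # about 48 km/h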

STMicroelectronics and Tobii Enter Mass Production of Breakthrough Interior Sensing Technology
Starting mass production of an advanced interior sensing system for a premium European carmaker, for enhanced driver and passenger monitoring. The cost-effective single-camera solution combines Tobii’s interior-sensing technology and ST’s imaging sensors to deliver wide-angle, high-quality imaging in daytime and nighttime environments. Stockholm, Sweden; Geneva, Switzerland – October 2, 2025 — Tobii, the global leader in …

How e-con Systems Built a Multi-camera Solution for Large Mining Vehicles to Eliminate Blind Spots
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Large mining vehicles tend to operate in unpredictable terrain and often face issues like poor lighting or heavy dust. Find out how e-con Systems helped a client select and deploy a multi-camera solution for their …

Semiconductors at the Heart of Automotive’s Next Chapter
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Automotive White Paper, Vol. 2, Powered by Yole Group – Shifting gears! KEY TAKEAWAYS: The automotive semiconductor market will soar from $68 billion in 2024 to $132 billion in 2030, growing at a …
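(For context, growth from $68 billion in 2024 to $132 billion in 2030 corresponds to a compound annual growth rate of (132/68)^(1/6) − 1 ≈ 11.7%. This is our own arithmetic from the figures quoted above, not necessarily the rate the report itself states.)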

PerCV.ai: How a Vision AI Platform and the STM32N6 can Turn Around an 80% Failure Rate for AI Projects
This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. The vision AI platform PerCV.ai (pronounced Perceive AI) could be the secret weapon that enables a company to deploy an AI application when so many others fail. The solution from Irida Labs, a member of the ST …

Automated Driving for All: Snapdragon Ride Pilot System Brings State-of-the-art Safety and Comfort Features to Drivers Across the Globe
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Qualcomm Technologies, Inc. introduces Snapdragon Ride Pilot at IAA Mobility 2025. What you should know: Qualcomm Technologies, Inc. has introduced Snapdragon Ride Pilot to help make driving more safety-focused and convenient for people around the world. Features …

Qualcomm and Google Cloud Deepen Collaboration to Bring Agentic AI Experiences to the Auto Industry
Highlights: Landmark technical collaboration brings together the strengths of two industry leaders with Google Gemini models and Qualcomm Snapdragon Digital Chassis solutions to help automakers create deeply personalized and advanced AI agents that will redefine customers’ experiences at every point in their journeys. Combines the best of both worlds – powerful on-device AI for instant …

Accelerate Autonomous Vehicle Development with the NVIDIA DRIVE AGX Thor Developer Kit
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Autonomous vehicle (AV) technology is rapidly evolving, fueled by ever-larger and more complex AI models deployed at the edge. Modern vehicles now require not only advanced perception and sensor fusion, but also end-to-end deep learning pipelines that …