Automotive Applications for Embedded Vision
Vision products in automotive applications can serve to enhance the driving experience by making us better and safer drivers through both driver and road monitoring.
Driver monitoring applications use computer vision to ensure that the driver remains alert and awake while operating the vehicle. These systems can monitor head movement and body language for indications that the driver is drowsy and thus poses a threat to others on the road. They can also detect distracted-driving behaviors such as texting or eating, responding with a friendly reminder that encourages the driver to focus on the road instead.
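One widely used drowsiness cue is prolonged eye closure, often estimated with an eye aspect ratio (EAR) computed over facial landmarks. The sketch below is illustrative only: the six-point landmark layout, the 0.2 threshold, and the frame count are assumptions, and a real system would feed it landmarks from a face-tracking model running on live video.

```python
import math

def eye_aspect_ratio(landmarks):
    """EAR: ratio of eye height to eye width; drops toward 0 as the eye closes.
    Assumes a hypothetical 6-point eye contour p1..p4 spanning the width and
    p2/p6, p3/p5 forming the vertical pairs."""
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_drowsy(ear_history, threshold=0.2, min_frames=15):
    """Flag drowsiness when EAR stays below the threshold for min_frames
    consecutive frames -- i.e., eyes closed too long to be a normal blink."""
    if len(ear_history) < min_frames:
        return False
    return all(e < threshold for e in ear_history[-min_frames:])
```

A blink produces only a brief dip in EAR, so requiring many consecutive low-EAR frames is what separates drowsiness from normal blinking.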
In addition to monitoring activities occurring inside the vehicle, exterior applications such as lane departure warning systems can use video with lane detection algorithms to recognize the lane markings and road edges and estimate the position of the car within the lane. The driver can then be warned in cases of unintentional lane departure. Solutions exist to read roadside warning signs and to alert the driver if they are not heeded, as well as for collision mitigation, blind spot detection, park and reverse assist, self-parking vehicles and event-data recording.
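Once a detection algorithm has located the lane markings in the image, estimating the car's position within the lane can be as simple as comparing the image center (where the camera is assumed to be mounted) to the lane center. A minimal sketch, where the pixel coordinates and the 0.8 warning threshold are illustrative assumptions:

```python
def lane_offset(left_x, right_x, image_width):
    """Normalized lateral offset of the camera (assumed at the image center)
    within the lane: 0 = centered, +/-1 = sitting on a lane marking."""
    lane_center = (left_x + right_x) / 2.0
    half_width = (right_x - left_x) / 2.0
    return (image_width / 2.0 - lane_center) / half_width

def departure_warning(left_x, right_x, image_width, threshold=0.8):
    """Warn when the vehicle drifts beyond `threshold` of the half-lane width."""
    return abs(lane_offset(left_x, right_x, image_width)) > threshold
```

A production system would also suppress the warning when the turn signal is active, since an indicated lane change is intentional.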
Eventually, this technology will lead to cars with self-driving capability; Google, for example, is already testing prototypes. However, many automotive industry experts believe that the goal of vision in vehicles is not so much to eliminate the driving experience as to make it safer, at least in the near term.
Exploring the Components of LiDAR
Automotive autonomy has triggered enormous interest in sensors that collect both vehicle and road information. Among them, three-dimensional (3D) light detection and ranging (LiDAR), a remote sensing method that uses laser light to measure distances and create precise 3D maps of the surroundings, provides high angular resolution and long detection range.
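At its core, a LiDAR range measurement is a time-of-flight calculation: the laser pulse travels to the target and back, so range is half the round-trip path; combining each range with the beam's pointing angles yields the 3D points that make up the map. A minimal sketch (the 1 µs example timing below is illustrative):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Range from a time-of-flight measurement: the pulse goes out and
    back, so the target distance is half the round-trip path."""
    return C * round_trip_s / 2.0

def polar_to_cartesian(r, azimuth_rad, elevation_rad):
    """Convert one LiDAR return (range + beam angles) to a 3D point,
    the building block of a point-cloud map."""
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)
```

Because light covers roughly 30 cm per nanosecond, a 1 µs round trip corresponds to a target about 150 m away, which is why LiDAR timing electronics must resolve picoseconds to achieve centimeter-level range accuracy.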
Modernizing Automotive Interfaces
This blog post was originally published at Avnet’s website. It is reprinted here with the permission of Avnet. The MCU is a low-power, flexible and highly integrated device. It incorporates many peripherals, from clocks and timers to data converters and power management. Most MCUs also provide plenty of general-purpose input/output (GPIO) pins, along with
“Optimized Vision Language Models for Intelligent Transportation System Applications,” a Presentation from Nota AI
Tae-Ho Kim, Co-founder and CTO of Nota AI, presents the “Optimized Vision Language Models for Intelligent Transportation System Applications” tutorial at the May 2024 Embedded Vision Summit. In the rapidly evolving landscape of intelligent transportation systems (ITSs), the demand for efficient and reliable solutions has never been greater. In this…
Dream Chip and Cadence Demo Automotive SoC Featuring Tensilica AI IP at embedded world 2024
Cadence verification and RTL-to-GDS digital full-flow tuned for automotive safety, quality and reliability requirements 18 Jun 2024 – At embedded world 2024, Cadence and Dream Chip demonstrated Dream Chip’s latest automotive SoC, which features the Cadence® Tensilica® Vision P6 DSP IP and Cadence design IP controllers and was taped out using the complete Cadence® Verification
Automotive LiDAR Deployment Ramps Up in 2024
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. More new car models with LiDAR were released in 2023 than the previous four years. Chinese players lead the game. Supplier market share in the dynamic automotive LiDAR space is changing as
e-con Systems Unveils New Robust All-weather Global Shutter Ethernet Camera for Outdoor Applications
Latest powerful addition to its RouteCAM series California & Chennai (June 12, 2024): e-con Systems, a global leader in embedded vision solutions, introduces a new outdoor-ready global shutter GigE camera — RouteCAM_CU25, a powerful addition to its high-performance Ethernet camera series, RouteCAM. This Full HD Power over Ethernet (PoE) camera excels in delivering accurate
Leapmotor and Ambarella Announce Strategic Cooperation Agreement for Powerful Advanced Intelligent Driving Development
HANGZHOU, China and SANTA CLARA, Calif., June 11, 2024 — Leapmotor (HKEX: 09863), a technology-driven intelligent electric vehicle company with a full suite of R&D capabilities, and Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, recently signed a strategic cooperation agreement. The two companies will focus on creating a first-class intelligent driving experience for
Lattice Introduces Advanced 3D Sensor Fusion Reference Design for Autonomous Applications
HILLSBORO, Ore. – May 22, 2024 – Lattice Semiconductor (NASDAQ: LSCC), the low power programmable leader, today announced a new 3D sensor fusion reference design to accelerate advanced autonomous application development. Combining a low power, low latency, deterministic Lattice Avant™-E FPGA with Lumotive’s Light Control Metasurface (LCM™) programmable optical beamforming technology, the reference design enables
Ambarella’s Next-Gen AI SoCs for Fleet Dash Cams and Vehicle Gateways Enable Vision Language Models and Transformer Networks Without Fan Cooling
Two New 5nm SoCs Provide Industry-Leading AI Performance Per Watt, Uniquely Allowing Small Form Factor, Single Boxes With Vision Transformers and VLM Visual Analysis SANTA CLARA, Calif., May 21, 2024 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced during AutoSens USA, the latest generation of its AI systems-on-chip (SoCs) for in-vehicle
Free Webinar Explores Processing Solutions for ADAS and Autonomous Vehicles
On July 24, 2024 at 9 am PT (noon ET), Ian Riches, Vice President of the Global Automotive Practice at TechInsights, will present the free one-hour webinar “Who is Winning the Battle for ADAS and Autonomous Vehicle Processing, and How Large is the Prize?,” organized by the Edge AI and Vision Alliance. Here’s the description,