
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella.

Vision-based AI requires outstanding video input in order to be effective.

Artificial intelligence solutions that rely on computer vision must begin with high-quality video. Nowhere is this truer than on the road.

When a vision-based system uses grainy, low-quality footage of traffic and pedestrians, any decisions it recommends are suspect. Any warnings it delivers are less reliable.

However, when the same system starts with high-quality imagery, its accuracy improves significantly. It is better equipped to successfully identify objects in the environment, evaluate complex scenarios, and make predictions as events unfold.

Its decision-making simply becomes better.

Nighttime image quality test with HDR processing.

Ambarella has been refining its own image processing pipeline—the unique combination of complex processes used to transform raw sensor data into pristine imagery—for nearly twenty years. During this time we’ve incorporated techniques learned through the development of best-selling cameras in market segments where image quality is paramount. Sports cameras like the original line of GoPro Hero devices. Drone cameras like the high-end Phantom series from DJI. Security cameras by Ring, Nest, Bosch, and Comcast, to name a few. Automotive drive recorders built into cars manufactured by Toyota, Ford, Honda, and many more.

“Ambarella has been refining its own image processing pipeline—the unique combination of complex processes used to transform raw sensor data into pristine imagery—for nearly twenty years.”

Although computer vision applications don’t “see” images the way humans do, that doesn’t diminish the importance of high-quality image processing—if anything, it becomes even more important when lives are at stake on the road. High dynamic range (HDR) processing, for example, allows advanced driver assistance systems (ADAS) to operate successfully where extreme contrast creates perception challenges, such as when a vehicle emerges from a tunnel. Related techniques help the same systems perform well in low light, rain, snow, or fog.
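To make the tunnel example concrete, here is a minimal sketch of global logarithmic tone mapping in pure Python. It is an illustrative operator only, not Ambarella’s pipeline; the `tone_map` helper and the sample luminance values are invented for this example.

```python
import math

def tone_map(luminance, max_lum):
    """Compress a high-dynamic-range luminance value into [0, 1]
    with a simple logarithmic curve. Illustrative sketch only;
    production pipelines use far more sophisticated local operators."""
    return math.log1p(luminance) / math.log1p(max_lum)

# A tunnel-exit scene: deep shadow inside, direct sunlight outside.
# 120 dB of dynamic range corresponds to a 10**6 : 1 luminance ratio.
scene = [0.5, 5.0, 5e3, 5e5]          # cd/m^2, shadow -> sunlit (invented values)
mapped = [tone_map(v, max_lum=5e5) for v in scene]

# After mapping, shadow and highlight detail both fit in a single
# [0, 1] output range, with the brightness ordering preserved.
print([round(v, 3) for v in mapped])
```

Because the curve is logarithmic, each decade of scene luminance consumes a roughly equal slice of the output range, which is why shadow detail survives alongside sunlit highlights.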


Image illustrating the benefits of HDR processing in a tunnel scene.

A number of different parameters—HDR, color, tone mapping, sharpness filters, and more—can be tuned differently for artificial intelligence (i.e., sensing applications) or for human consumption (i.e., viewing applications), and our image processing pipeline has the flexibility required for both.
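As a hypothetical illustration of sensing-versus-viewing tuning, the sketch below applies two invented parameter profiles to the same linear pixel value. The profile names, parameters, and numbers are assumptions made for the example and do not describe Ambarella’s actual tuning tables.

```python
# Hypothetical tuning profiles: the same pipeline, finished two ways.
# All names and values here are invented for illustration.
PROFILES = {
    "viewing": {"gamma": 2.2, "sharpen": 0.8},  # perceptual encoding for displays
    "sensing": {"gamma": 1.0, "sharpen": 0.2},  # linear response for a CV network
}

def finish(linear_luma, profile):
    """Apply a profile's gamma curve to a linear luma value in [0, 1]."""
    p = PROFILES[profile]
    return linear_luma ** (1.0 / p["gamma"])

mid_grey = 0.18  # mid-grey in linear space
print(round(finish(mid_grey, "viewing"), 3))  # brightened for human viewing
print(finish(mid_grey, "sensing"))            # left linear for the network
```

The point of the sketch: a display wants gamma-encoded, sharpened output, while a network often prefers a more linear, lightly filtered signal, so one pipeline must support both finishes.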

Regardless of the use case, our focus on quality is the same. As we gain insights from our partnerships with some of the world’s leading ADAS software teams, we keep refining our approach, so that the automotive solutions built on our processors—for drive recording, ADAS, driver monitoring systems (DMS), intelligent electronic mirrors, and all levels of vehicular autonomy—benefit from robust, reliable, field-tested imaging technology honed over decades.

How is our image processing pipeline different?

  • 120-dB (or more) dynamic range capability
  • Outstanding local tone mapping algorithm maintains detail in darkness/shadows while preserving required contrast
  • Wide spectral response (including 940-nm near-IR processing) provides additional information when visible light is limited
    • Support for flexible pixel array schemes – e.g., RGGB, RCCB, RGBIR
  • Accurate color reproduction via a flexible 3D color-mapping table
  • Robust FIR-based denoising and sharpening filter design
  • Specific algorithms to detect and manage challenging environmental conditions:
    • Glare (e.g., from headlights)
    • Low light
    • Shadows
    • Fog
    • Lighting changes (e.g., emerging from a tunnel)
    • Dirty lenses
  • Low-latency performance
    • Our CVflow® AI accelerator allows image processing to occur in parallel with computer vision processing, without impacting speed
  • Designed for both human vision (viewing) and computer vision (sensing) applications—even applications requiring both simultaneously, such as ADAS + recording and electronic mirrors with blindspot detection
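As a toy illustration of the environmental-condition detection listed above (the glare case), the sketch below flags a frame when too many pixels are saturated, as happens with oncoming headlights at night. The function names and thresholds are assumptions for the example, not Ambarella’s algorithms.

```python
# Illustrative glare detection sketch (not Ambarella's algorithm):
# flag a frame when an unusually large fraction of its pixels is
# saturated, e.g. from oncoming headlights at night.

def glare_fraction(pixels, saturation=250):
    """Fraction of 8-bit luma samples at or above `saturation`."""
    hot = sum(1 for p in pixels if p >= saturation)
    return hot / len(pixels)

def has_glare(pixels, threshold=0.05):
    """True when the saturated fraction exceeds a tunable threshold."""
    return glare_fraction(pixels) >= threshold

night_scene = [12] * 900 + [255] * 100   # dark frame with a blown-out headlight blob
print(glare_fraction(night_scene), has_glare(night_scene))
```

A real pipeline would combine spatial clustering, temporal filtering, and exposure metadata; the fixed threshold here only conveys the idea of detecting a challenging condition so downstream processing can compensate.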

More information on our computer vision chips for automotive is available on our website.

For additional information regarding our image quality processing, please contact us.

Don (Wei-Yun) Cheng
Senior Software Engineer, Ambarella

