
Embedded Vision Q&A: Lattice Semiconductor’s Perspectives


The following Q&A session, with questions and answers both authored by Lattice Semiconductor, provides the company's perspective on various embedded vision topics and trends.

How have technological advancements accelerated the development of intelligent, vision-enabled devices at the Edge?

Many of the key components and tools crucial to the rapid deployment of embedded vision solutions have finally emerged. Now, designers can choose from a wide range of lower cost processors and programmable logic devices capable of delivering higher performance in a compact footprint, all while consuming minimal power. At the same time, thanks to the rapidly growing mobile market, designers are benefiting from the proliferation of cameras and sensors. In the meantime, improvements in software and hardware tools are helping to simplify development and shorten time to market.

Not so long ago, embedded vision technology was largely limited by component performance restrictions. Many of the key components in an intelligent vision solution, particularly the compute engine needed to process HD digital video in real-time, were simply not available at a reasonable cost. Those days are gone. Leveraging advances in mobile processors, the emergence of low power FPGAs and ASSPs, the widespread adoption of MIPI interface standards, and the proliferation of low cost cameras and sensors, designers have turned what was once a highly specialized technology into a mainstream component in smart factory automation, automotive electronics and consumer applications.

An intelligent vision system typically includes a high-performance compute engine capable of processing HD digital video streams in real time, high-capacity solid-state storage, smart cameras or sensors, and advanced analytic algorithms. Processors in these systems perform a wide range of functions, from image acquisition, lens correction and image pre-processing to segmentation, object analysis and heuristics. Designers of embedded vision systems employ a wide range of processor types, including general-purpose CPUs, Graphics Processing Units (GPUs), Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs) and Application Specific Standard Products (ASSPs) designed specifically for vision applications.
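The stages named above map naturally onto a simple software pipeline. The sketch below, written with OpenCV, is only an illustration of those stages; the camera index, calibration values and thresholds are placeholder assumptions, not values from any particular system.

```python
# Minimal sketch of the vision pipeline stages described above, using OpenCV.
# Calibration data and thresholds are illustrative placeholders.
import cv2
import numpy as np

def process_frame(frame, camera_matrix, dist_coeffs):
    # Lens correction: undo radial/tangential distortion from the optics.
    corrected = cv2.undistort(frame, camera_matrix, dist_coeffs)

    # Pre-processing: grayscale conversion and noise reduction.
    gray = cv2.cvtColor(corrected, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Segmentation: separate foreground objects from the background.
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Object analysis: extract simple per-object features (area, bounding box).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [{"area": cv2.contourArea(c), "bbox": cv2.boundingRect(c)} for c in contours]

if __name__ == "__main__":
    # Image acquisition from a camera (index 0 and calibration values are placeholders).
    cam_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist = np.zeros(5)
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print(process_frame(frame, cam_matrix, dist))
    cap.release()
```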

Each of these processor architectures offers distinct advantages and challenges. In many cases, designers combine multiple processor types in a heterogeneous computing environment; other times, the processors are integrated into a single component. Moreover, some processors use dedicated hardware to maximize performance on vision algorithms. Programmable platforms such as FPGAs offer designers both a highly parallel architecture for compute-intensive applications and the ability to serve other purposes, such as expanding I/O resources.

How have mobile technologies influenced market conditions for embedded vision systems?

Three recent developments promise to radically change market conditions for embedded vision systems. First, the rapid development of the mobile market has given embedded vision designers a wide selection of processors that deliver relatively high performance at low power. Second, the recent success of the Mobile Industry Processor Interface (MIPI) specifications defined by the MIPI Alliance gives designers effective alternatives: they can build innovative, cost-effective embedded vision solutions from compliant hardware and software components. Lastly, the proliferation of low-cost sensors and cameras for mobile applications has helped embedded vision system designers drive implementation up and cost down.

What are some of the new commercial applications for embedded vision?

Machine Vision

One of the most promising applications for embedded vision is in the industrial arena, in machine vision systems. Machine vision is one of the more mature and higher-volume applications for embedded vision, and as a result it is widely used in manufacturing and quality management. Typically, manufacturers in these applications use compact vision systems that combine one or more smart cameras with a processor module. And with the move toward Industry 4.0, designers are finding a seemingly endless array of new embedded vision applications to support interoperability and communication between machines, devices, sensors and people.

Automotive

The automotive market offers high growth potential for embedded vision applications. The introduction of Advanced Driver Assistance Systems (ADAS) and infotainment features was just the beginning. Sensors and multi-camera systems are being used in many new ways to improve the driver experience. One emerging automotive application is driver monitoring, which uses vision to track the driver's head and body movement to identify fatigue. Another is a vision system that monitors potential driver distractions, such as texting or eating, to increase vehicle operational safety.
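As a rough illustration of the idea behind camera-based driver monitoring, the sketch below detects the driver's face and eyes in each frame and flags possible fatigue when the eyes are not visible for several consecutive frames. This is only a toy approximation under stated assumptions; production systems rely on far more robust models, and the cascade classifiers, camera index and threshold used here are placeholders.

```python
# Toy illustration of camera-based driver fatigue detection: flag the driver as
# possibly fatigued when no eyes are detected for several consecutive frames.
# Cascade classifiers, camera index and threshold are illustrative assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

CLOSED_FRAMES_LIMIT = 15  # illustrative threshold (~0.5 s at 30 fps)
closed_frames = 0

cap = cv2.VideoCapture(0)  # in-cabin camera (placeholder index)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_visible = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) >= 1:
            eyes_visible = True
    closed_frames = 0 if eyes_visible else closed_frames + 1
    if closed_frames > CLOSED_FRAMES_LIMIT:
        print("Possible driver fatigue detected")
cap.release()
```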

But vision systems in cars can do far more than monitor what happens inside the vehicle. Starting in 2018, regulations require new cars to feature back-up cameras that help drivers see behind the vehicle. And new applications, such as lane departure warning systems, combine video with lane detection algorithms to estimate the position of the car. In addition, demand is building for features that read warning signs, mitigate collisions, offer blind spot detection and automatically handle parking and reverse-parking assistance. All of these features promise to make driving safer, and all of them require decisions to be made right at the edge.
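To make the lane departure example concrete, the sketch below shows one common form of the lane-detection step: edge detection followed by a Hough transform over the road region of the frame. The region-of-interest bounds and thresholds are illustrative assumptions rather than tuned values from any production system.

```python
# Minimal sketch of lane detection for a lane departure warning system:
# Canny edge detection plus a probabilistic Hough transform over the road region.
import cv2
import numpy as np

def detect_lane_lines(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Keep only the lower trapezoid of the image, where lane markings appear.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.6 * h)),
                         (int(0.4 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Hough transform returns candidate lane-line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]

# A warning system would then compare the detected line positions against the
# vehicle's lateral position over successive frames to estimate drift.
```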

Together, advances in vision and sensor systems for automobiles are laying the groundwork for the development of true autonomous driving capabilities. In 2018, for example, Cadillac will integrate a number of embedded vision subsystems into its CT6 sedan to deliver Super Cruise, one of the industry's first hands-free driving technologies. This new technology will make driving safer by continuously analyzing both the driver and the road: a precision LIDAR database provides details of the road, while advanced cameras, sensors and GPS react in real time to dynamic roadway conditions. Overall, automakers already anticipate that ADAS for modern vehicles will require forward-facing cameras for lane detection, pedestrian detection, traffic sign recognition and emergency braking. Side- and rear-facing cameras will be needed to support parking assistance, blind spot detection and cross-traffic alert functions.

One challenge that auto manufacturers face is the limited number and type of I/Os on existing electronic devices. Typically, today's processors feature two camera interfaces, yet many ADAS systems require as many as eight cameras to meet image quality requirements. Designers need a solution that gives them the co-processing resources to stitch together multiple video streams from multiple cameras, or to perform image processing functions such as white balance, fish-eye correction and defogging on the camera inputs, and then pass the data to the Application Processor (AP) in a single stream. For example, many auto manufacturers offer, as part of their ADAS systems, a bird's-eye view capability that gives the driver a live video view from 20 feet above the car looking down. The ADAS system accomplishes this by stitching together data from four or more cameras with a wide Field-of-View (FoV).
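The sketch below outlines the image-processing side of that bird's-eye view function in simplified software form: each fisheye camera image is undistorted and warped onto a common ground plane, then the views are composited into one top-down frame. The calibration data (intrinsics, distortion coefficients and homographies) are placeholders; real systems derive them from a per-vehicle calibration procedure, and the heavy lifting would typically run on an FPGA or AP rather than in Python.

```python
# Simplified sketch of surround-view ("bird's-eye") composition: undistort each
# fisheye camera image, warp it to a common ground plane, and composite the views.
# K (3x3 intrinsics), D (fisheye distortion) and H (ground-plane homography) are
# placeholders that would come from an offline calibration step.
import cv2
import numpy as np

OUT_SIZE = (800, 800)  # output top-down canvas, in pixels (assumed)

def to_ground_plane(img, K, D, H):
    # Fish-eye correction followed by a perspective warp onto the ground plane.
    undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)
    return cv2.warpPerspective(undistorted, H, OUT_SIZE)

def birds_eye_view(camera_frames, calibrations):
    # camera_frames: {"front": img, "rear": img, "left": img, "right": img}
    # calibrations:  {name: (K, D, H)} from calibration
    canvas = np.zeros((OUT_SIZE[1], OUT_SIZE[0], 3), dtype=np.uint8)
    for name, frame in camera_frames.items():
        K, D, H = calibrations[name]
        warped = to_ground_plane(frame, K, D, H)
        # Naive composite: copy non-black warped pixels onto the canvas.
        mask = warped.sum(axis=2) > 0
        canvas[mask] = warped[mask]
    return canvas
```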

Historically, designers have used a separate processor to drive each display. Now, designers can instead use a single FPGA to replace multiple processors: aggregate all the camera data, stitch the images together, perform pre- and post-processing, and send the resulting image to the system processor.

Consumer

Drones, Augmented Reality/Virtual Reality (AR/VR) and other consumer applications offer tremendous opportunities for developers of embedded vision solutions. Today, drone designers are finding it cheaper to synchronize six or more cameras on a drone to create a panoramic view than to build a mechanical solution that takes two cameras and rotates them through 180 degrees. Similarly, AR/VR designers are taking a single video stream and splitting its content across a dual display. They make use of low-cost, mobile-influenced technology that drives two MIPI DSI displays, one for each eye, delivering low-latency performance with minimal power consumption, enhanced depth perception and a more immersive user experience.
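As a very rough sketch of the dual-display split, the code below takes one frame and produces separate left-eye and right-eye buffers. A real headset pipeline renders each eye independently and applies per-lens distortion correction; the per-panel resolution and fixed pixel shift used here to approximate disparity are purely illustrative assumptions.

```python
# Simplified sketch of splitting one video frame into per-eye buffers for a
# dual-display (two MIPI DSI panels) headset. Resolution and disparity values
# are illustrative; real systems render and distortion-correct each eye separately.
import numpy as np

EYE_WIDTH, EYE_HEIGHT = 1080, 1200   # per-panel resolution (assumed)
DISPARITY_PX = 12                    # illustrative horizontal shift

def split_for_dual_display(frame: np.ndarray):
    # Assume the frame already matches the per-eye size; shift each copy
    # horizontally in opposite directions to approximate stereo disparity.
    left = np.roll(frame, DISPARITY_PX // 2, axis=1)
    right = np.roll(frame, -DISPARITY_PX // 2, axis=1)
    return left, right

if __name__ == "__main__":
    frame = np.zeros((EYE_HEIGHT, EYE_WIDTH, 3), dtype=np.uint8)
    left_eye, right_eye = split_for_dual_display(frame)
    # Each buffer would then be sent to its own MIPI DSI display.
    print(left_eye.shape, right_eye.shape)
```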

Deepak Boppana
Senior Director of Product and Segment Marketing, Lattice Semiconductor

