Some of you may remember that last September, the week before the premiere Embedded Vision Summit, BDTI and the Embedded Vision Alliance delivered a free five-part embedded vision webcast series in partnership with Design News Magazine. Two weeks from now (March 18-22, each day at 2PM ET/11AM PT), Design News and the Alliance will offer another five-day online tutorial series, entitled "Implementing Embedded Vision: Designing Systems That See & Understand Their Environments," covering a fresh set of topics and delivered by an expanded roster of Alliance member company representatives.
As before, attendance at the entire five-part series is encouraged, and advance registration is necessary; note that each session requires its own separate registration. See below for session details, along with relevant registration page links.
March 18: "What Can You Do With Embedded Vision?" (Jeff Bier, Embedded Vision Alliance)
Embedded vision is the incorporation of computer vision techniques into embedded systems, mobile devices, PCs, and the cloud. In this session, we’ll look at some of the coolest new applications of embedded vision, such as systems that read a person’s emotional state from facial images and systems that help prevent driving accidents by monitoring the road. We’ll touch on the algorithms that enable these capabilities and the types of processors used to run those algorithms.
March 19: "Interfacing to and Processing Data From Image Sensors" (José Alvarez, Xilinx)
Image sensors use varied hardware interfaces and output data formats, which can complicate system design and make it difficult to switch sensors. Their high output rates can also overwhelm data connections and processors. Programmable logic devices can solve both problems: their flexibility lets them accommodate otherwise incompatible interfaces, and they can accelerate common functions like color space conversion, image resizing, frame rate transformation, aspect ratio alteration, and edge detection.
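To make one of those "common functions" concrete, here is a minimal software reference (my own illustrative sketch, not material from the webcast) for RGB-to-YCbCr color space conversion using the BT.601 coefficients. On programmable logic, this per-pixel matrix multiply typically becomes a fixed-point streaming pipeline; the NumPy version below just shows the arithmetic involved.

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 RGB image to full-range YCbCr (BT.601)."""
    # Each output channel is a weighted sum of R, G, B plus an offset.
    m = np.array([[ 0.299,     0.587,     0.114    ],
                  [-0.168736, -0.331264,  0.5      ],
                  [ 0.5,      -0.418688, -0.081312 ]])
    offset = np.array([0.0, 128.0, 128.0])
    ycbcr = rgb.astype(np.float64) @ m.T + offset
    return np.clip(np.round(ycbcr), 0, 255).astype(np.uint8)
```

In hardware, the floating-point weights would be replaced by scaled integer coefficients so the whole conversion fits in a few multiply-accumulate stages per pixel.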
March 20: "Improving Image Understanding by Improving Image Quality" (Michael Tusch, Apical)
Cameras typically apply preprocessing algorithms to raw pixel data, compressing dynamic range to generate pleasing images. In embedded vision systems, image preprocessing is also important, but here the goal is to create not just an attractive image, but one that enhances the ability of downstream algorithms to extract meaning. We’ll discuss how appropriate image preprocessing can ease the work of image-understanding algorithms, and how those algorithms can in turn assist in preprocessing.
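As a toy illustration of this idea (my own sketch, not Apical's method), the simple power-law curve below compresses highlights in linear sensor data so that detail in dark regions, which an image-understanding algorithm may depend on, occupies more of the output range.

```python
import numpy as np

def gamma_compress(raw: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Apply power-law dynamic range compression to linear values in [0, 1].

    Small input values are boosted and highlights are compressed, which can
    preserve shadow detail for downstream analysis.
    """
    return np.clip(raw, 0.0, 1.0) ** (1.0 / gamma)
```

Real camera pipelines use far more sophisticated, locally adaptive tone mapping, but the trade-off is the same: the curve that looks best to a human is not necessarily the one that best serves a detection or recognition algorithm.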
March 21: "When to Use FPGAs to Accelerate Embedded Vision Applications" (Daniel Wilding, National Instruments)
FPGAs can accelerate some image processing algorithms, while reducing latency and jitter compared to using CPUs. We’ll compare CPUs and FPGAs as embedded vision processing engines, exploring which types of vision algorithms and applications can benefit from implementation on an FPGA, and which are better suited for a CPU or other type of processor. We’ll share benchmark results comparing FPGA and CPU implementations of vision applications, and introduce high-level programming of FPGAs.
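The algorithms that benefit most from FPGA implementation are typically fixed, regular, per-pixel kernel operations. As a hedged illustration (a CPU reference sketch of my own, not a benchmark from the session), here is a 3x3 Sobel gradient, exactly the kind of sliding-window computation that an FPGA can pipeline so that one pixel is produced per clock cycle.

```python
import numpy as np

def sobel_magnitude(gray: np.ndarray) -> np.ndarray:
    """Compute the Sobel gradient magnitude of a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)  # horizontal gradient
    ky = kx.T                                      # vertical gradient
    g = gray.astype(np.float64)
    h, w = g.shape
    out = np.zeros((h, w))
    # Slide the 3x3 window over the interior; an FPGA would instead stream
    # rows through line buffers and evaluate the window every clock.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = g[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(win * kx)
            gy = np.sum(win * ky)
            out[y, x] = np.hypot(gx, gy)
    return out
```

Data-dependent, branch-heavy algorithms (e.g., complex tracking logic) show the opposite profile and usually remain better suited to a CPU.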
March 22: "Developing Low-Cost, Low-Power, Small Vision Systems" (Simon Morris, CogniVue)
We’ll present a detailed case study of the development of a smart automotive rear-view camera system incorporating vision-based object detection and distance estimation. We’ll discuss the challenges associated with creating an embedded vision system that meets very demanding cost, size, power, and performance requirements. We’ll also present the lessons learned during algorithm, software, and system development, and how those lessons apply to other embedded vision applications.