Delivering the processing performance required by computer vision applications (typically tens of billions of operations per second) with cost and power consumption appropriate for mass-market products is a tough challenge. Just in the past few months, several new vision-specific co-processors, offered as licensable cores for incorporation into chips, have been announced, joining multiple previously introduced vision cores and ICs. In addition, many vision system developers use other types of parallel co-processors, such as GPUs, DSPs and FPGAs.
This wealth of vision processor options is great news for chip and system designers, because a range of application-optimized processor choices makes it more likely that you'll be able to find a processor that fits your specific needs. At the same time, the large number of diverse processor options, and the rapid pace at which new ones are introduced, can make it difficult to choose the best one. Several Embedded Vision Summit presentations and related events aim to simplify this task:
- Jeff Bier, Embedded Vision Alliance founder and President of BDTI, will present the talk "Choosing a Processor for Embedded Vision: Options and Trends." Bier will highlight strengths and weaknesses of different processor types, as well as illuminate important trends in processors and associated development tools.
- In "Understanding the Role of Integrated GPUs in Vision Applications," Roberto Mijat, Visual Computing Marketing Manager at ARM, will explore when it makes sense to utilize the GPU as a coprocessor for computer vision algorithms, what to expect from the GPU, and other key considerations.
- Chris Rowen, Fellow at Cadence Design Systems, will speak about "Designing and Selecting Instruction Sets for Vision." Rowen's talk will serve as a step-by-step tutorial on how to dissect vision applications, extract key requirements, and determine processor instruction set and memory organization priorities.
- And two of the workshops taking place in conjunction with the Summit, both held on the afternoon of Monday, May 11, and free of charge, will also focus on processor topics. "Enabling Computer Vision on ARM" will encompass presentations from multiple computer vision experts, who will share their experiences working with ARM-based systems across a variety of real use cases. And "Using the DesignWare Embedded Vision Processor for Low-Power, Low-Cost Video Surveillance and Object Detection Applications" will educate you on the features of Synopsys’ new embedded vision processor core, in the context of several detailed case studies.
In addition to these vision processor-focused presentations and workshops, the Embedded Vision Summit includes 21 other presentations by vision technology, application and market experts, along with keynote talks from Mike Aldred of Dyson and Dr. Ren Wu of Baidu, two additional workshops, and more than thirty demos by leading vision technology suppliers. The Embedded Vision Summit takes place on May 12, 2015 at the Santa Clara (California) Convention Center. Half- and full-day workshops will be presented on May 11 and 13. Register today, while space is still available!