Dear Colleague,

I'm admittedly feeling pretty good right now. That's because I've just reviewed the attendee feedback from the late-March Embedded Vision Alliance Summit, and the strong ratings and positive supporting comments confirm my gut feeling that it was an extremely valuable event for Alliance members and press-and-analyst attendees alike. To the latter point, in the previous (April 3) newsletter I linked to Summit coverage from Rick Merritt at EE Times and Dean Takahashi at VentureBeat. Since then, Takahashi has published a second writeup on the event ("Military Wants Better Machine Vision for Smarter Robot Cameras") and has been joined by Kevin Morris of Electronic Engineering Journal ("Envisioning the Future: Embedded Vision Lunges Forward"). Based on conversations with other analysts and technology journalists, I anticipate more coverage to come; keep an eye on the Embedded Vision Alliance's Facebook and LinkedIn pages and Twitter feed for alerts when the material is published.

Putting together a Summit is a lot of work, and I'm admittedly tempted after each one to throttle back and coast for a bit. But if anything, the pace has accelerated in the past few weeks. In the last newsletter, I mentioned that Analog Devices had chosen the Summit as a forum both to introduce a series of embedded-vision-tailored Blackfin SoCs and to upgrade its Alliance membership to the premier Platinum tier. The day before the Summit, I spoke with Colin Duggan, ADI's director of marketing, about these and other embedded vision topics, and you'll find a link to the video of our interview below. Newly published to the site, too, is a video demonstration by Navanee Sundaramoorthy, Xilinx product manager, of the compelling capabilities of the Zynq-7000 Extensible Processing Platform (containing both a dual-core ARM Cortex-A9 CPU and FPGA fabric) as an embedded vision processor.

You'll find plenty of new written content on the site, too. Two Texas Instruments engineers have, for example, developed a detailed white paper on sensor, processor and software alternatives for implementing rich gesture interfaces. Michael Tusch, founder and CEO of Alliance member Apical Limited, has just published the third article in his series on image quality optimization, this one discussing various HDR (high dynamic range) sensor and algorithm techniques. I have, as usual, been writing regular installments covering breaking news in the embedded vision industry. And there's much more compelling content to come in the near future.

The video of Jim Donlon's (DARPA) Summit keynote, "The Way Ahead for Visual Intelligence," is nearly complete, for example, and it will shortly be followed by videos of other events from that day:

  • My introductory embedded vision presentation to the press and analyst attendees
  • The panel discussion, "Beyond Kinect: From Research to Revenue," moderated by Embedded Vision Alliance founder Jeff Bier, with Donlon and representatives from Analog Devices (Duggan), Texas Instruments (Bruce Flinchbaugh) and Xilinx (Bruce Kleinman) participating
  • The market trends presentation "Embedded Vision Markets in 2012 and Beyond: Established, Developing and Emerging" by Tom Hackenberg, semiconductor research manager at IMS Research
  • The technology trends presentation, "Image Sensor Technologies for Embedded Vision," by BDTI senior engineers Eric Gregori and Shehrzad Qureshi
  • Member product announcements by Analog Devices, Apical Limited, Omek Interactive, Texas Instruments and Xilinx, and
  • An online slideshow of various snapshots taken throughout the day

Keep an eye on the Embedded Vision Alliance website for this and other upcoming material, and thanks for your support of the Alliance and your interest and involvement in embedded vision technologies, products and applications. As always, I welcome your feedback on how the Alliance, its website and this newsletter can do a better job of serving your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance


Embedded Vision Alliance Conversation with Colin Duggan of Analog Devices
Brian Dipert interviews Colin Duggan, Analog Devices Director of Marketing. Brian and Colin discuss Analog Devices' newly announced, embedded-vision-optimized Blackfin SoCs and the company's recent Embedded Vision Alliance membership upgrade to the Platinum tier. More generally, Brian and Colin talk about Analog Devices' perspectives on, and plans for addressing the evolving requirements of, various embedded vision applications.

HD Video Processing using Xilinx's Zynq-7000 EPP for Intelligent Video Systems
Rapidly emerging applications in the area of embedded vision require the ability to process one or more streams of HD video in real time at high frame rates. In this demonstration, Xilinx's Navanee Sundaramoorthy, Product Manager for Processing Platforms, shows how you can use the Zynq-7000 Extensible Processing Platform, with its dual ARM Cortex-A9 processors and programmable logic, for such applications. The programmable logic in the Zynq Z7020 brings 1080p60 video in and out of the device, as well as performing high-data-rate video processing.
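To get a feel for the data rates involved, here's a back-of-the-envelope calculation for a single 1080p60 stream like the one in the Xilinx demo. The 2-bytes-per-pixel figure (YUV 4:2:2) is an illustrative assumption, not a number from the demonstration itself:

```python
# Rough bandwidth estimate for one 1080p60 video stream.
# BYTES_PER_PIXEL = 2 assumes YUV 4:2:2 (an illustrative assumption).
WIDTH, HEIGHT, FPS = 1920, 1080, 60
BYTES_PER_PIXEL = 2

pixels_per_second = WIDTH * HEIGHT * FPS
bytes_per_second = pixels_per_second * BYTES_PER_PIXEL

print(f"{pixels_per_second / 1e6:.1f} Mpixels/s")       # ~124.4 Mpixels/s
print(f"{bytes_per_second / 1e6:.1f} MB/s per stream")  # ~248.8 MB/s
```

Roughly a quarter gigabyte per second per stream, before any processing is applied, which is why pushing the pixel pipeline into FPGA fabric rather than through the CPU cores is attractive here.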

More Videos


HDR Sensors for Embedded Vision
At the late-March 2012 Embedded Vision Alliance Summit, Eric Gregori and Shehrzad Qureshi from BDTI presented a helpful overview of CCD and CMOS image sensor technology. This article extends the topic to cover so-called HDR (high dynamic range) or WDR (wide dynamic range) sensors. HDR and WDR mean the same thing; it's just a matter of how you use each axis of your dynamic range graph. This is an interesting topic because many embedded vision applications require equivalent functionality in all real-scene environments. Standard CMOS and CCD sensors achieve up to ~72 dB dynamic range. This result is sufficient for the great majority of scene conditions. However, some commonly encountered scenes exist which overwhelm such sensors. More
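For context on the ~72 dB figure: sensor dynamic range is conventionally expressed as 20·log₁₀ of the ratio between the brightest and darkest distinguishable signals. The helper function and example ratios below are illustrative, not drawn from the article:

```python
import math

def dynamic_range_db(max_signal, noise_floor):
    """Dynamic range in dB from the ratio of brightest to darkest
    distinguishable signal: DR = 20 * log10(max / min)."""
    return 20.0 * math.log10(max_signal / noise_floor)

# A ~4000:1 signal ratio corresponds to the ~72 dB of a standard sensor.
print(round(dynamic_range_db(4000, 1), 1))       # ~72.0 dB
# A sunlit outdoor scene with deep shadows can span 1,000,000:1.
print(round(dynamic_range_db(1_000_000, 1), 1))  # 120.0 dB
```

The gap between those two numbers is what HDR/WDR sensor architectures and algorithms set out to close.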

Gesture Recognition: Enabling Natural Interactions with Electronics
Over the past few years, gesture recognition has made its debut in entertainment and gaming markets. Now, gesture recognition is becoming a commonplace technology, enabling humans and machines to interface more easily in the home, the automobile and at work. Imagine a person sitting on a couch, controlling the lights and TV with a wave of his hand. These and other capabilities are being realized via gesture recognition technologies, which enable natural interactions with the electronics that surround us. Gesture recognition has long been researched with 2D vision, but with the advent of 3D sensor technology, its applications are now more diverse, spanning a variety of markets. More

More Articles


Identity, Age and Emotion: Facial Recognition Garners Abundant Promotion

Samsung's Message To The Worried: Don't Be So Paranoid

Sit Up Straight! A Webcam Turns Your Computer Display into a Parental Surrogate

Facial Recognition Unlock: An iOS Jailbreak, and Samsung's Photo Block

The Analog Joystick: An Image Sensor-Plus-Accessory Combo Creates One that's Slick

More News




1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411