In Embedded Vision, Sensors Rule: Part Two

By Vin Ratford
Executive Director, Embedded Vision Alliance

This blog post was originally published at EE Times' Industrial Control Design Line. It is reprinted here with the permission of EE Times.

This week, Vin Ratford builds on his August 16 guest blog post, “Sensors Rule,” and highlights how sensors have an impact on the overall architecture and capabilities of vision systems. Vin has a keen eye for technology trends, having spent over 30 years in the electronics industry, most recently as senior VP of Xilinx. — Jeff

In my last embedded vision blog post, I discussed how the choice of image sensor technology is a key determinant of a vision system’s capabilities, and how quickly sensor technology is advancing. Lately I’ve realized that new image sensor capabilities (such as 3D, higher resolution, faster frame rates, and increased dynamic range) are also having a dramatic impact on system architecture and algorithms. This was driven home to me by a talk given by Professor Masatoshi Ishikawa of the University of Tokyo during the July 2013 Embedded Vision Alliance Member meeting.

Professor Ishikawa’s talk, “High Speed Vision and Its Applications,” highlighted some fascinating examples of how vision systems can exceed human capabilities. According to Professor Ishikawa, the key to more-capable vision systems is a high-frame-rate sensor and a parallel processing architecture with sufficient interconnect and memory bandwidth (I call it the plumbing) to support algorithms for real-time decision making, and to drive actuators and control systems to take action.
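
To put rough numbers on that plumbing, here is a minimal back-of-the-envelope sketch in Python; the 1-megapixel, 8-bit sensor and the frame rates are illustrative assumptions of mine, not figures from Professor Ishikawa's talk.

# Back-of-the-envelope raw sensor bandwidth (illustrative assumptions).

def raw_bandwidth_gbps(width_px, height_px, bits_per_px, fps):
    """Raw pixel bandwidth in gigabits per second."""
    return width_px * height_px * bits_per_px * fps / 1e9

# A hypothetical 1-megapixel, 8-bit monochrome sensor:
for fps in (60, 1000):
    print(f"{fps:>5} fps -> {raw_bandwidth_gbps(1024, 1024, 8, fps):.2f} Gbit/s raw")

# At 1,000 fps the raw stream is roughly 8.4 Gbit/s, about 17x the 60 fps
# case, which is why the interconnect and memory subsystem must scale up
# along with the sensor.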

Professor Ishikawa’s presentation covered a lot of ground, and I won’t try to recap all of it here (for that, you can view the video of his presentation). There were two key takeaways for me. First, with a high-speed image sensor (e.g., 1,000 fps), algorithms can be much simpler than those required with a conventional (e.g., 60 fps) sensor. Second, a high-speed vision system can achieve real-time response exceeding that of humans.
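
One way to see why the higher frame rate helps: the faster the capture, the less a moving object travels between consecutive frames, so a tracker can get away with a simple local window search instead of wide-area search or motion prediction. The sketch below uses made-up numbers (ball speed, field of view, resolution) purely for illustration.

# Why high frame rates simplify tracking (illustrative assumptions).
# A ball crossing a 2 m field of view at 40 m/s, imaged at 1024 pixels wide.

BALL_SPEED_M_PER_S = 40.0
FIELD_OF_VIEW_M = 2.0
FRAME_WIDTH_PX = 1024

def displacement_px_per_frame(fps):
    """Pixels the ball moves between consecutive frames."""
    return (BALL_SPEED_M_PER_S / fps) * FRAME_WIDTH_PX / FIELD_OF_VIEW_M

for fps in (60, 1000):
    print(f"{fps:>5} fps: ~{displacement_px_per_frame(fps):.0f} px between frames")

# At 60 fps the ball jumps ~341 px per frame, which demands wide search or
# prediction; at 1,000 fps it moves only ~20 px, so searching a small window
# around the last known position is enough.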

While advances in robotics and vision systems have been dramatic in recent years, often mimicking human capabilities, the professor has his sights set on exceeding what humans can achieve, stating that “robots should be more intelligent than humans and operate too fast to see.”

The best examples of this for me are the batting and throwing robots in Professor Ishikawa’s lab. The batting robot demonstrates abilities that could someday place it on the top 10 hitters list of all time, higher than Ted Williams or Babe Ruth.

For those of you who don’t follow baseball, another dramatic example is Professor Ishikawa’s Rock-Paper-Scissors robot (the YouTube video has 3.8 million views).

Clearly, innovation in sensors is driving innovation in system and processor architectures, memory, and algorithms, and vice versa. This “virtuous circle” will not only enable robots to exceed human capabilities, but will also enable many other types of vision-enabled systems, such as those for automotive driver assistance, to achieve new levels of functionality and performance.

The Embedded Vision Alliance was formed to help product developers effectively harness vision capabilities to build smarter products. The Embedded Vision Alliance’s upcoming conference, the Embedded Vision Summit, on October 2, 2013 in the Boston area, will include multiple presentations on 3D-vision system design challenges and techniques, as well as demos and presentations on a variety of other practical techniques for implementing your own super-human capabilities. I hope you will join me there and help continue the conversation.
