The Camera is the Ultimate Link Between the Real World and Computers


This blog post was originally published in the mid-November 2017 edition of BDTI's InsideDSP newsletter. It is reprinted here with the permission of BDTI.

As a kid, I was fascinated with electronics – especially digital electronics. The idea that one could build a computing machine out of simple logic gates was a revelation, and designing such things was thrilling. But as powerful and flexible as digital computers are, we live in an analog world. Hence, analog-to-digital converters play a critical role.

When I first encountered them, I found A/D converters exotic – even magical. With them, one could not only construct a computer, but also enable that computer to gather and process data from the physical world. And the physical world is overflowing with data – from simple data like temperature and pressure, to rich data like audio and radio signals.

As with many things electronic, analog-to-digital converters started out bulky, expensive and power hungry, but evolved over many generations into the opposite: tiny, cheap and energy efficient. Meanwhile, improvements in microprocessors have made it practical to embed processors in all sorts of cost- and power-sensitive systems. But it's A/D converters that enable these processors to perform useful tasks in cars, appliances, fitness monitors, hearing aids and countless other systems. These days, I suspect that most embedded microprocessors are connected to one or more A/D converters, monitoring some aspect of the world around them.
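
To make that pattern concrete, here is a minimal sketch (not from the original article) of an embedded processor sampling an A/D converter, written in MicroPython. The board (a Raspberry Pi Pico), the pin number and the attached sensor are all illustrative assumptions:

    # Minimal sketch: an embedded processor periodically sampling an
    # on-chip A/D converter. Assumes MicroPython on a Raspberry Pi Pico
    # with some analog sensor (e.g. a thermistor divider) on ADC pin 26.
    from machine import ADC, Pin
    import time

    adc = ADC(Pin(26))                 # one of the Pico's on-chip ADC inputs

    while True:
        raw = adc.read_u16()           # 16-bit reading, 0..65535
        volts = raw * 3.3 / 65535      # scale to volts (3.3 V reference)
        print("sensor voltage:", volts)
        time.sleep(1)                  # sample once per second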

What does this have to do with cameras?

Cameras, like A/D converters, provide a link between the physical world and the digital world.

And the information gathered by cameras goes well beyond that gathered by A/D converters. For example, a camera (coupled with a powerful processor and sophisticated machine learning algorithms) can detect when a person is present, determine their gender and age, discern their emotions from facial expressions, track their gaze and even read their lips. (Try that with an A/D converter!)
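
As a rough illustration of the first of those capabilities, here is a minimal sketch of detecting whether a person is present in a camera frame, using OpenCV's stock Haar-cascade face detector. The package (opencv-python), camera index and model file are assumptions for the sketch, not anything specified in the article:

    # Minimal sketch: "is a person present?" from a single camera frame,
    # using OpenCV's bundled Haar-cascade face detector.
    # Assumes the opencv-python package and a default webcam (device 0).
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)          # open the default camera
    ok, frame = cap.read()             # grab one frame
    cap.release()

    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        print("Person present:", len(faces) > 0, "-", len(faces), "face(s) found")

A production system would more likely use a deep learning detector, as discussed below, but the structure is the same: capture a frame, run a detector, act on the result.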

If the idea of the camera as the successor to the A/D converter sounds odd, that's only because until very recently, cameras were in the bulky, expensive and power-hungry stage of their development. But these days, cameras are rapidly becoming tiny, cheap and energy efficient. Fortuitously, this is happening at the same time that visual perception algorithms are attaining human-like accuracy, thanks to deep learning techniques. And, critically, processors with sufficient performance to run these algorithms are improving in cost and energy efficiency by orders of magnitude.

So, just as it has become typical for embedded microprocessors to be connected to one or more A/D converters, soon it will be typical for these processors to be connected to one or more cameras. Of course, this is already evident in the roughly 1.5 billion mobile phones shipped each year. Other high-volume examples include networked surveillance cameras (approximately 100 million per year) and automotive safety systems (analysts forecast that ADAS systems will incorporate 100 million cameras annually by 2020).

Thanks to improvements in cameras, algorithms and processors, the potential of computer vision is finally coming to fruition. Computer vision can now be implemented in almost any system, whether to enable autonomy, safety, security, ease-of-use or other capabilities. Arguably, we’re at a point where the main limitation on what we can do with computer vision is what we can imagine.

One of the best ways to spark ideas for using computer vision to enable new products (or to add valuable capabilities to existing ones) is to study what others have done. The best place to do that is the Embedded Vision Summit, taking place May 22-24, 2018 in Santa Clara, California. Over the past five years, the Summit has become the preeminent event for people building products that incorporate vision. We're now assembling the Summit's unique program, which will feature successful vision system developers sharing their challenges, techniques and lessons learned. Mark your calendar and plan to be there! Registration is now open on the Summit website.

Jeff Bier
Co-Founder and President, BDTI
Founder, Embedded Vision Alliance
