
Five Computer Vision Trends That Will Change Our World


The Embedded Vision Summit, our yearly conference on practical computer vision, is coming up next week, and I’ve been reflecting on what’s happened in the last year.  I wanted to take a moment to jot down five key trends in visual computing that I think are going to have a big impact on industry and society over the next few years.

I found these fascinating to think about, and I hope you do too.

Trend #1: A flood of image data

Digital cameras are cheap, high quality, and rapidly becoming ubiquitous — there are at least two on every smartphone, for example, and we’ll be seeing them pop up in more and more places.  Multiply the large amount of data per image by the number of images each camera produces and by the ever-growing number of cameras, and it becomes clear that image sensor data will dwarf all other sensor data. (Thanks to Chris Rowen of Cognite Ventures for this insight!)
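To make that multiplication concrete, here is a quick back-of-the-envelope calculation in Python. The per-camera numbers (a 1080p sensor at 30 frames per second) and the ten-billion-camera count are purely illustrative assumptions, not measurements.

```python
# Back-of-the-envelope estimate of raw image sensor data.
# All numbers below are illustrative assumptions, not measurements.
bytes_per_pixel = 1.5            # assume 8-bit YUV 4:2:0
pixels_per_frame = 1920 * 1080   # assume a 1080p sensor
frames_per_second = 30
seconds_per_day = 86_400
cameras = 10e9                   # assume ten billion cameras in the world

bytes_per_camera_per_day = (bytes_per_pixel * pixels_per_frame
                            * frames_per_second * seconds_per_day)
total_bytes_per_day = bytes_per_camera_per_day * cameras

print(f"Per camera:  {bytes_per_camera_per_day / 1e12:.1f} TB of raw pixels per day")
print(f"All cameras: {total_bytes_per_day / 1e21:.1f} ZB of raw pixels per day")
```

Even with aggressive compression, volumes like these are why so much of the processing has to happen close to the sensor rather than in a data center.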

Trend #2: Deep learning

You’re undoubtedly sick of hearing about how “deep learning changes everything,” but in the case of computer vision, it’s actually true.  Deep learning algorithms have allowed computers to recognize and detect objects in images with unprecedented accuracy, and to even understand the relationships between them (for example, being able to look at an image and conclude not just “there’s a ball and some kids and a field” but rather “this is an image of children playing soccer”). Combine this with the wealth of camera data in trend #1 above and you have something big.
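To see how accessible this has become, the sketch below runs an object detector pretrained on the COCO dataset over a single photo. It assumes PyTorch and torchvision 0.13 or later are installed, the filename is a hypothetical placeholder, and it produces labeled boxes rather than the higher-level scene description (“children playing soccer”) that newer models aim for.

```python
# A minimal object detection sketch using a COCO-pretrained model from torchvision.
# Assumes PyTorch + torchvision 0.13+; "kids_soccer.jpg" is a hypothetical placeholder.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # pretrained on the COCO dataset
model.eval()

image = convert_image_dtype(read_image("kids_soccer.jpg"), torch.float)  # CHW, [0, 1]

with torch.no_grad():
    detections = model([image])[0]   # dict of boxes, class labels, and confidence scores

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:
        print(f"class {int(label)} at {[round(v) for v in box.tolist()]} (score {score:.2f})")
```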

Trend #3: 3D sensing

Time-of-flight and structured-light sensors, along with stereo vision, mean that computers can now not just see the world around them but actually understand its structure and scale — how far away something is, how big it is, and so on.  In the not-too-distant future, every phone will basically be a 3D scanner.  And that leads us to…
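As a taste of how depth falls out of two ordinary cameras, here is a minimal stereo sketch using OpenCV’s block matcher. The image filenames and calibration numbers are assumptions for illustration; a real system would use properly calibrated, rectified cameras.

```python
# A minimal stereo depth sketch with OpenCV block matching.
# Assumes "left.png"/"right.png" are rectified views; calibration values are illustrative.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Disparity: how far each pixel shifts between the two views (larger shift = closer object).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # fixed-point -> pixels

# With a calibrated rig, depth = focal_length_px * baseline_m / disparity.
focal_length_px, baseline_m = 700.0, 0.12    # assumed calibration values
depth_m = (focal_length_px * baseline_m) / np.maximum(disparity, 0.1)

h, w = depth_m.shape
print(f"Estimated depth at the image center: {depth_m[h // 2, w // 2]:.2f} m")
```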

Trend #4: SLAM

SLAM stands for “simultaneous localization and mapping.” You can think of it as a family of algorithms that enable a device to construct a three-dimensional map of the world and, at the same time, determine where the device is positioned within that map.  This enables robots, self-driving cars, and other devices to navigate and make sense of their surroundings.  As I wrote about in a recent Impulse Response column, I believe SLAM is set to be the “next GPS.”
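To make the idea less abstract, here is a deliberately tiny 2D sketch of the loop that SLAM systems formalize: predict the pose from odometry, correct it against landmarks already in the map, and add newly seen landmarks. Everything here (the landmark names, the averaging-based correction) is a toy illustration; real SLAM systems use probabilistic filters or graph optimization over thousands of features.

```python
# A toy 2D sketch of the SLAM loop: predict pose from odometry, correct against known
# landmarks, and add new landmarks to the map. Illustrative only; real systems use
# probabilistic filters or graph optimization.
import numpy as np

pose = np.array([0.0, 0.0])   # estimated (x, y); heading omitted to keep the toy simple
landmark_map = {}             # landmark id -> estimated (x, y) position

def slam_step(odometry, observations):
    """odometry: reported (dx, dy) motion; observations: {id: (dx, dy) relative to robot}."""
    global pose
    pose = pose + np.asarray(odometry)                     # 1. predict from (noisy) odometry

    corrections = [landmark_map[lid] - (pose + np.asarray(rel))
                   for lid, rel in observations.items() if lid in landmark_map]
    if corrections:                                        # 2. localize against mapped landmarks
        pose = pose + np.mean(corrections, axis=0)

    for lid, rel in observations.items():                  # 3. map any landmarks seen for
        if lid not in landmark_map:                        #    the first time
            landmark_map[lid] = pose + np.asarray(rel)

# Move 1 m, mapping a "door" landmark 2 m ahead; then move again with odometry that
# overshoots by 0.1 m, and watch the landmark observation pull the pose estimate back.
slam_step((1.0, 0.0), {"door": (2.0, 0.0)})
slam_step((1.1, 0.0), {"door": (1.0, 0.0)})
print("pose:", pose, " map:", landmark_map)
```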

Trend #5: The edge revolution

It’s amazing enough that we can do any of the above on big computers up in the cloud.  But what’s really amazing is that we’re at the cusp of a revolution in which all of these computer vision tasks can be done on small computers at the “edge” — that is, on devices ranging from wearables and mobile phones to embedded processors in cars.  For the “why” behind this shift, I’m giving a talk next week at the Summit called “1000x in Three Years: How Embedded Vision is Transitioning from Exotic to Everyday.”  But forget about the “why” for a second; let’s concentrate on the “what.”  Imagine a world in which toys recognize their owners and know where they are in the household.  (Maybe someday they will even put themselves away!)  Or a home security camera that uploads only images of unknown people, thus protecting both your security and your privacy.  All of these require processing at the edge — that is, in the device itself.
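As a flavor of what that looks like in code, here is a minimal sketch of edge-side filtering: every frame is analyzed on the device, and only interesting frames ever leave it. It uses OpenCV’s built-in HOG person detector for simplicity (a real product would add face recognition to distinguish known from unknown people), and `upload()` is a hypothetical stand-in for whatever cloud or notification service the product uses.

```python
# A minimal edge filtering sketch: process every frame on the device, and only send
# frames that contain a person. upload() is a hypothetical placeholder.
import cv2

def upload(frame):
    cv2.imwrite("alert.jpg", frame)   # stand-in for a real upload or push notification

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

camera = cv2.VideoCapture(0)          # the device's own camera
while True:
    ok, frame = camera.read()
    if not ok:
        break
    people, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(people) > 0:               # raw video never leaves the device; only alerts do
        upload(frame)
camera.release()
```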


As I reflect on these trends and the progress the industry has made over the last year in driving computer vision into the mainstream, I’m left with one thought: the next few years are going to be very interesting indeed!

If you find this stuff as fascinating as I do, please join me at the Embedded Vision Summit next week (May 1-3) in Santa Clara, California.  The event will host 90 speakers in 5 tracks over 3 days, all focused on practical, deployable computer vision.  In addition to technical and business tracks, we have a new fundamentals track focused on getting you quickly up to speed in visual computing.  If you’re interested in start-ups, the Vision Tank start-up competition and our Entrepreneurs’ Panel should be fascinating.  And then there’s the Vision Technology Showcase, where 50 exhibitors will give more than 100 demos of the latest technology for adding vision to products — along with a number of new product introductions.  I can’t wait!

By Jeff Bier
Founder, Embedded Vision Alliance

