
Jay Yagnik, Head of Machine Perception Research at Google, presents the "Rapid Evolution and Future of Machine Perception" tutorial at the May 2017 Embedded Vision Summit.

With the advent of deep learning, our ability to build systems that derive insights from perceptual data has increased dramatically. Perceptual data dwarfs almost all other data sources in both its richness and its sheer size, posing unique challenges that have forced learning systems to evolve. This technical progress has enabled learning systems to be adopted in mainstream consumer products across the industry, such as Google Photos and YouTube, where they have clearly proven their usefulness.

In this talk, Yagnik reviews the key ingredients of recent progress in machine perception. He also explores the substantial gaps that still need to be filled, and highlights some emerging applications that illustrate the potential future impact of this technology.


