Deep Learning Takes Center Stage at the Embedded Vision Summit, May 2-4


As the recent 4-to-1 drubbing of Go world champion Lee Sedol by Google's DeepMind AlphaGo program signifies, artificial intelligence has entered mainstream awareness and adoption. It's enabled by the evolution of traditional neural network approaches, the steadily increasing processing "muscle" of CPUs (aided by acceleration via FPGAs, GPUs and dedicated co-processors), and the steadily decreasing cost of system memory. Among the most compelling uses for so-called "deep learning" techniques such as convolutional neural networks is image analysis and object identification, where the approach offers notable advantages over conventional computer vision algorithms.

Traditional rule-based object recognition algorithms require the mathematical modeling and algorithmic coding of a software function capable of reliably identifying a particular object within a still image or video frame. Unfortunately, even if such an approach can be made reasonably reliable under ideal conditions, the quality of results frequently falls apart when the camera's viewing angle is non-ideal, for example, or under degraded lighting conditions.
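This brittleness is easy to demonstrate with a toy example. The sketch below (illustrative only, not from any production vision library) implements naive sum-of-squared-differences template matching in NumPy; the `ssd_match` helper and the toy scene are inventions for this illustration. A template that matches its target perfectly under ideal conditions scores worse as soon as the scene's brightness changes, even though the object itself is unchanged:

```python
import numpy as np

def ssd_match(image, template):
    """Slide `template` over `image` and return the sum-of-squared-
    differences score at the best (lowest-SSD) position; 0 means a
    pixel-perfect match somewhere in the image."""
    ih, iw = image.shape
    th, tw = template.shape
    best = np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            best = min(best, float(np.sum((patch - template) ** 2)))
    return best

# A toy 8x8 "scene" containing a bright 3x3 square as the object.
scene = np.zeros((8, 8))
scene[2:5, 3:6] = 1.0
template = np.ones((3, 3))

ideal = ssd_match(scene, template)          # perfect match: score 0.0
dimmed = ssd_match(scene * 0.5, template)   # same object, half brightness

print(ideal, dimmed)  # -> 0.0 2.25
```

The object is still present in the dimmed scene, yet the raw-pixel rule no longer recognizes it as a perfect match; hand-tuning thresholds to tolerate every such variation is exactly the effort that learned approaches sidestep.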

Conversely, a convolutional neural network tuned to identify a particular object or set of objects self-trains in response to being "fed" a set of reference images. The more examples of the object to be identified, the more accurate the results; variations in lighting, color and perspective can even be computer-generated. And re-tuning the network to identify different objects involves only discarding the existing neural array "weights" and re-training the network with a different set of reference images, versus re-coding a new set of algorithms from scratch.
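The "computer-generated variations" mentioned above are commonly called data augmentation. The following minimal sketch (names and parameter ranges are this article's own illustration, not any particular framework's API) expands one reference image into several training examples by randomly perturbing brightness, color balance, and orientation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, rng):
    """Return a randomly perturbed copy of `image` (H x W x 3 floats in
    [0, 1]): random brightness scaling, a per-channel color cast, and an
    occasional horizontal flip -- crude stand-ins for variation in
    lighting, color, and viewpoint."""
    out = image * rng.uniform(0.6, 1.4)                 # lighting
    out = out + rng.uniform(-0.1, 0.1, size=(1, 1, 3))  # color cast
    if rng.random() < 0.5:                              # viewpoint (mirror)
        out = out[:, ::-1, :]
    return np.clip(out, 0.0, 1.0)

base = rng.uniform(0.0, 1.0, size=(4, 4, 3))  # one toy reference image
training_set = [augment(base, rng) for _ in range(8)]
print(len(training_set), training_set[0].shape)
```

Feeding such synthetically varied examples to the network during training is what makes the learned weights robust to conditions that would defeat a hand-coded rule.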

Reflecting how quickly and pervasively deep learning has been adopted by the computer vision community, it's getting a dedicated day at the upcoming Embedded Vision Summit. The May 2nd Deep Learning Day begins with an insightful keynote, "Large-Scale Deep Learning for Building Intelligent Computer Systems," from Jeff Dean, Senior Fellow at Google Research. Two parallel presentation tracks are then available to you:

  • A technical tutorial, focusing on designing, implementing and training CNNs, and
  • A set of business insight presentations covering deep-learning-enabled computer vision applications and markets

Also available is a half-day hands-on Caffe/CNN tutorial, which will teach you how to use the Caffe deep learning framework, delivered by the primary developers at the Berkeley Vision and Learning Center.

Note, too, that deep-learning-related content will extend beyond May 2 to the remainder of the conference, with a number of talks on deep learning implementation techniques and enabling technologies. The Embedded Vision Summit, an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, takes place in Santa Clara, California May 2-4, 2016. Register now, as space is limited and seats are filling up!

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411