
Embedded Vision Insights: March 29, 2016 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

The Embedded Vision Summit – the only conference focused entirely on developing products using computer vision – is just a month away, May 2-4 in Santa Clara, California, and the conference program is nearly complete. Head to the Summit area of the Alliance website and check out all of the presentations and workshops listed there, including the Deep Learning Day on May 2, keynotes from Google and NASA on May 2 and 3, and workshops on May 4. And then register without further delay for the Summit, as space is limited and seats are filling up!

While you're on the Alliance website, make sure to check out all the other great new content there. It includes the initial entries in a weekly series of columns published in partnership with Vision Systems Design Magazine, covering a variety of vision processing topics. These first two articles provide an overview of embedded vision and the Alliance, along with a discussion of deep learning for vision processing. Also recently published is a blog post from Alliance founder Jeff Bier, discussing the versatility of image sensors versus other sensor types.

Newly published videos on the Alliance website include Cadence's demos of face detection, people detection and traffic sign recognition; Eutecus' demo of stereo vision forward-view ADAS; Imagination Technologies' demos of various ADAS and other computer vision applications, an automotive development platform, and the company's image signal processor; and Texas Instruments' demos of automotive infotainment and the EvoCar technology platform. And multiple Alliance member companies have recently published press releases on new embedded vision technologies and products.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your suggestions on what the Alliance can do to better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

"Vision-Based Gesture User Interfaces," a Presentation from Qualcomm
Francis MacDougall, Senior Director of Technology at Qualcomm, presents the "Vision-Based Gesture User Interfaces" tutorial at the May 2014 Embedded Vision Summit. The means by which we interact with the machines around us is undergoing a fundamental transformation. While we may still sometimes need to push buttons, touch displays and trackpads, and raise our voices, we’ll increasingly be able to interact with and control our devices simply by signaling with our fingers, gesturing with our hands, and moving our bodies. This presentation explains how gestures fit into the spectrum of advanced user interface options, compares and contrasts the various 2-D and 3-D technologies (vision and other) available to implement gesture interfaces, gives examples of the various gestures (and means of discerning them) currently in use by systems manufacturers, and forecasts how the gesture interface market may evolve in the future.

"Combining Vision, Machine Learning and Natural Language Processing to Answer Everyday Questions," a Presentation from QM Scientific
Faris Alqadah, CEO and Co-Founder of QM Scientific, delivers the presentation "Combining Vision, Machine Learning and Natural Language Processing to Answer Everyday Questions" at the May 2015 Embedded Vision Alliance Member Meeting. Faris explains how his company's GPU-accelerated Quazi platform combines proprietary natural language processing, computer vision and machine learning technologies to extract, connect and organize millions of products, prices and consumer preferences from any data source.

More Videos

FEATURED ARTICLES

The Caffe Deep Learning Framework: An Interview with the Core Developers
Spend any amount of time researching the topic of deep learning and you'll inevitably come across the term Caffe. This convolutional neural network (CNN) framework, originally named DeCAF, was initially developed by Yangqing Jia (now a research scientist at Google), during his Ph.D. program at the University of California, Berkeley. It is now maintained by U.C. Berkeley's Vision and Learning Center (BVLC), whose faculty members include Pieter Abbeel, Jitendra Malik, and the founder of Lytro, Ren Ng. Deep learning has rapidly become a leading method for object classification and other functions in computer vision, and Caffe is a popular platform for creating, training, evaluating and deploying deep neural networks. The Embedded Vision Alliance recently conducted an interview with Evan Shelhamer, Jeff Donahue and Jonathan Long, the core Caffe developers at U.C. Berkeley, to understand the history, current status and future plans for Caffe. More
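To give a flavor of how Caffe is used in practice: networks in Caffe are typically described declaratively in protobuf text format (prototxt) rather than in code. The following is a minimal, hypothetical sketch of such a definition (the layer names and dimensions are illustrative, not from the interview):

```protobuf
# Hypothetical minimal Caffe network definition (prototxt):
# one convolution, a ReLU nonlinearity, and a fully connected classifier.
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  # A single 3-channel 32x32 input image (N, C, H, W).
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 16 kernel_size: 3 stride: 1 }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"   # in-place activation, a common Caffe idiom
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "conv1"
  top: "fc1"
  inner_product_param { num_output: 10 }  # e.g., 10 output classes
}
```

Because the network architecture lives in a plain-text file rather than in application code, models can be created, trained and deployed without recompilation, which is part of what the interview credits for Caffe's popularity.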

Mobile World Congress Show Report: VR/AR, 360-degree Video, 5G, and Deep Learning Key Trends
For the first time in its history, the Mobile World Congress in Barcelona welcomed more than 100,000 visitors, most of them in suits. We visit quite a few shows, writes videantis Vice President of Marketing Marco Jacobs, and one of the things that stands out about this one is how well it is run. Whether you’re at the airport, the subway, or the registration desk, there’s show staff right there holding signs and letting you know where to go. Even though the show grew quite a bit again compared to last year, lines were shorter and seats were easier to find at the many food stands and coffee shops. More

More Articles

FEATURED NEWS

NVIDIA's GPU Technology Conference (GTC): Deep Learning and Other Vision Topics Aplenty

Analog Devices Enhances IoT Sensing Portfolio with SNAP Sensor Acquisition

BCON – Basler’s New Unique Interface for dart Camera Series

Movidius and DJI Bring Vision-Based Autonomy to DJI Phantom 4

DJI Launches New Era of Intelligent Flying Cameras

More News

UPCOMING INDUSTRY EVENTS

NVIDIA GPU Technology Conference (GTC): April 4-7, 2016, San Jose, California

Silicon Valley Robot Block Party: April 6, 2016, San Jose, California

Embedded Vision Summit: May 2-4, 2016, Santa Clara, California

NXP FTF Technology Forum: May 16-19, 2016, Austin, Texas

Augmented World Expo: June 1-2, 2016, Santa Clara, California

Low-Power Image Recognition Challenge (LPIRC): June 5, 2016, Austin, Texas

Sensors Expo: June 21-23, 2016, San Jose, California

IEEE Computer Vision and Pattern Recognition (CVPR) Conference: June 26-July 1, 2016, Las Vegas, Nevada

IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona
