
Embedded Vision Insights: March 1, 2016 Edition


In this edition of Embedded Vision Insights:




LETTER FROM THE EDITOR

Dear Colleague,

One final reminder: today is the deadline for entries for the Vision Tank, a deep learning- and vision-based product competition whose finalists will present at the Embedded Vision Summit. Don’t delay in submitting your application; details are available on the Vision Tank page. And while you’re at it, register for the Summit, an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, taking place in Santa Clara, California, May 2-4. Receive a 15% Early Bird discount by using promotional code 09EVI.

While you’re on the Alliance website, make sure to also check out the great new content there. Take a look, for example, at “Deep Learning from a Mobile Perspective,” a presentation delivered by Caffe creator Yangqing Jia at last week’s Convolutional Neural Networks for Vision tutorial, which will be repeated as a workshop at the May Summit. And a new technical article, “OpenVX Enables Portable, Efficient Vision Software,” comes from Alliance member companies BDTI, Cadence, Itseez and Vivante. It describes how this maturing API from the Khronos Group enables embedded vision software developers to efficiently harness the processing resources available in SoCs and systems.
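To give a flavor of the graph-based programming model the article discusses, here is a rough, minimal OpenVX sketch (not code from the article); it assumes an OpenVX 1.x implementation, and the image size and filter choices are arbitrary, with error checking omitted for brevity.

/* Minimal OpenVX sketch: declare a small processing graph once, then let the
 * vendor's runtime decide how to map it onto the SoC's CPU, GPU, DSP or
 * dedicated vision hardware. Image size and filters here are arbitrary. */
#include <VX/vx.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* Opaque image objects; the implementation chooses where they are stored. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image smooth = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image grad_x = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);
    vx_image grad_y = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);

    /* Nodes describe what to compute; scheduling is left to the runtime. */
    vxGaussian3x3Node(graph, input, smooth);
    vxSobel3x3Node(graph, smooth, grad_x, grad_y);

    /* Verify once (validation and optimization), then process per frame. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseImage(&input);
    vxReleaseImage(&smooth);
    vxReleaseImage(&grad_x);
    vxReleaseImage(&grad_y);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}

Because only the data-flow graph is specified, the same source can run unchanged on different OpenVX-conformant processors, which is the portability benefit the article describes.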

Take a look, too, at the recent video demonstrations of embedded vision technologies and products from Alliance member companies.

In addition, numerous product announcements have been published by Alliance member companies, many in conjunction with last week’s Embedded World and Mobile World Congress.

Thanks as always for your support of the Embedded Vision
Alliance, and for your interest in and contributions to embedded vision
technologies, products and applications. If you have an idea as to how
the Alliance can better serve your needs, please contact me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

“Fast 3D Object Recognition in Real-World Environments,” a
Presentation from VanGogh Imaging
Ken Lee, Founder of VanGogh Imaging,
presents the “Fast 3D Object Recognition in Real-World Environments”
tutorial at the May 2014 Embedded Vision Summit. Real-time 3D object
recognition can be computationally intensive and difficult to implement
when there are a lot of other objects (i.e. clutter) around the target.
There are several approaches to deal with the clutter problem, but most
are computationally expensive. Lee describes an algorithm that uses a robust descriptor, fast data structures, and efficient sampling, with a parallel implementation on an FPGA platform. VanGogh Imaging’s method can recognize multiple model instances in the scene and provide their position as well as orientation. The company’s algorithm scales well with the number of models and runs in linear time.


Embedded Vision: Enabling Smarter Mobile Apps and Devices
For decades, computer vision technology
was found mainly in university laboratories and a few niche
applications. Today, virtually every tablet and smartphone is capable
of sophisticated vision functions such as hand gesture recognition,
face recognition, gaze tracking, and object recognition. These
capabilities are being used to enable new types of applications, user
interfaces, and use cases for mobile devices. This presentation at the 2014 GTC (NVIDIA’s GPU Technology Conference) by Jeff Bier, Founder and President of the Embedded Vision Alliance, illuminates the key drivers behind the rapid proliferation of vision capabilities in mobile devices and highlights some of the most innovative processor architectures, sensors, tools and APIs being used for mobile vision.


More Videos

FEATURED ARTICLES

Computer Vision Metrics: Survey, Taxonomy, and Analysis
The Embedded Vision Alliance is pleased
to provide you with a free electronic copy of this in-depth technical
resource, with book chapters available on the Alliance website in both
HTML and PDF formats. Computer
Vision Metrics
provides an extensive survey and analysis of over
100 current and historical feature description and machine vision
methods, with a detailed taxonomy for local, regional and global
features. The book provides the necessary background to develop
intuition about why interest point detectors and feature descriptors
actually work, as well as how they are designed, along with
observations about tuning the methods for achieving robustness and
invariance targets for specific applications. Also see the Alliance’s interview with author Scott Krig, as well as Krig’s presentation at an Alliance Member Meeting. More
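As a concrete, purely illustrative example of the kind of interest point measure the book analyzes, the sketch below computes the classic Harris corner response in plain C. The function name and parameters are ours, not the book’s, and a production detector would add Gaussian weighting of the gradient products and non-maximum suppression.

/* Illustrative sketch: the Harris corner response. For each interior pixel it
 * accumulates the gradient structure matrix M over a small window and scores
 * how corner-like the patch is: R = det(M) - k * trace(M)^2.
 * Border pixels are left unwritten, so callers should zero-initialize `response`. */

/* img: grayscale image, row-major, stride = width; response: same-size output.
 * k is the usual empirical constant, typically 0.04-0.06. */
void harris_response(const float *img, float *response,
                     int width, int height, float k)
{
    for (int y = 2; y < height - 2; ++y) {
        for (int x = 2; x < width - 2; ++x) {
            float sxx = 0.0f, syy = 0.0f, sxy = 0.0f;

            /* Accumulate gradient products over a 3x3 window. */
            for (int wy = -1; wy <= 1; ++wy) {
                for (int wx = -1; wx <= 1; ++wx) {
                    int ix = x + wx, iy = y + wy;
                    /* Central-difference gradients. */
                    float gx = 0.5f * (img[iy * width + ix + 1] - img[iy * width + ix - 1]);
                    float gy = 0.5f * (img[(iy + 1) * width + ix] - img[(iy - 1) * width + ix]);
                    sxx += gx * gx;
                    syy += gy * gy;
                    sxy += gx * gy;
                }
            }

            /* Response is large only when both eigenvalues of M are large,
             * i.e. intensity varies strongly in two directions (a corner). */
            float det   = sxx * syy - sxy * sxy;
            float trace = sxx + syy;
            response[y * width + x] = det - k * trace * trace;
        }
    }
}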


Unlocking the Value in Video
Today, billions of hours of video are
collected each year, notes Alliance founder Jeff Bier, but most of it
is never used, because we don’t have a practical way to extract
actionable information from it. A new generation of computer vision
solutions, powered by deep neural networks, will soon change this,
unleashing the tremendous value that’s currently locked away in our
video files. More


More Articles

FEATURED COMMUNITY DISCUSSIONS

Self-Driving Car’s Next Endeavor (Job Posting)

More Community Discussions

FEATURED NEWS

Embedded Vision Summit 2016 Announces Keynote Speakers Headlining Event on Bringing Visual Intelligence to Products

ON Semiconductor Unveils Next Generation 1.2 Mpixel CMOS Image Sensor with Advanced Global Shutter Technology

Save Development Time and Cost with TI’s New Scalable Auto Infotainment Solutions, Extending the “Jacinto 6” Family for the Next Generation of Entry-level and Display Audio Products

PowerVR GPUs From Imagination Pass
OpenVX Conformance With Khronos

More News

 
