
Embedded Vision Insights: May 24, 2016 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR
FEATURED VIDEOS
FEATURED ARTICLES
FEATURED COMMUNITY DISCUSSIONS
FEATURED NEWS
UPCOMING INDUSTRY EVENTS
LETTER FROM THE EDITOR

Dear Colleague,

I’m happy to announce that the first two (of what will end up being
more than three dozen) presentation videos from the Embedded Vision
Summit are now available on the Alliance website. In his keynote talk,
“Large-Scale Deep Learning for Building Intelligent Computer Systems,”
Google Senior Fellow Jeff Dean highlights some of the ways in which his
company trains large models quickly on large datasets, and discusses
different approaches for deploying machine learning models in
environments ranging from large datacenters to mobile devices. And in
“Computational Photography: Understanding and Expanding the
Capabilities of Standard Cameras,” NVIDIA Senior Research Scientist
Orazio Gallo explains the algorithmic processing that cameras perform
to produce high-quality images, and how this processing interplays with
computer vision algorithms.

Also now available are the slides from Embedded Vision Summit business
and technical presentations in PDF format. And make sure you regularly
visit the Alliance’s YouTube channel, where new demo videos from the
Summit are being steadily published (they will eventually also appear
on the Alliance website). Speaking of the Alliance website, while
you’re on it, be sure to check out all the other great new content
there, including several additional published chapters in ARM’s “Guide
to OpenCL Optimizing Convolution,” a technical reference manual that
gives implementation examples of algorithm acceleration using a Mali
Midgard GPU. Also newly published are several columns from the Alliance
in partnership with Vision Systems Design Magazine, and nearly two
dozen press releases from the Alliance and its member companies.

Thanks as always for your support of the Embedded Vision
Alliance, and for your interest in and contributions to embedded vision
technologies, products and applications. If you have an idea as to how
the Alliance can better serve your needs, please contact me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

“Evolving Algorithmic Requirements for Recognition and Classification
in Augmented Reality,” a Presentation from NXP
Simon Morris, former CEO of CogniVue (now
part of NXP), presents the “Evolving Algorithmic Requirements for
Recognition and Classification in Augmented Reality” tutorial at the
May 2014 Embedded Vision Summit. Augmented reality (AR) applications
are based on accurately computing a camera’s 6 degrees of freedom
(6DOF) position in 3-dimensional space, also known as its “pose”. In
vision-based approaches to AR, the most common and basic approach to
determine a camera’s pose is with known fiducial markers (typically
square, black and white patterns that encode information about the
required graphic overlay). The position of the known marker is used
along with camera calibration to accurately overlay the 3D graphics. In
marker-less AR, the problem of finding the camera pose requires
significantly more complex and sophisticated algorithms, e.g. disparity
mapping, feature detection, optical flow, and object classification.
This presentation compares and contrasts the typical algorithmic
processing flow and processor loading for both marker-based and
marker-less AR. Processor loading and power requirements are discussed
in terms of the constraints associated with mobile platforms.
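
As a concrete illustration of the marker-based flow described above,
the sketch below detects a square fiducial marker and recovers the 6DOF
pose from its corners. This is a minimal sketch, not code from the
presentation; it assumes OpenCV with the aruco contrib module (pre-4.7
API), and the camera intrinsics, marker size, and input image name are
placeholder values.

# Minimal marker-based pose estimation sketch (illustrative only).
# Assumes OpenCV's aruco contrib module (pre-4.7 API); intrinsics,
# marker size, and the input frame below are placeholder assumptions.
import cv2
import numpy as np

camera_matrix = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx
                          [  0.0, 800.0, 240.0],   #  0, fy, cy
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)            # assume negligible lens distortion

marker_side = 0.05                   # marker edge length in meters (assumed)
object_points = np.array([           # marker corners in the marker's own frame
    [-marker_side / 2,  marker_side / 2, 0],
    [ marker_side / 2,  marker_side / 2, 0],
    [ marker_side / 2, -marker_side / 2, 0],
    [-marker_side / 2, -marker_side / 2, 0]], dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.png")      # hypothetical camera frame
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

if ids is not None:
    # solvePnP recovers the 6DOF pose (rotation and translation vectors)
    # of the marker relative to the camera; together with the camera
    # calibration, this is what positions the 3D graphic overlay.
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    if ok:
        print("rotation vector:", rvec.ravel())
        print("translation vector:", tvec.ravel())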


“Drones at Work,” a Presentation from Kespry
Marcus Hammond, Robotics Lead at Kespry,
delivers the presentation, “Drones at Work,” at the March 2016 Embedded
Vision Alliance Member Meeting. Hammond explains how his company is
using autonomous drones and computer vision to provide actionable data
for the mining and construction industries.


More Videos

FEATURED ARTICLES

Deep Learning Use Cases for Computer Vision
The enterprise applications for deep
learning have only scratched the surface of their potential
applicability and use cases.  Because it is data agnostic, deep
learning is poised to be used in almost every enterprise vertical
market, including agriculture, media, manufacturing, medical,
healthcare, and retail, to name a few.  Deep learning is
particularly applicable to computer vision systems because it promises
to be less costly, more accurate, and more reliable than traditional
programming approaches. Some of the most successful companies in the
world have been early adopters of this technology.  Although the
enterprise market for deep learning is still small in relation to the
total enterprise software sector, the variety, breadth, and scope of
the applications for which deep learning is being considered suggest
that a tremendous growth opportunity exists. This Tractica white paper,
published in partnership with the Embedded Vision Alliance, covers the
market for computer vision and deep learning technologies, providing
real world use cases of how they are being used in various industry
verticals.  The verticals covered include agriculture, media,
manufacturing, medical, healthcare, and retail. More


Digital Video Stabilization: Smooth Footage Without Expensive Mechanics
From drones to handheld devices, the
rising demand for video cameras has made them ubiquitous, constantly
driving down size and cost while pushing up resolution and overall
quality. One of the main challenges in this field is stabilizing the
image to generate clear, smooth footage. In this article, CEVA Director
of Product Marketing Liran Bar discusses the challenges that
stabilization poses, and the pros and cons of existing solutions. He
also gives a brief technical overview of CEVA’s software-based
stabilization solution. More
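
To make the trade-offs concrete, here is a minimal sketch of the
generic software approach to stabilization (not CEVA's implementation):
track features between consecutive frames, estimate per-frame rigid
motion, smooth the accumulated trajectory, and derive a correction for
each frame. OpenCV is assumed, and the input file name and smoothing
window are illustrative choices.

# Generic digital video stabilization sketch (illustrative only, not
# CEVA's implementation). Assumes OpenCV; the input clip and smoothing
# window are placeholder assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("shaky.mp4")          # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

motions = []                                 # per-frame (dx, dy, d_theta)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track corner features from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]

    # Estimate rigid inter-frame motion (translation plus rotation).
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_curr)
    prev_gray = gray
    if m is None:
        motions.append((0.0, 0.0, 0.0))      # fall back to "no motion"
        continue
    motions.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))

# Smooth the accumulated camera trajectory with a moving average; the
# difference between the smoothed and raw trajectories is the per-frame
# correction that would then be applied with cv2.warpAffine.
trajectory = np.cumsum(motions, axis=0)
window = np.ones(31) / 31                    # ~15-frame radius (assumed)
smoothed = np.column_stack([np.convolve(trajectory[:, i], window, mode="same")
                            for i in range(3)])
corrections = np.asarray(motions) + (smoothed - trajectory)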


More Articles

FEATURED COMMUNITY DISCUSSIONS

Qualcomm R&D – Embedded Vision Algorithm Development

More Community Discussions

FEATURED NEWS

ARM Acquires Apical, a Global Leader in Imaging and Embedded Computer Vision

FotoNation Partners With Kyocera to Develop Intelligent Automotive Camera Technology

NXP Demonstrates Complete Autonomous Vehicle Platform Using NXP Silicon at Each ADAS Node

Analog Devices and Cambridge Consultants Collaborate on Cost-Effective Monitoring System to Reduce Parking Frustrations

More News

UPCOMING INDUSTRY EVENTS

Churchill Club 18th Annual Top 10 Tech Trends: May 25, 2016, Santa Clara, California

What new tech trends
will emerge in the next several years? Find out at one of Churchill
Club’s most anticipated events of the year. Join us as we welcome some
of the leading, and most opinionated, technology and business
luminaries as they evaluate predictions for the years ahead.

Augmented World Expo: June 1-2, 2016, Santa Clara, California

Now in its 7th year,
AWE USA is the largest event in North America exploring tech giving
people superpowers: augmented reality, virtual reality and wearable
tech. Join over 4,000 attendees, 200+ speakers and 200+ exhibitors in
the heart of Silicon Valley at the Santa Clara Convention Center.

Low-Power Image Recognition Challenge (LPIRC): June 5, 2016, Austin, Texas

Sensors Expo: June 21-23, 2016, San Jose, California

Sensors Expo is the
only event focused on sensors and sensor-integrated systems. Experience
300+ sensors exhibitors, invaluable networking, and 55+ conference
sessions, including one from Embedded Vision Alliance founder Jeff
Bier. Use code EMBEDDED50 for
conference pass discounts or a free Expo Hall pass.


IEEE Computer Vision and Pattern Recognition (CVPR) Conference: June 26-July 1, 2016, Las Vegas, Nevada

AutoSens 2016: September 20-22, 2016, Brussels, Belgium

AutoSens connects
technologists in all disciplines of vehicle perception to solve shared
challenges and advance ADAS vehicle technologies. Bringing together
engineers from disciplines including automotive imaging, LiDAR, radar,
image processing, computer vision, in-car networking, testing and
validation, certification and standards, AutoSens is a collaborative
environment geared towards supporting engineering activities.
Use code ASCD15EV for a 15% registration discount.

IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California

 

