Dear Colleague,

TensorFlow Training Classes

TensorFlow has become a popular framework for creating machine learning-based computer vision applications, especially for the development of deep neural networks. If you’re planning to develop computer vision applications using deep learning and want to understand how to use TensorFlow to do it, then don’t miss next Thursday's full-day, hands-on training class organized by the Embedded Vision Alliance: Deep Learning for Computer Vision with TensorFlow. It takes place in Santa Clara, California on July 13. Learn more and register at https://tensorflow.embedded-vision.com.

If you're interested in creating efficient computer vision software for embedded applications, check out next Wednesday's free webinar, "OpenCV on Zynq: Accelerating 4k60 Dense Optical Flow and Stereo Vision," delivered by Xilinx and organized by the Embedded Vision Alliance. It takes place on July 12 at 10 am PT. Xilinx will present a new approach that enables designers to unleash the power of FPGAs using hardware-tuned OpenCV libraries, a familiar C/C++ development environment, and readily available hardware development platforms. To register, see the event page.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance


Is Vision the New Wireless? (Qualcomm)
Over the past 20 years, digital wireless communications has become an essential technology for many industries, and a primary driver for the electronics industry. Today, computer vision is showing signs of following a similar trajectory. Once used only in low-volume applications such as manufacturing inspection, vision is now becoming an essential technology for a wide range of mass-market devices, from cars to drones to mobile phones. In this presentation, Raj Talluri, Senior Vice President of Product Management at Qualcomm Technologies, examines the motivations for incorporating vision into diverse products, presents case studies that illuminate the current state of vision technology in high-volume products, and explores critical challenges to ubiquitous deployment of visual intelligence.

Lessons Learned from Bringing Mobile and Embedded Vision Products to Market (ARM)
Great news: technology is finally at a point where we can build sophisticated computer vision applications that run on mass-market devices, like mobile phones, cars, and vacuum cleaners. Not-so-great news: developing vision applications is hard, perhaps uniquely so. Technical and business challenges abound. Developers can quickly come up against thermal and power limitations. Software may perform well on one platform, but poorly on another, similar platform. These are some of the problems that can sink your product. In this talk, Tim Hartley, Senior Product Manager in the Imaging and Vision Group at ARM, presents case studies in which various computer vision challenges put product development at risk, and explores how they are being addressed by leading product developers. What lessons are there for businesses working in this area? What key challenges remain to be overcome to enable ubiquitous visual intelligence?


Computational Photography: Understanding and Expanding the Capabilities of Standard Cameras (NVIDIA)
Today's digital cameras, even at the entry level, produce pictures with quality comparable to that of high-end cameras of a decade ago. Image processing and computational photography algorithms play a significant role in this improvement. In this talk, Orazio Gallo, Senior Research Scientist at NVIDIA, explains the algorithmic processing that cameras perform to produce high-quality images and how this processing interplays with computer vision algorithms. He then discusses algorithms that expand the capabilities of standard cameras by allowing more accurate measurements or new applications.

How Computer Vision Is Accelerating the Future of Virtual Reality (AMD)
Virtual reality (VR) is the new focus for a wide variety of applications, including entertainment, gaming, medical, science, and many others. The technology driving the VR user experience has advanced rapidly in the past few years, and it is now poised to proliferate into these applications with solid products that offer a range of cost, performance, and capabilities. The next question is: how does computer vision intersect with this emerging modality? Already we are seeing examples of the integration of computer vision and VR, such as simple eye tracking and gesture recognition. This talk from Allen Rush, Fellow at AMD, explores how we can expect more complex computer vision capabilities to become part of the VR landscape, and the business and technical challenges that must be overcome to realize these compelling capabilities.


AIA Webinar – The Future of Embedded Vision Systems: July 11, 2017, 11:00 am PT

Xilinx Webinar Series – OpenCV on Zynq: Accelerating 4k60 Dense Optical Flow and Stereo Vision: July 12, 2017, 10:00 am PT

TensorFlow Training Class: July 13, 2017, Santa Clara, California

TensorFlow Training Class: September 7, 2017, Hamburg, Germany

More Events


AImotive Releases aiWare: The First of Its Kind, AI-Optimized Hardware Accelerator For Autonomous Driving

The Largest Camera Series in the Market Continues to Grow: 20 New Basler ace Models with IMX Sensors from Sony

Synopsys Embedded Vision Processor IP Quadruples Neural Network Performance for Machine Learning Applications

ArcSoft and CEVA Partner to Raise the Performance Level of Smartphone Cameras

LucidCam Launches While Google Heads Towards VR180

More News





1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411