
Embedded Vision Insights: December 17, 2019 Edition



LETTER FROM THE EDITOR

Dear Colleague,

The Embedded Vision Alliance is now accepting applications for the fifth annual Vision Tank start-up competition. Are you an early-stage start-up company developing a new product or service incorporating or enabling computer vision or visual AI? Do you want to raise awareness of your company and products with vision industry experts, investors and developers? The Vision Tank start-up competition offers early-stage companies the opportunity to present their new products or product ideas to more than 1,400 influencers and product creators at the 2020 Embedded Vision Summit. For more information, and to enter, please see the program page.

The Alliance is also now accepting applications for the 2020 Vision Product of the Year Awards competition. The Vision Product of the Year Awards are open to Member companies of the Alliance and celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes your leadership in computer vision as evaluated by independent industry experts; winners are announced at the 2020 Embedded Vision Summit. For more information, and to enter, please see the program page.

Registration for the 2020 Embedded Vision Summit, the premier event for innovators developing products with visual intelligence at the edge and in the cloud, is now open. Be sure to register today with promo code HOLIDAY20 to receive your Super Early Bird Discount plus an additional $50 off, our holiday gift to you! Also note that the Alliance is still accepting Summit presentation proposals on a wide range of topics related to practical computer vision and visual AI applications; for the 2020 Summit we are further expanding the program to address other types of sensor-based AI as well, such as audio, speech and radar. For more information, and to submit a proposal, please see here.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

TRAINING DATA FOR DEEP LEARNING

Training Data for Your CNN: What You Need and How to Get It
A fundamental building block of AI development is the creation of a proper training set that allows effective training of neural networks. Developing such a training set constitutes a major challenge, requiring multi-disciplinary knowledge spanning data science, computer vision, machine learning and project management. This talk from Carlo Dal Mutto, CTO of Aquifi, outlines common workflows for developing training sets for AI applications, touching on how to start, how to leverage existing tools and labeling companies, and how to assess whether the developed database is sufficiently comprehensive to effectively sustain AI algorithm development for computer vision applications.
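One concrete step in assessing whether a training set is sufficiently comprehensive is checking per-class coverage. The following Python sketch illustrates the idea; the class names and the minimum-count threshold are hypothetical, not from the talk:

```python
from collections import Counter

def coverage_report(labels, min_per_class=100):
    """Count examples per class and flag under-represented classes.

    labels: iterable of class-name strings from an annotated dataset.
    min_per_class: hypothetical minimum count for effective training.
    """
    counts = Counter(labels)
    flagged = {c: n for c, n in counts.items() if n < min_per_class}
    return counts, flagged

# Hypothetical annotation labels for a small vision dataset
labels = ["car"] * 150 + ["pedestrian"] * 40 + ["cyclist"] * 120
counts, under = coverage_report(labels)
print(under)  # {'pedestrian': 40} -> this class needs more labeled data
```

In practice this kind of audit would also consider attributes beyond class labels (lighting, pose, occlusion), but the principle is the same: quantify coverage before committing to training.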

Can Simulation Solve the Training Data Problem?
While there has been rapid progress in the adoption of neural networks and the evolution of neural network structures, the problem of training data remains. Even companies with access to the largest data sets still need additional data, and in particular corner case data. At the same time, many companies simply don’t have access to “big data” and need an alternative solution. The advent of powerful GPUs, able to give near photo realistic results in simulations, has led to the evolution of simulators for the creation of training and test data for AI systems. This talk from Peter McGuinness, Vice President of AI and Services at Mindtech, discusses the benefits of such an approach, as well as looking at the issues and limitations introduced by using artificial data. While this methodology has been driven by the automotive industry, its relevance to other industries such as surveillance and retail is also discussed.
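A key advantage of simulation noted above is the ability to generate corner-case data on demand. The sketch below illustrates that idea in miniature with pure Python; the scene parameters and distributions are hypothetical stand-ins for what a real renderer-driven simulator would consume:

```python
import random

def synth_scene(rng, daylight_prob=0.7):
    """Generate one hypothetical synthetic-scene descriptor.

    In a real simulator these parameters would drive a photorealistic
    renderer; here they just show how sampling can be biased toward
    rare conditions that field-collected data under-represents.
    """
    return {
        "lighting": "day" if rng.random() < daylight_prob else "night",
        "weather": rng.choice(["clear", "rain", "fog"]),
        "occlusion": round(rng.uniform(0.0, 0.9), 2),
    }

rng = random.Random(42)
# Deliberately lower daylight_prob to oversample night-time corner cases
corner_cases = [synth_scene(rng, daylight_prob=0.2) for _ in range(1000)]
night_share = sum(s["lighting"] == "night" for s in corner_cases) / len(corner_cases)
print(f"night scenes: {night_share:.0%}")  # roughly 80% by construction
```

The trade-off the talk examines — the domain gap between such artificial data and real imagery — is not visible at this level; it appears once the generated scenes are rendered and used for training.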

EDGE AI AND VISION PROCESSING

Building AI Cameras with Intel Movidius VPUs
Today, the availability of silicon platforms with sufficient performance and compute efficiency to run AI inference algorithms has given rise to many interesting edge products: from smart home security cameras to personal assistants to cameras for industrial automation and visual retail cameras for cashier-less stores. Intel supports a vibrant ecosystem of device makers, ODMs, integrators and end users, and the company is ushering in a new era of AI cameras to truly assist people in what they do at home and at work every day. In this presentation, Gary Brown, Director of AI Marketing at Intel, showcases the Intel Movidius Vision Processing Unit (VPU) and its capabilities. He shares details of new ways to build AI cameras with VPUs, and provides an overview of the software development environment, including the Intel distribution of the OpenVINO toolkit that enables AI application deployment. With the OpenVINO toolkit, users can prototype and deploy their own deep neural network algorithms on the Movidius VPU.

Highly Efficient, Scalable Vision and AI Processor IP for the Edge
This presentation from Pulin Desai, Vision Product Marketing Director at Cadence, describes the architecture of the latest Tensilica-based vision and AI processor family, and illustrates how easily vision algorithms (e.g., SLAM, 3D capture) and AI inference can be implemented on these processors. See how this low-power architecture simplifies development of a scalable vision and AI solution from low to high end for mobile, AR/VR, surveillance and automotive markets.

UPCOMING INDUSTRY EVENTS

Hailo Webinar – A Computer Architecture Renaissance: Energy-efficient Deep Learning Processors for Machine Vision: December 17, 2019, 9:00 am PT

Consumer Electronics Show: January 7-10, 2020, Las Vegas, Nevada

Embedded Vision Summit: May 18-21, 2020, Santa Clara, California

More Events

FEATURED NEWS

Bitfury AI Joins the Embedded Vision Alliance

Intel's RealSense Lidar Camera Technology Redefines Computer Vision

An AI Language for Machines from Gyrfalcon Technology Targets a World Where Big Data is Video

OmniVision Unveils 8.3 Megapixel Automotive Image Sensors with LED Flicker Mitigation and 140dB High Dynamic Range

Qualcomm Reveals Its Product Roadmap for Making 5G Mainstream in 2020

More News

 


Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411