Edge AI and Vision Insights: May 13, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

2020 Vision Tank

The Vision Tank is the Edge AI and Vision Alliance’s annual start-up competition, showcasing the best new ventures using computer vision or visual AI in their products and services. It’s open to early-stage companies, and entrants are judged on four criteria: technology innovation, business plan, team and business opportunity. Get to know the 12 semi-finalists in this year’s competition, who are solving some of the world’s most complex problems, by watching their pitch videos. Our panel of judges will narrow this field down to five finalists, to be announced this Friday, May 15th. And on July 16th at 9 am Pacific Time, join us online for free as the finalists pitch their companies and products, competing to win the Judges’ Choice and (after you cast your vote online) Audience Choice awards!

We’ve got big news about the Embedded Vision Summit! Originally scheduled to take place in person later this month in California, the 2020 Embedded Vision Summit is moving to a fully online experience. The event will be made up of four sessions taking place Tuesdays and Thursdays from September 15 through September 24, from 9 am to 2 pm Pacific Time. The Summit remains the premier conference and exhibition for innovators adding computer vision and AI to products. Hear from and interact with over 100 expert speakers and industry leaders on the latest in practical computer vision and edge AI technology—including processors, tools, techniques, algorithms and applications—in both live and on-demand sessions. And see cool demos of the latest building-block technologies from dozens of exhibitors! Attending the Summit is the perfect way to bring your next vision- or AI-based product to life. Are you ready to gain valuable insights and make important connections? Be sure to register today with promo code SUPEREBNL20-V to receive your Super Early Bird Discount!

Also, the Edge AI and Vision Alliance has partnered with OpenCV.Org to offer you a 20% discount on any of its online courses. These include Computer Vision I (an introduction to OpenCV in both C++ and Python), Computer Vision II (applications in OpenCV, also in both C++ and Python), and Deep Learning with PyTorch; the latter includes 100 hours of free GPU time on Microsoft Azure. Visit https://opencv.org/courses/ to learn more, and use discount code ALLIANCE-20 to claim your savings!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

RESOURCE-EFFICIENT NEURAL NETWORKS

Separable Convolutions for Efficient Implementation of CNNs and Other Vision Algorithms (Phiar)
Separable convolutions are an important technique for implementing efficient convolutional neural networks (CNNs), made popular by MobileNet’s use of depthwise separable convolutions. But separable convolutions are not a new concept, and their utility is not limited to CNNs. Separable convolutions have been widely studied and employed in classical computer vision algorithms as well, in order to reduce computation demands. Chen-Ping Yu, Co-founder and CEO of Phiar, begins this presentation with an introduction to separable convolutions. He then explores examples of their application in classical computer vision algorithms and in efficient CNNs, comparing some recent neural network models. Yu also examines practical considerations of when and how to best utilize separable convolutions in order to maximize their benefits.
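For readers who want to see the idea concretely before watching the talk, here is a minimal PyTorch sketch of a depthwise separable convolution, the building block MobileNet popularized. The layer sizes are illustrative, not taken from the presentation:

```python
# Minimal sketch of a depthwise separable convolution in PyTorch.
# Layer sizes are illustrative, not from any specific model in the talk.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A standard 3x3 conv from 64 to 128 channels uses 64*128*3*3 = 73,728
# weights; the separable version uses 64*3*3 + 64*128 = 8,768 -- roughly
# an 8x reduction, which is the efficiency win described above.
standard = nn.Conv2d(64, 128, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(64, 128)
x = torch.randn(1, 64, 32, 32)
assert standard(x).shape == separable(x).shape
```

The weight count in the closing comment is the whole story: factoring the spatial filtering from the channel mixing trades a small accuracy cost for a large reduction in computation, which is why the technique shows up in both classical vision pipelines and efficient CNNs.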

Fast and Accurate RMNet: A New Neural Network for Embedded Vision (Intel)
Usually, the top places in deep learning challenges are won by huge neural networks that require massive amounts of data and computation, making them impractical for use in real-time edge applications like security and autonomous driving. In this talk, Ilya Krylov, Software Engineering Manager at Intel, describes a new neural network architecture, RMNet, designed to achieve a balance of accuracy and performance for embedded vision applications. RMNet merges the best practices of network architectures like MobileNets and ResNets. To demonstrate the effectiveness of this new network, Krylov presents an evaluation of RMNet on a person re-identification task. In terms of accuracy, the proposed approach takes third place on the Market-1501 challenge, while offering much faster inference than the higher-ranked networks. RMNet can be used for many tasks, such as face recognition, pedestrian detection, vehicle detection and bicycle detection.
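The summary above doesn't spell out RMNet layer by layer, but one way to picture "MobileNet meets ResNet" is a residual block whose inner convolutions are depthwise separable. The PyTorch sketch below is our own hypothetical illustration of that combination, not the published RMNet architecture:

```python
# Hypothetical sketch of combining MobileNet-style separable convolutions
# with a ResNet-style skip connection. An illustration of the general idea,
# NOT the published RMNet architecture.
import torch
import torch.nn as nn

class SeparableResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            # Depthwise 3x3 followed by pointwise 1x1, twice, keeping the
            # channel count so the identity shortcut lines up.
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # ResNet-style identity shortcut around the separable body.
        return self.relu(x + self.body(x))

block = SeparableResidualBlock(32)
out = block(torch.randn(1, 32, 56, 56))  # shape preserved: (1, 32, 56, 56)
```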

EDGE, CLOUD AND HYBRID IMPLEMENTATIONS

Creating Efficient, Flexible and Scalable Cloud Computer Vision Applications: An Introduction (GumGum)
Given the growing utility of computer vision applications, how can you deploy these services in high-traffic production environments? In this presentation, Nishita Sant, Computer Vision Manager, and Greg Chu, Senior Computer Vision Scientist, both of GumGum, discuss the company’s approach to the infrastructure for serving computer vision models in the cloud. They elaborate on a few aspects, beginning with modularity of computer vision models, including handling images and video equivalently, creating module pipelines, and designing for library agnosticism so one can leverage open source developments. They also discuss inter-process communication—specifically, the pros and cons of data serialization, and the importance of standardized data formats between training and serving data, which lends itself to automated feedback from serving data for retraining and automated metrics. Finally, they discuss GumGum’s approaches to scaling, including a producer/consumer model, scaling triggers and container orchestration. They illustrate these aspects through examples of image and video processing and module pipelines.
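As one concrete illustration of the producer/consumer pattern they describe, a queue of inference requests can be drained by a pool of workers, with queue depth serving as a natural scaling trigger. The Python sketch below is a simplified stand-in, not GumGum's production stack (which relies on container orchestration); the worker's "model inference" step is a placeholder:

```python
# Minimal producer/consumer sketch for serving vision models. Queue depth
# is the kind of signal that can drive autoscaling. Illustrative only.
import queue
import threading

task_queue = queue.Queue()  # holds image IDs (or None as a shutdown sentinel)

def producer(image_ids):
    # E.g., an API front end enqueueing inference requests.
    for image_id in image_ids:
        task_queue.put(image_id)

def consumer(worker_id):
    # E.g., a GPU worker pulling requests and running the model.
    while True:
        image_id = task_queue.get()
        if image_id is None:          # sentinel: shut down this worker
            task_queue.task_done()
            break
        result = f"detections for {image_id}"  # placeholder for model inference
        print(f"worker {worker_id}: {result}")
        task_queue.task_done()

workers = [threading.Thread(target=consumer, args=(i,)) for i in range(2)]
for w in workers:
    w.start()
producer([f"img_{n}.jpg" for n in range(5)])
for _ in workers:
    task_queue.put(None)              # one sentinel per worker
task_queue.join()
```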

Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product (Cocoon Health)
In this presentation, Pavan Kumar, Co-founder and CTO of Cocoon Health (formerly Cocoon Cam), explains how his company is evolving its mix of edge and cloud vision computing as it continues to bring new capabilities to baby monitors.

FEATURED NEWS

OmniVision Launches Automotive SoC for Entry-Level Rearview Cameras With Low-Light Performance, Low Power and Small Size

MediaTek Unveils 5G-Integrated Dimensity 1000+ Chip for Smartphones

AImotive’s ISO26262-certified Simulator Powers Continuous Integration and Delivery of Automated Driving

Imagination Showcases Safety-critical GPU Software Driver

Arm Offers Silicon Startups Zero-cost Entry Access to Its IP and Tools For Chip Designs

More News

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411