
Edge AI and Vision Insights: November 4, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Every year, the Edge AI and Vision Alliance surveys developers to understand what chips and tools they use to build visual AI systems. This is our seventh year conducting the survey, and we would like to get your opinions.

We share the results from the Computer Vision Developer Survey at Edge AI and Vision Alliance events and in white papers and presentations made available throughout the year on the Alliance website. Results from last year’s survey are available in this white paper. I’d really appreciate it if you’d take a few minutes to complete this year’s survey. (It typically takes less than 15 minutes to complete.) We are keeping the survey open through the end of this week! Don’t miss your chance to have your voice heard.

As a thank-you, we will send you a coupon for $50 off the price of a two-day Embedded Vision Summit ticket (to be sent when registration opens). In addition, we will enter your completed survey into a drawing for one of three Amazon gift cards worth $100! Thank you in advance for your perspective. Fill out the survey.


On Tuesday, December 15 at 9 am PT, Yole Développement will deliver the free webinar “Sensor Fusion for Autonomous Vehicles” in partnership with the Edge AI and Vision Alliance. Advanced driver assistance systems (ADAS) have proven to reduce road fatalities by alerting drivers to potential problems and avoiding collisions. The recent availability of more powerful computing chips and sensors has enabled even more advanced functions, expanding beyond safety assistance to increasingly automated driving capabilities. Implementing these autonomous features requires more sensors, more computing power and a more complex electrical/electronic (E/E) system architecture. In this presentation, Yole Développement will describe the growing need for sensors in autonomous systems, for both automotive and industrial applications, and the “fusion” coordination among them. The presentation will cover topics such as cameras, radar, LiDAR, E/E architectures and domain controllers. For more information and to register, please see the event page.
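
To make the “fusion” idea concrete, here is a minimal sketch in Python of combining a camera-derived and a radar-derived range estimate by inverse-variance weighting, so the more precise sensor dominates the result. The numbers and function names are illustrative assumptions, not material from the webinar; production fusion stacks typically generalize this idea with Kalman filtering and object-level tracking across many sensors.

    import numpy as np

    def fuse_measurements(estimates, variances):
        """Inverse-variance weighted fusion of independent estimates of one quantity
        (e.g., distance in meters to a lead vehicle) from different sensors."""
        estimates = np.asarray(estimates, dtype=float)
        weights = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(weights * estimates) / np.sum(weights)
        fused_variance = 1.0 / np.sum(weights)
        return fused, fused_variance

    # Hypothetical readings: camera says 24.0 m (noisy), radar says 23.2 m (precise).
    distance, variance = fuse_measurements([24.0, 23.2], [1.0, 0.1])
    print(f"fused distance: {distance:.2f} m, variance: {variance:.3f}")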

And on Thursday, December 17 at 9 am PT, BrainChip will deliver the free webinar “Power-efficient Edge AI Applications through Neuromorphic Processing,” also in partnership with the Edge AI and Vision Alliance. Many edge AI processors take advantage of the spatial sparsity in neural network models to eliminate unnecessary computations and save power. Neuromorphic processors achieve further savings by performing event-based computation, which exploits the temporal sparsity inherent in data generated by audio, vision, olfactory, lidar, and other edge sensors. This presentation will provide an update on the AKD1000, BrainChip’s first neural network SoC, and describe the advantages of processing information in the event domain. For more information and to register, please see the event page.
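
As a conceptual illustration of the temporal sparsity idea (a software sketch, not BrainChip’s AKD1000 implementation), the Python snippet below skips per-pixel work wherever a new sensor frame barely differs from the previous one, so the amount of computation scales with how much the scene changes rather than with the frame size.

    import numpy as np

    def event_driven_update(prev_frame, new_frame, accum, weight, threshold=2):
        """Only pixels whose intensity changed by more than `threshold` generate
        "events" and contribute new computation; unchanged pixels are skipped.
        `accum` is a running activation map, `weight` a per-pixel gain."""
        delta = new_frame.astype(np.int32) - prev_frame.astype(np.int32)
        events = np.abs(delta) > threshold
        accum[events] += weight[events] * delta[events]
        return accum, int(events.sum())

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 200, (120, 160), dtype=np.uint8)
    new = prev.copy()
    new[40:60, 50:80] += 20                     # only a small region changes
    accum = np.zeros((120, 160), dtype=np.float32)
    weight = np.ones((120, 160), dtype=np.float32)
    accum, n_events = event_driven_update(prev, new, accum, weight)
    print(f"updated {n_events} of {prev.size} pixels")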

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEEP LEARNING INFERENCE IMPROVEMENTS

Real-Time Vehicle Detection
From vehicle counting and smart parking systems to ADAS (and, eventually, full autonomy), notes Xailient in this technical tutorial, demand for detecting cars, buses, and motorbikes is growing, and vehicle detection will soon be as common an application as face detection. These algorithms need not only to be accurate but also to run in real time to be usable in real-world applications. AI engineer Sabina Pokhrel describes how to implement a detection and identification algorithm using both MobileNet SSD and Xailient-based pre-trained models, and compares their relative performance and resource requirements.
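
As a rough sketch of the MobileNet SSD half of such a pipeline (not the tutorial’s exact code; the model file names and PASCAL VOC class indices are assumptions), OpenCV’s DNN module can run a pretrained Caffe MobileNet-SSD and keep only the vehicle classes:

    import cv2

    # Assumed: the widely used Caffe MobileNet-SSD trained on PASCAL VOC, where
    # class 6 = bus, 7 = car, 14 = motorbike.
    VEHICLE_CLASSES = {6: "bus", 7: "car", 14: "motorbike"}

    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                   "MobileNetSSD_deploy.caffemodel")

    def detect_vehicles(frame, conf_threshold=0.5):
        """Return (label, confidence, box) for each vehicle found in a BGR frame."""
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     scalefactor=0.007843, size=(300, 300), mean=127.5)
        net.setInput(blob)
        detections = net.forward()              # shape: (1, 1, N, 7)
        results = []
        for i in range(detections.shape[2]):
            class_id = int(detections[0, 0, i, 1])
            confidence = float(detections[0, 0, i, 2])
            if class_id in VEHICLE_CLASSES and confidence >= conf_threshold:
                x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
                results.append((VEHICLE_CLASSES[class_id], confidence, (x1, y1, x2, y2)))
        return results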

Deep Neural Network Model Optimization
Today’s deep learning solutions require increasingly large and precise deep neural network (DNN) models, with considerably more layers and parameters than before. Data collected by Deeplite and reported in this blog post, the first in a planned series from the company, shows how improvements in model accuracy have historically correlated with increases in memory footprint, computational cost, parameter count, and inference time. This situation is becoming increasingly problematic, especially for resource-constrained embedded implementations and systems that require real-time response. Compressing and otherwise optimizing these DNN architectures is necessary to prevent memory, time, hardware, and energy constraints from hindering adoption.
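
For a sense of what such optimization can look like in practice, here is a minimal sketch of one common technique, post-training (dynamic-range) quantization with TensorFlow Lite. It illustrates the general idea rather than Deeplite’s own tooling, and the small Keras model is just a placeholder:

    import tensorflow as tf

    # Placeholder model standing in for a real classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])

    # Post-training quantization: weights are stored in 8-bit form, shrinking the
    # model and typically speeding up inference on embedded targets.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("model_quantized.tflite", "wb") as f:
        f.write(tflite_model)
    print(f"quantized model size: {len(tflite_model) / 1024:.1f} KiB")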

AI FOR IMAGE ENHANCEMENT

A Zero-Effort Way to Improve Image Quality
The virtual reality (VR) industry is in the midst of a new hardware cycle, writes NVIDIA in this technical article, with higher-resolution headsets and better optics the key focus points for device manufacturers. On the software front, there has been a wave of content-rich applications and an emphasis on flawless VR experiences. Variable Rate Supersampling (VRSS) expands on the Turing GPU architecture’s Variable Rate Shading (VRS) feature to deliver image quality improvements by performing selective supersampling, engaged only when idle GPU cycles are available. VRSS is handled entirely within the NVIDIA display driver, with no application developer effort required. And the underlying techniques described are applicable well beyond VR.

Delivering on the Promise of 4K Content with AI-based Scaling
The momentum behind ultra-high-definition 4K displays is one of the more interesting growth areas in all of electronics, writes Synaptics in this blog post. Unfortunately, content providers have been slow to make true 4K content available. Television broadcasters and IPTV service providers simply don’t have the bandwidth to provide numerous high-quality Ultra HD streams at once, so they have instead been encoding Full HD (1080p) content. Traditional video up-scalers cannot recover the finer details and textures that were in the original source but were lost when it was downscaled to Full HD. However, AI-based super resolution, an emerging technique that uses deep learning inference to enhance the perceived resolution of an image beyond that of the input data, can give viewers a compelling Ultra HD experience on their 4K TVs from a Full HD-resolution source.
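
As a small software illustration of the general super-resolution technique (using OpenCV’s contrib dnn_superres module with a separately downloaded pretrained FSRCNN model; this shows the idea in software, not Synaptics’ dedicated SoC implementation):

    import cv2

    # Requires opencv-contrib-python and a pretrained model file such as
    # FSRCNN_x2.pb, downloaded separately. Illustrates the general technique in
    # software; Synaptics' implementation runs in dedicated SoC hardware.
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("FSRCNN_x2.pb")       # assumed local path to the model file
    sr.setModel("fsrcnn", 2)           # algorithm name and scale must match the file

    frame_1080p = cv2.imread("full_hd_frame.png")    # hypothetical 1920x1080 input
    frame_4k = sr.upsample(frame_1080p)              # learned 2x upscale -> 3840x2160
    cv2.imwrite("upscaled_frame.png", frame_4k)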

UPCOMING INDUSTRY EVENTS

Yole Développement Webinar – Sensor Fusion for Autonomous Vehicles: December 15, 2020, 9:00 am PT

BrainChip Webinar – Power-efficient Edge AI Applications through Neuromorphic Processing: December 17, 2020, 9:00 am PT

More Events

FEATURED NEWS

Arm’s Latest Neural Network Processor Core Expands the Company’s AI IP Portfolio

STMicroelectronics Introduces an All-in-one, Multi-zone, Direct Time-of-flight Module

OmniVision Technologies’ New Medical RGB-IR Image Sensor Reduces Endoscope Size, Cost, Power and Heat

Qualcomm Announces First Shipments of the Cloud AI 100 Accelerator and Edge Development Kit

Vision Components’ Snapdragon 410-based VC DragonCam Development Board is Now Shipping

More News

VISION PRODUCT OF THE YEAR WINNER SHOWCASE

iniVation Dynamic Vision Platform (Best Camera or Sensor)
iniVation’s Dynamic Vision Platform is the 2020 Vision Product of the Year Award Winner in the Cameras and Sensors category. The Dynamic Vision Platform is the world’s most advanced complete neuromorphic vision system, enabling solution developers to create systems with unprecedented performance. It combines patented sensor technology with a high-performance software toolkit. The sensor emulates key aspects of the human retina, transmitting only pixel-level changes at microsecond time resolution and low system-level power.

Please see here for more information on iniVation and its Dynamic Vision Platform. The Vision Product of the Year Awards are open to Member companies of the Edge AI and Vision Alliance and celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.

EMBEDDED VISION SUMMIT MEDIA PARTNER SHOWCASE

EE Times
EE Times, part of the AspenCore collection, is a respected news website that cuts through the industry noise by delivering original reporting, trusted analysis, and a diversity of industry voices to design engineers and management executives. With an expanding base of expert contributors, guided by award-winning editors, community leaders and journalists, EE Times has a singular mission: to serve as your guide to what’s really important in the global electronics industry. Register here to get your free eNewsletters.

 
