Edge AI and Vision Insights: August 17, 2022 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Next Thursday, August 25 at 9 am PT, Intel will deliver the free webinar “Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code” in partnership with the Edge AI and Vision Alliance. Are you using Google’s TensorFlow framework to develop your deep learning models? And are you doing inference processing on those models using Intel compute devices: CPUs, GPUs, VPUs and/or HDDL (High Density Deep Learning) processing solutions? If the answer to both questions is “yes,” then this hands-on tutorial is for you: it shows how to integrate TensorFlow with the Intel Distribution of OpenVINO toolkit for rapid development while also achieving accurate, high-performance inference results.

TensorFlow developers can now take advantage of OpenVINO optimizations with TensorFlow inference applications across a wide range of Intel compute devices by adding just two lines of code. In addition to introducing OpenVINO and its capabilities, the webinar will include demonstrations of the concepts discussed via a code walk-through of a sample application. It will be presented by Kumar Vishwesh, Technical Product Manager, and Ragesh Hajela, Senior AI Engineer, both of Intel. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
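As a preview, the “two lines” refer to Intel’s openvino-tensorflow add-on package. The sketch below shows how they might slot into an ordinary TensorFlow application; the backend name and the MobileNetV2 example are illustrative assumptions, not taken from the webinar itself.

```python
import tensorflow as tf

# The two added lines: load the OpenVINO integration add-on and pick
# the Intel device that should execute supported subgraphs.
import openvino_tensorflow
openvino_tensorflow.set_backend("CPU")  # e.g. "GPU", "MYRIAD" or "VAD-M"

# Everything below is ordinary TensorFlow; compatible operations are
# transparently offloaded to the OpenVINO runtime on the chosen device.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
image = tf.random.uniform((1, 224, 224, 3))  # stand-in for a real input
print(model(image).shape)  # (1, 1000) class probabilities
```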

And the following Tuesday, August 30 at 9 am PT, Edge Impulse will deliver the free webinar “Edge Impulse’s FOMO Technology and Sony’s Computer Vision Platform: A Compelling Combination” in partnership with the Edge AI and Vision Alliance. Edge Impulse’s FOMO (Faster Objects, More Objects), introduced earlier this year, is a brand-new approach to running object detection models on resource-constrained devices. This ground-breaking algorithm brings real-time object detection, tracking and counting to microcontrollers, such as Sony’s Spresense product line, for the first time. Sony’s latest multicore Spresense microcontrollers, combined with the company’s high-resolution image sensor and camera portfolios and global LTE connectivity, create robust computer vision hardware platforms.

In this webinar, you’ll learn how Edge Impulse’s software expertise and products unlock this hardware potential to deliver an optimized total solution for agriculture technology, industrial IoT, smart cities, remote monitoring and other application opportunities. The webinar will be presented by Jenny Plunkett, Senior Developer Relations Engineer at Edge Impulse, and Armaghan Ebrahimi, Partner Solutions Engineer at Sony Electronics Professional Solutions Americas. Plunkett and Ebrahimi will introduce their respective companies’ technologies and products, as well as explain how they complement each other in delivering enhanced edge machine learning, computer vision and IoT capabilities. The webinar will include demonstrations of the concepts discussed, showing how to bring to life applications that require sensor analysis, machine learning, image processing and data filtering. For more information and to register, please see the event page.
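Edge Impulse has described FOMO as replacing bounding-box regression with per-cell classification on a coarse output grid, merging confident neighboring cells into object centroids. The decoding sketch below follows that description; the 8×8-pixel cell size, threshold and function names are illustrative assumptions, and on microcontrollers the real pipeline ships as part of Edge Impulse’s generated firmware.

```python
import numpy as np

def fomo_centroids(heatmap: np.ndarray, threshold: float = 0.5, cell: int = 8):
    """Decode a FOMO-style per-cell probability map into object centroids.

    heatmap: (rows, cols, num_classes) probabilities, one entry per
    cell x cell pixel patch of the input image. Returns a list of
    (class_id, x_px, y_px, score) detections -- centroids, not boxes.
    """
    rows, cols, num_classes = heatmap.shape
    detections = []
    for c in range(num_classes):
        mask = heatmap[:, :, c] > threshold
        seen = np.zeros_like(mask)
        for r in range(rows):
            for q in range(cols):
                if not mask[r, q] or seen[r, q]:
                    continue
                # Flood-fill the connected blob of confident cells.
                stack, blob = [(r, q)], []
                seen[r, q] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                # Average cell probabilities for the score; report the
                # blob's centroid in pixel coordinates.
                ys, xs = zip(*blob)
                score = float(heatmap[list(ys), list(xs), c].mean())
                detections.append((c, cell * (np.mean(xs) + 0.5),
                                      cell * (np.mean(ys) + 0.5), score))
    return detections
```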

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

NEUROMORPHIC SENSING AND PROCESSING

Event-Based Neuromorphic Perception and Computation: The Future of Sensing and AI
We say that today’s mainstream computer vision technologies enable machines to “see,” much as humans do. We refer to today’s image sensors as the “eyes” of these machines. And we call our most powerful algorithms deep “neural” networks. In reality, the principles underlying current mainstream computer vision are completely different from those underlying biological vision. Conventional image sensors operate very differently from eyes found in nature, and there’s virtually nothing “neural” about deep neural networks.

Can we gain important advantages by implementing computer vision using principles of biological vision? Professor Ryad Benosman thinks so. Mainstream image sensors and processors acquire and process visual information as a series of snapshots recorded at a fixed frame rate, resulting in limited temporal resolution, low dynamic range and a high degree of redundancy in data and computation. Nature suggests a different approach: Biological vision systems are driven and controlled by events within the scene in view, and not – like conventional techniques – by artificially created timing and control signals that have no relation to the source of the visual information. The term “neuromorphic” refers to systems that mimic biological processes.
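A toy simulation makes the difference concrete: an event sensor reports a pixel only when its log-intensity changes by more than a contrast threshold, so a static background generates no data at all. The threshold value and frame-pair setup below are illustrative assumptions, not a model of any particular sensor.

```python
import numpy as np

def events_from_frames(prev_frame, next_frame, contrast=0.2, eps=1e-3):
    """Roughly simulate event-camera output from two intensity frames.

    A pixel emits an event only when its log-intensity change exceeds
    the contrast threshold: +1 (brighter) or -1 (darker). Unchanged
    pixels emit nothing, which is where the data reduction comes from.
    """
    delta = np.log(next_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(delta) > contrast)
    polarity = np.sign(delta[ys, xs]).astype(int)
    return list(zip(xs.tolist(), ys.tolist(), polarity.tolist()))

# A static scene with one moving bright patch: only the patch's leading
# and trailing edges produce events; the background contributes nothing.
prev = np.full((64, 64), 0.1); prev[10:20, 10:20] = 0.9
nxt = np.full((64, 64), 0.1); nxt[10:20, 12:22] = 0.9
events = events_from_frames(prev, nxt)
print(f"{len(events)} events vs {64 * 64} pixels per full frame")
```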

In this 2022 Embedded Vision Summit keynote, Professor Benosman — a pioneer of neuromorphic sensing and computing — introduces the fundamentals of bio-inspired, event-based image sensing and processing approaches, and explores their strengths and weaknesses. He shows that bio-inspired vision systems have the potential to outperform conventional, frame-based systems and to enable new capabilities in terms of data compression, dynamic range, temporal resolution and power efficiency in applications such as 3D vision, object tracking, motor control and visual feedback loops.

Are Neuromorphic Vision Technologies Ready for Commercial Use?
Neuromorphic vision (vision systems inspired by biological systems) promises to save power and improve latency in a variety of edge and endpoint applications. After many years of research and development, are these technologies ready to move out of the lab and into today’s electronic systems and products? What challenges do neuromorphic vision sensors and neuromorphic computing chips face when entering a market saturated by classical and deep-learning-driven computer vision systems, and how can these challenges be overcome? Are both neuromorphic sensors and neuromorphic processors required for success, and what is the right hardware for today’s systems? Are spiking neural networks required, and are they ready for commercial deployment? What sort of industry ecosystem will be required to enable these technologies to become widely used?
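As background for the panel’s spiking-network question: in an SNN, neurons communicate through discrete spikes whose timing carries information, rather than dense activations recomputed every frame. The leaky integrate-and-fire neuron below is a textbook illustration of the idea, not any panelist’s implementation; all constants are arbitrary.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the basic unit of a spiking net.

    Membrane potential leaks toward rest, integrates input current, and
    emits a discrete spike (then resets) when it crosses threshold --
    so information is carried by spike timing, not dense activations.
    """
    v, spike_times = v_reset, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v + i_in) / tau      # leak toward rest + integrate input
        if v >= v_thresh:
            spike_times.append(t * dt)   # record when the neuron fired
            v = v_reset                  # reset membrane after firing
    return spike_times

# A constant supra-threshold input produces a regular spike train.
print(lif_neuron(np.full(100, 2.0)))
```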

This lively 2022 Embedded Vision Summit panel discussion provides perspectives on these and other topics from a panel of seasoned experts who are working at the leading edge of neuromorphic vision development, tools and techniques. Sally Ward-Foxton, European Correspondent for EE Times, moderates; other panelists include Garrick Orchard, Research Scientist at Intel Labs, James Marshall, Chief Scientific Officer at Opteran, Ryad Benosman, Professor at the University of Pittsburgh and Adjunct Professor at the CMU Robotics Institute, and Steve Teig, Founder and CEO of Perceive.

EVALUATING AND SELECTING DEEP LEARNING MODELS

Is My Model Performing Well? It Depends…
There are many statistical metrics used to measure the performance of machine learning models. While they work well when the model itself is the final product, classical measures are often not enough when the model is a part of a more complex system. In this 2021 Embedded Vision Summit presentation, Vladimir Haltakov, Self-Driving Car Engineer at BMW Group, focuses on products where machine learning models do not generate the final output. The challenge in these cases is to define a metric that accurately captures the effect of different failures of the model on the final system performance. This becomes even more difficult when there are conflicting requirements within the system. Haltakov discusses strategies to deal with these problems and shares insights from real-world examples.
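Haltakov’s point can be illustrated with a hypothetical example (names and cost values below are assumptions, not from the talk): two models with identical accuracy can perform very differently at the system level when failure modes carry asymmetric downstream costs.

```python
# Hypothetical illustration: score a detector by the downstream cost its
# failures impose on the full system, rather than by raw accuracy. A
# missed object may be far costlier to the overall product than a
# phantom detection that merely triggers a benign response.
FAILURE_COST = {"false_negative": 100.0, "false_positive": 1.0}

def system_level_cost(predictions, ground_truth):
    """Cost-weighted metric: same error counts can rank very differently
    than plain accuracy when failure modes have unequal consequences."""
    cost = 0.0
    for pred, truth in zip(predictions, ground_truth):
        if truth and not pred:
            cost += FAILURE_COST["false_negative"]
        elif pred and not truth:
            cost += FAILURE_COST["false_positive"]
    return cost / len(ground_truth)

# Two models, each with two errors (identical 4/6 accuracy), but with
# very different impact on the final system:
truth   = [1, 1, 0, 0, 0, 0]
model_a = [1, 0, 1, 0, 0, 0]   # one miss, one false alarm
model_b = [1, 1, 1, 1, 0, 0]   # no misses, two false alarms
print(system_level_cost(model_a, truth))  # (100 + 1) / 6 ~= 16.8
print(system_level_cost(model_b, truth))  # (1 + 1) / 6   ~= 0.33
```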

Applying the Right Deep Learning Model with the Right Data for Your Application
Deep learning has made a huge impact on a wide variety of computer vision applications. But while the capabilities of deep neural networks are impressive, understanding how to best apply them is not straightforward. In this 2021 Embedded Vision Summit talk, Hila Blecher-Segev, Computer Vision and AI Research Associate at Vision Elements, highlights key questions that must be answered when considering incorporating a deep neural network into a vision application. What type of data will be most beneficial for the task? Should the DNN use other types of data in addition to images? How should the data be annotated? What classes should be defined? What is the minimum amount of data needed for the network to be generalized and robust? What algorithmic approach should we use for our task (classification, regression or segmentation)? What type of network should we choose (FCN, DCNN, RNN, GAN)? Blecher-Segev explains the options and trade-offs, and maps out a process for making good choices for a specific application.
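One way to see why these questions must be settled early: the task type chosen up front dictates the annotation effort, the network’s output head and the training loss. The decision table below is a simplified, assumed summary for three common task types, not material from the talk.

```python
# Illustrative decision table (assumed, simplified): the task choice
# drives annotation cost, output head and loss before any training.
TASK_CHOICES = {
    # task             annotation needed         output head           loss
    "classification": ("one label per image",   "dense + softmax",    "cross-entropy"),
    "regression":     ("one number per image",  "dense, linear",      "mean squared error"),
    "segmentation":   ("per-pixel masks",       "conv decoder (FCN)", "per-pixel cross-entropy"),
}

def describe(task: str) -> str:
    annotation, head, loss = TASK_CHOICES[task]
    return f"{task}: annotate {annotation}; head = {head}; train with {loss}"

for task in TASK_CHOICES:
    print(describe(task))
```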

UPCOMING INDUSTRY EVENTS

Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code – Intel Webinar: August 25, 2022, 9:00 am PT

Edge Impulse’s FOMO Technology and Sony’s Computer Vision Platform: A Compelling Combination – Edge Impulse Webinar: August 30, 2022, 9:00 am PT

More Events

FEATURED NEWS

Sequitur Labs Provides Chip-to-Cloud Embedded Security in Support of New NVIDIA Jetson AGX Orin Platform

Edge Impulse Releases Deployment Support for BrainChip Akida Neuromorphic Processor IP Core

Ambarella Partners with Inceptio Technology to Deliver Level 3 Autonomous Trucking, Including Surround Camera and Front ADAS Perception With AI Compute

STMicroelectronics’ New Inertial Modules Enable AI Training Inside the Sensor

Imagination Technologies Launches the IMG RTXM-2200, Its First Real-time Embedded RISC-V CPU

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Grabango Checkout-free Technology (Best Enterprise Edge AI End Product)
Grabango’s Checkout-free Technology is the 2022 Edge AI and Vision Product of the Year Award winner in the Enterprise Edge AI End Product category. Grabango’s system, built for existing top-tier grocery and convenience stores, is pure computer vision (CV) based on machine learning with targeted retraining (AI). While a handful of other startups have demonstrated basic proofs of concept, Grabango has leapt ahead, announcing five mature stores with Giant Eagle, six stores with Circle K and ten stores with bp, with additional announcements pending from two more convenience chains and two major grocery chains. All of these stores are fast retrofits serving the same customers as before installation. Since going live, Grabango remains the only provider delivering high-volume operations in true retrofit settings with exceptionally accurate receipts.

Please see here for more information on Grabango’s Checkout-free Technology. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.



Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596
Phone: +1 (925) 954-1411