
Edge AI and Vision Insights: February 24, 2021 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Tomorrow, February 25, 2021 at 9 am PT, Edge Impulse will deliver the free webinar “How to Rapidly Build Robust Data-driven Embedded Machine Learning Applications,” in partnership with the Edge AI and Vision Alliance. In this in-depth tutorial, you’ll learn how to build robust, commercial-grade embedded machine learning (ML) applications, ranging from simple sensor configurations to powerful computer vision deployments. Topics covered in the webinar will include:

  • Assembling datasets
  • Designing and training highly accurate models
  • Implementation optimization, and
  • Integration with the remainder of your edge device

The webinar will include live demonstrations of the concepts discussed, and is co-presented by Zach Shelby, CEO and co-founder, and Daniel Situnayake, founding ML engineer, both of Edge Impulse. For more information and to register, please see the event page.
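For a concrete feel for the optimization step in that workflow, here is a minimal, generic sketch using TensorFlow Lite post-training quantization. The tiny model and random calibration data below are illustrative placeholders, not Edge Impulse’s actual tooling:

```python
# Minimal sketch of the "optimize for embedded deployment" step:
# post-training int8 quantization with TensorFlow Lite. The model
# and "representative" data here are placeholders for illustration.
import numpy as np
import tensorflow as tf

# A deliberately small model, as is typical for microcontroller targets.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
# (Training on your assembled dataset would happen here.)

def representative_data():
    # Yield a few samples so the converter can calibrate int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())  # flatbuffer ready to embed in firmware
```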

And on Thursday, March 11, 2021 at 9 am PT, GrAI Matter Labs will deliver the free webinar “Brain-inspired Processing Architecture Delivers High Performance, Energy-efficient, Cost-effective AI” in partnership with the Alliance. If a picture is worth a thousand words and a video is worth a million, how do you enable ultra-fast, efficient, and cost-effective edge AI for a world immersed in video? GrAI Matter Labs has developed a processor that delivers the fastest AI per Watt using a technology called NeuronFlow. NeuronFlow is the product of brain-inspired computing, with scalable digital technology that is easily programmable to enable real-time response systems with minimal power consumption, size and cost. In this session, GrAI Matter Labs VP Mahesh Makhijani will present a number of use cases enabled by the company’s partners using its NeuronFlow technology. For more information and to register, please see the event page.
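The newsletter does not describe NeuronFlow’s programming model, so purely as a conceptual illustration of why event-driven (“fire only on change”) processing saves work on video, here is a toy Python sketch that updates only the neurons whose inputs changed between frames; all names and numbers are hypothetical, and this is not GrAI Matter Labs’ API:

```python
# Toy illustration of event-driven processing, the principle behind
# brain-inspired architectures: recompute only what changed between
# frames. Conceptual sketch only, not GrAI Matter Labs' actual API.
import numpy as np

def dense_layer(frame, weights):
    # Conventional approach: recompute every output for every frame.
    return frame.ravel() @ weights

def event_driven_update(prev_frame, frame, weights, prev_out, threshold=0.01):
    # Event-driven approach: propagate only the pixels that changed,
    # updating the previous output incrementally. Work scales with the
    # number of changed inputs, which is small for typical video.
    delta = (frame - prev_frame).ravel()
    changed = np.nonzero(np.abs(delta) > threshold)[0]
    return prev_out + delta[changed] @ weights[changed, :], changed.size

rng = np.random.default_rng(0)
w = rng.standard_normal((32 * 32, 16))
f0 = rng.random((32, 32))
f1 = f0.copy()
f1[:2, :2] += 0.5  # only a 2x2 patch changes between frames

out0 = dense_layer(f0, w)
out1, n_events = event_driven_update(f0, f1, w, out0)
print(f"updated {n_events} of {32 * 32} inputs")  # 4 of 1024
print(np.allclose(out1, dense_layer(f1, w)))      # True: same result, far less work
```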

We’re pleased to announce our keynote speaker for the 2021 Embedded Vision Summit, the premier conference for innovators adding computer vision and visual AI to products, taking place online May 25-27. UC Berkeley Professor Pieter Abbeel is Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence Research (BAIR) Lab. Abbeel is a scholar, entrepreneur, researcher, worldwide speaker, and multi-award winner. His presentation, “From Inference to Action: AI Beyond Pattern Recognition,” is a can’t-miss event! Learn more about Abbeel and his keynote session, along with the other exciting planned presentations and additional activities at the Summit, and then register today with promo code SUPEREARLYBIRD21 to receive your 25%-off Super Early Bird Discount (good through this Friday, February 26)!


The Alliance is now accepting applications for the 2021 Edge AI and Vision Product of the Year Awards competition, with the final round taking place live online during the Embedded Vision Summit. The Edge AI and Vision Product of the Year Awards (an expansion of previous years’ Vision Product of the Year Awards) celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes your leadership in edge AI and computer vision as evaluated by independent industry experts. The application deadline is Friday, March 19; for more information and to enter, please see the program page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

EVOLVING INDUSTRY STANDARDS

Democratizing Computer Vision and Machine Learning with Open, Royalty-Free Standards: OpenVX
OpenVX is a mature computer vision and machine learning API from the Khronos Group, developed as an open, royalty-free standard for cross-platform acceleration. Real-world computer vision and machine learning applications are still in their infancy, and there are many efforts to monopolize this landscape with proprietary hardware and software solutions. Developers need an open, royalty-free API that enables them to deploy their applications on any hardware without worrying about portability and performance optimizations. This would help democratize the application sphere and accelerate progress in the industry. In this talk, Kiriti Nagesh Gowda, staff engineer in the Machine Learning and Computer Vision Group at AMD and chair of the Khronos OpenVX working group, presents the latest features in OpenVX 1.3 and how these features are being leveraged by OpenVX adopters. He also clears up some misconceptions about OpenVX adoption and usability. In addition, through his analysis of implementations, you will learn about the performance, portability and memory footprint advantages of OpenVX via open-sourced samples.

Khronos Standard APIs for Accelerating Vision and Inferencing
The landscape of processors and tools for accelerating inferencing and vision applications continues to evolve rapidly. Khronos standards, such as OpenCL, OpenVX, SYCL and NNEF, play an increasingly central role in connecting application developers to the latest silicon—productively, efficiently and portably. In this talk, Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, provides an overview and the latest updates on Khronos standards relevant for machine learning and computer vision, and previews how they are likely to evolve in the future.

EDGE INFERENCE PROCESSORS

Lessons Learned from the Deployment of Deep Learning Applications in Edge Devices
Edge applications have tough, widely varying requirements. In this presentation, Orr Danon, Founder and CEO of Hailo, shares lessons learned from three real-world applications built on Hailo’s deep learning processor. First, he examines a video analytics use case, where multiple video streams are processed at the edge in real time. Processing performance demands are substantial, and power consumption is a key constraint. Next, he explores an industrial inspection application, where visual inspection informs the control of physical devices, resulting in a need for low latency. Finally, he presents a smart cities application, where network bandwidth is a key constraint, driving the need for processing near the sensor.

Data Center-Class Inference in Edge Devices at Ultra-Low Power
To date, people seeking to deploy machine learning-based inference within consumer electronics have had only two choices, both unattractive, according to Steve Teig, CEO of Perceive, in this presentation. The first option entails transmitting voluminous raw data, such as video, to the cloud, potentially violating customers’ privacy, tempting hackers, and incurring substantial energy, monetary, and latency costs. The second option runs at the edge, but on severely limited hardware, which can implement only tiny, inaccurate neural networks (e.g., MobileNet) and runs even those tiny networks at low frame rates. Solving this dilemma, Perceive’s new chip, Ergo, runs large, advanced neural networks at high speed for imaging, audio, language, and other applications inside edge devices without any off-chip RAM. Even large networks, such as YOLOv3 with more than 64 million weights, can run at ~250 fps with batch size 1. Moreover, Ergo can run YOLOv3 at 30 fps while consuming about 20 mW.
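To put that last figure in perspective, the quoted numbers imply well under a millijoule of energy per inference; a quick back-of-envelope check using only the figures above:

```python
# Back-of-envelope energy per inference from the figures quoted above.
power_w = 0.020        # ~20 mW while running YOLOv3
frames_per_s = 30      # at 30 fps
energy_per_frame_j = power_w / frames_per_s
print(f"{energy_per_frame_j * 1e3:.2f} mJ per frame")  # ~0.67 mJ
```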

UPCOMING INDUSTRY EVENTS

How to Rapidly Build Robust Data-driven Embedded Machine Learning Applications – Edge Impulse Webinar: February 25, 2021, 9:00 am PT

Brain-inspired Processing Architecture Delivers High Performance, Energy-efficient, Cost-effective AI – GrAI Matter Labs Webinar: March 11, 2021, 9:00 am PT

Selecting and Combining Deep Learning Model Optimization Techniques for Enhanced On-device AI – Nota Webinar: March 23, 2021, 9:00 am PT

Enabling Small Form Factor, Anti-tamper, High-reliability, Fanless Artificial Intelligence and Machine Learning – Microchip Technology Webinar: March 25, 2021, 9:00 am PT

Optimizing a Camera ISP to Automatically Improve Computer Vision Accuracy – Algolux Webinar: March 30, 2021, 9:00 am PT

More Events

FEATURED NEWS

Renesas Develops Automotive SoC Functional Safety Technologies for CNN Accelerator Cores and ASIL D Control Combining Performance and Power Efficiency

Vision Components’ VC picoSmart, Only Slightly Larger Than a Typical Image Sensor Module, Contains All Components Necessary for Image Processing

Europe-based Unikie Joins the Edge AI and Vision Alliance to Extend Its Technology Reach to the U.S. and Other Global Markets

Videantis Passes Milestone, Enabling 10 Million Production Vehicles

The Arm Ecosystem Ships 6.7 Billion Arm-based Chips in a Single Quarter

More News

VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Intel DevCloud for the Edge (Best Developer Tool)
Intel’s DevCloud for the Edge is the 2020 Vision Product of the Year Award winner in the Developer Tools category. The Intel DevCloud for the Edge allows you to virtually prototype and experiment with AI workloads for computer vision on the latest Intel edge inferencing hardware, with no hardware setup required: you work entirely from your web browser, and your code executes remotely on Intel-hosted hardware. You can test the performance of your models using the Intel Distribution of OpenVINO Toolkit and combinations of CPUs, GPUs, VPUs and FPGAs. The site also contains a series of tutorials and examples preloaded with everything needed to quickly get started, including trained models, sample data and executable code from the Intel Distribution of OpenVINO Toolkit as well as other deep learning tools. Please see here for more information on Intel and its DevCloud for the Edge.
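For a flavor of the kind of experiment DevCloud supports, here is a minimal sketch using the OpenVINO Inference Engine Python API of that era; the model file names and dummy input are placeholders, and switching the device name is how you would compare hardware targets:

```python
# Minimal sketch of running an OpenVINO IR model with the Inference
# Engine Python API (OpenVINO 2020/2021-era). File names are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Swap device_name for "GPU", "MYRIAD" (VPU), etc. to compare hardware --
# which is exactly what DevCloud lets you do without owning the devices.
exec_net = ie.load_network(network=net, device_name="CPU")

# A dummy input with the network's expected shape.
shape = net.input_info[input_name].input_data.shape
result = exec_net.infer({input_name: np.zeros(shape, dtype=np.float32)})
print(result[output_name].shape)
```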

 
