
Edge AI and Vision Insights: January 24, 2024 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Our team is hard at work on the 2024 Embedded Vision Summit program and we’re delighted to announce our first set of speakers:

  • Craig Buntin (SPORTLOGiQ)
  • Toly Kotlarsky (Zebra Technologies)
  • Amit Mate (GMAC Intelligence)
  • Harro Stokman (Kepler Vision Technologies)
  • Himanshu Vajaria (365 Retail Markets) and
  • Rutger Vrijen (McKinsey & Company)

A great start to a program that will grow to more than 100 sessions! Registration for the Summit is now open, and you can save 25% with code SUMMIT24-SEB – the best price you’ll be able to get on the Embedded Vision Summit, taking place May 21-23 in Santa Clara, California. Register now and share the news with your colleagues!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEEP LEARNING MODEL OPTIMIZATION

A Survey of Model Compression Methods
One of the main challenges when deploying computer vision models to the edge is optimizing the model for speed, memory and energy consumption. In this presentation, Rustem Feyzkhanov, Staff Machine Learning Engineer at Instrumental, provides a comprehensive survey of model compression approaches, which are crucial for harnessing the full potential of deep learning models on edge devices. Feyzkhanov explores pruning, weight clustering and knowledge distillation, explaining how these techniques work and how to use them effectively. He also examines inference frameworks, including ONNX, TFLite and OpenVINO. Feyzkhanov discusses how these frameworks support model compression and explores the impact of hardware considerations on the choice of framework. He concludes with a comparison of the techniques presented, considering implementation complexity and typical efficiency gains.
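
As a taste of one technique Feyzkhanov covers, here is a minimal sketch of unstructured magnitude pruning using PyTorch’s built-in pruning utilities. This is our illustration, not code from the presentation, and the talk itself is framework-agnostic:

```python
# Minimal magnitude-pruning sketch (PyTorch). Illustrative only: the talk
# surveys several compression methods; this shows just unstructured pruning.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

# Zero out the 50% of weights with the smallest magnitude in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Layer 0 sparsity: {sparsity:.0%}")
```

Note that pruned weights are zeroed rather than removed, so realizing actual speed and memory gains typically requires structured pruning or a sparsity-aware inference runtime – one reason the choice of framework matters.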

Practical Approaches to DNN Quantization
Convolutional neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run these models on resource-constrained devices. Quantization modifies CNNs to use smaller data types (e.g., switching from 32-bit floating-point values to 8-bit integer values), reducing the computation and memory bandwidth requirements of these models, as well as their memory footprints, and making it easier to run them on edge devices. However, quantization can degrade the accuracy of CNNs. In this talk, Dwith Chenna, Senior Embedded DSP Engineer for Computer Vision at Magic Leap, surveys practical techniques for CNN quantization and shares best practices, tools and recipes to enable you to get the best results from quantization, including ways to minimize accuracy loss.
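
To make the floating-point-to-integer mapping concrete, here is a small NumPy sketch of the affine quantization arithmetic that underlies INT8 inference. This is a generic illustration, not material from the talk; production toolchains (TFLite, PyTorch, ONNX Runtime) calibrate scale and zero-point per tensor or per channel:

```python
# Sketch of affine INT8 quantization arithmetic: map the observed float
# range [min, max] onto the signed 8-bit range [-128, 127] and back.
import numpy as np

def quantize_int8(x: np.ndarray):
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min) / 255.0
    zero_point = np.round(-128 - x_min / scale)  # so that x_min -> -128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(1000).astype(np.float32)
q, scale, zp = quantize_int8(x)
err = np.abs(x - dequantize(q, scale, zp)).max()
print(f"max quantization error: {err:.4f} (scale={scale:.4f})")
```

The rounding error introduced here, accumulated across every layer, is the source of the accuracy loss that the talk’s best practices aim to minimize.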

LIDAR IN COMPUTER VISION

Introduction to Modern LiDAR for Machine Perception
In this talk, Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work and their main advantages and disadvantages. He also introduces different approaches to LiDAR, including scanning and flash LiDAR. Laganière explores the types of data produced by LiDAR sensors and explains how this data can be processed using deep neural networks. He also examines the synergy between LiDAR and cameras, and the concept of pseudo-LiDAR for detection.
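
As one concrete example of how LiDAR data can be fed to a deep neural network, the sketch below rasterizes a point cloud into a bird’s-eye-view height map suitable for a 2D convolutional network. This is our illustration, not code from the talk, and the range and cell-size parameters are arbitrary:

```python
# Illustrative sketch: rasterize a LiDAR point cloud (N x 3 array of x, y, z
# in meters) into a bird's-eye-view grid, a common preprocessing step
# before applying a convolutional network.
import numpy as np

def points_to_bev(points, x_range=(0, 70), y_range=(-35, 35), cell=0.25):
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((nx, ny), dtype=np.float32)

    # Keep only points inside the region of interest.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    p = points[mask]

    ix = ((p[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((p[:, 1] - y_range[0]) / cell).astype(int)
    # Cell value = highest point height seen in that cell (floor of 0
    # comes from the zero initialization - a simplification).
    np.maximum.at(bev, (ix, iy), p[:, 2])
    return bev

cloud = np.random.uniform([0, -35, -2], [70, 35, 3], size=(100_000, 3))
print(points_to_bev(cloud).shape)  # (280, 280)
```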

LiDAR Technologies and Markets: What’s Changing?
LiDAR technologies and markets are changing fast. Recent years have seen rapid innovation in many diverse types of LiDAR technologies. For example, LiDAR suppliers use a variety of different optical wavelengths, beam steering mechanisms and detector types. At the same time, with volume production of autonomous vehicles still far off in the future, many LiDAR suppliers have shifted their market focus, creating new products to tackle markets such as logistics, security and smart cities. In this presentation, Florian Domengie, Senior Technology and Market Analyst at Yole Intelligence (part of the Yole Group), shares highlights from the Yole Group’s recent analysis of LiDAR technologies and markets. He explores market forecasts and identifies which LiDAR markets are growing fastest. He also examines which LiDAR technologies are best positioned to succeed in these high-growth markets today, and how the competitive landscape is evolving.

UPCOMING INDUSTRY EVENTS

Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision – e-con Systems Webinar: January 24, 2024, 9:00 am PT

Optimizing Camera Design for Machine Perception Via End-to-end Camera Simulation – Immervision Webinar: February 6, 2024, 9:00 am PT

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

FRAMOS Launches FSM:GO, Its Next-generation Embedded Sensor Module Simplifying Vision Systems Development

DEEPX Unveils Four AI Chips Intended to Transform the On-device AI Market

NVIDIA Brings Generative AI to the Masses Via Its Tensor Core GPUs, LLMs and Tools

Ambarella Launches a Comprehensive Edge AI Developer Platform

Texas Instruments Debuts New Automotive Chips Enabling Manufacturers to Create Smarter, Safer Vehicles

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Synopsys ARC NPX6 NPU IP (Best Edge AI Processor)
Synopsys’ ARC NPX6 NPU IP is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Processors category. The ARC NPX6 NPU IP is an AI inference engine optimized for the latest neural network models, including newly emerging transformers. It scales from 8 to 3,500 TOPS to meet the demands of AI applications requiring real-time compute with ultra-low power consumption, in both consumer and safety-critical automotive applications. Key innovations start with a highly optimized 4K MAC building block featuring enhanced utilization, new sparsity features and hardware support for transformer networks. Optional floating-point data types (FP16/BF16) are embedded in the existing 8-bit data paths to minimize area increase and maximize software flexibility. Scaling is accomplished with an innovative interconnect that links up to 24 4K MAC cores for 96K-MAC (440 TOPS with sparsity) single-engine performance; both the NPU IP and the new ARC MetaWare MX Development Toolkit integrate connectivity features that allow up to eight such engines to be combined, reaching up to 3,500 TOPS on a single SoC. The NPX6 also expands on the ISO 26262 functional safety features of its predecessor, the EV7x vision processor.
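
As a back-of-envelope sanity check on these figures, a MAC array’s dense throughput is simply MACs × 2 ops × clock frequency. The clock rates below are our assumptions for illustration – Synopsys does not state one here – and the sparsity uplift depends on the workload:

```python
# Back-of-envelope TOPS arithmetic for a MAC-array NPU.
# Assumed clock rates are illustrative, not from Synopsys.
# One MAC = 2 ops (multiply + accumulate).
macs_per_engine = 24 * 4096              # 24 cores x 4K MACs = 96K MACs
dense_tops = macs_per_engine * 2 * 1.3 / 1000   # at an assumed 1.3 GHz
print(f"dense: ~{dense_tops:.0f} TOPS per engine")       # ~256 TOPS
print(f"8 engines: ~{8 * dense_tops:.0f} TOPS dense")    # ~2,045 TOPS

# A single 4K MAC core at an assumed ~1 GHz gives the ~8 TOPS low end:
print(f"one core: ~{4096 * 2 * 1.0 / 1000:.0f} TOPS")
# Sparsity raises effective throughput further (Synopsys quotes 440 TOPS
# per engine with sparsity; 8 x 440 = 3,520, i.e., the ~3,500 TOPS peak).
```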

Please see here for more information on Synopsys’ ARC NPX6 NPU IP. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411