
Edge AI and Vision Insights: April 14, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Today hundreds of companies are developing embedded vision and visual AI building-block technologies—such as processors, algorithms and camera modules—and thousands of companies are creating systems and solutions incorporating these technologies. With so many companies in the space, and new companies entering constantly, it has become difficult to find the companies that match a specific profile or need. The Edge AI and Vision Alliance, in partnership with Woodside Capital Partners, has created the Embedded Vision and Visual AI Industry Map to address this challenge.

The Industry Map is a free-to-use tool that provides an efficient way to understand how hundreds of companies fit into the vision industry ecosystem. It is an interactive, visual database that displays companies within different layers of the vision value chain and in specific end-application markets. It covers the entire embedded vision and visual AI value chain, from sub-components to complete systems. From nearly nine years of immersion in the embedded vision and visual AI industry, we know that the right company-to-company partnerships are essential. We're excited to provide this Industry Map to help people efficiently find the companies they want to partner with, and to help companies make themselves more visible. Check it out at www.edge-ai-vision.com/resources/industrymap; we welcome your feedback!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

FACIAL ANALYSIS

AI-powered Identity: Evaluating Face Recognition Capabilities (University of Houston)
Following the deep learning renaissance, the face recognition community has achieved remarkable results when comparing images that are both frontal and non-occluded. However, significant challenges remain in the presence of variations in pose, expression, illumination and occlusion. This presentation from Ioannis Kakadiaris, Distinguished University Professor of Computer Science at the University of Houston, highlights the state of the art in face recognition and provides insights on how to properly evaluate and select face recognition modules for embedded systems.

Eye Tracking for the Future: The Eyes Have It (Parallel Rules)
Eye interaction technologies complement augmented and virtual reality head-mounted displays. In this presentation, Peter Milford, President of Parallel Rules, reviews eye tracking technology, concentrating mainly on camera-based solutions and associated system requirements. Wearable eye tracking use cases include foveated rendering, interpupillary distance measurement, gaze tracking and user interface control.

DEEP LEARNING DEVELOPMENT TOOLS

Deploying Deep Learning Models on Embedded Processors for Autonomous Systems with MATLAB (MathWorks)
In this presentation, Sandeep Hiremath, Product Manager, and Bill Chou, Senior Computer Vision Scientist, both of MathWorks, explain how to bring the power of deep neural networks to memory- and power-constrained devices like those used in robotics and automated driving. The workflow starts with algorithm design in MATLAB, whose expressive power and ease of use have made it widely popular among engineers and scientists. The algorithm may combine deep learning networks with traditional computer vision techniques, and can be tested and verified within MATLAB. Next, the networks are trained using MATLAB's GPU and parallel computing support, whether on the desktop, on a local compute cluster or in the cloud. In the deployment phase, code generation tools automatically produce optimized code for targets such as NVIDIA Jetson and DRIVE AGX Xavier embedded GPUs, Intel-based CPU platforms and Arm-based embedded platforms. The generated code leverages target-specific libraries that are highly optimized for each target's architecture and memory model.
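As a rough illustration of the code generation step described above, here is a minimal sketch using GPU Coder's documented entry-point pattern. The network file name (trainedNet.mat), the 224x224x3 single-precision input size and the cuDNN target are illustrative assumptions, not details from the presentation.

    % my_predict.m -- entry-point function for code generation
    % (sketch; assumes a network trained in MATLAB was saved to trainedNet.mat)
    function out = my_predict(in)
    persistent net;                                  % load the network only once
    if isempty(net)
        net = coder.loadDeepLearningNetwork('trainedNet.mat');
    end
    out = predict(net, in);                          % run inference on the input
    end

    % From the MATLAB prompt: generate CUDA code as a static library that calls
    % NVIDIA's cuDNN, suitable for embedded GPU targets such as Jetson boards.
    cfg = coder.gpuConfig('lib');
    cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
    codegen -config cfg my_predict -args {ones(224,224,3,'single')}

For Arm CPU targets, the same entry point can instead be compiled with MATLAB Coder against the Arm Compute Library, by using coder.config('lib') with coder.DeepLearningConfig('arm-compute').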

Tools and Techniques for Optimizing DNNs on Arm-based Processors with Au-Zone's DeepView ML Toolkit (Au-Zone Technologies)
In this presentation, Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, describes methods and tools for developing, profiling and optimizing neural network solutions for deployment on Arm MCUs, CPUs and GPUs using Au-Zone's DeepView ML Toolkit. He explains why optimization is needed for efficient deployment of deep learning models, and highlights the specific challenges of profiling and optimizing models for deployment in cost- and energy-constrained systems. Taylor shows how Au-Zone's DeepView tools can be used in conjunction with Arm's Streamline tools to gain detailed insights into the performance of neural networks on Arm-based SoCs. Using a facial recognition solution as an example, he explores how to evaluate, profile and optimize deep learning models on a Cortex-M7 MCU, a Cortex-A73/A53 big.LITTLE CPU and a Mali-G71 GPU.

FEATURED NEWS

CEVA Announces a High Performance Sensor Hub DSP Architecture

BrainChip Introduces an Event-based Neural Network IP and NSoC Device

Perceive Corporation Launches to Deliver Data Center-Class Accuracy and Performance at Ultra-Low Power for Consumer Devices

Allied Vision’s Alvium Camera Kit for the NVIDIA Jetson Nano Developer Kit is Now Available

MVTec Presents New and Optimized Features for Machine Vision with HALCON 20.05

More News

