
Edge AI and Vision Insights: June 5, 2024

LETTER FROM THE EDITOR
Dear Colleague,

On Thursday, July 11, 2024 at 9 am PT, the Yole Group will deliver the free webinar “The Rise of Neuromorphic Sensing and Computing: Technology Innovations, Ecosystem Evolutions and Market Trends” in partnership with the Edge AI and Vision Alliance. The neuromorphic sensing and computing markets are projected to experience substantial growth in the coming years, reaching a combined value of $8.4B by 2034. Mobile applications will drive the neuromorphic sensing market, while data center applications will lead in the neuromorphic computing market. The neuromorphic ecosystem is maturing, with key players and startups adopting diverse strategies to capitalize on market opportunities.

Neuromorphic technologies, inspired by biological brains, offer power-efficient solutions for AI tasks, addressing the declining economic feasibility of scaling semiconductor devices. These technologies provide benefits such as low latency, high scalability, and online learning, enabling real-time edge-AI applications and addressing privacy concerns. Neuromorphic sensing options include standalone event-based sensors and hybrid sensors combining RGB and event-based pixels. Neuromorphic computing systems, featuring event-driven processing and spiking neural network algorithms, enable online learning and autonomous robotics.
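Spiking neural networks build on simple event-driven neuron models such as the leaky integrate-and-fire (LIF) neuron. As a minimal illustration of the idea (a toy sketch, not drawn from the webinar material; the parameter values are arbitrary), the neuron below integrates input current, leaks charge over time, and emits a spike only when its membrane potential crosses a threshold:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron: integrate input, leak, spike on threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # membrane potential integrates input and leaks
        if v >= threshold:
            spikes.append(1)      # emit a spike (an "event")
            v = v_reset           # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant drive produces a sparse, regular spike train; this sparsity is
# what event-driven neuromorphic hardware exploits for power efficiency.
spikes = lif_neuron([0.5] * 10)
print(spikes)  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Downstream neurons only do work when a spike arrives, which is the essence of the event-driven processing described above.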

This webinar, co-presented by Adrien Sanchez and Florian Domengie, senior technology and market analysts at the Yole Group, will delve into the latest advancements in neuromorphic sensing and computing technologies and their applications across various industries, offering insights into the future of sustainable and efficient AI processing at the edge. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEEP LEARNING MODEL OPTIMIZATION TECHNIQUES

Quantizing Convolutional Neural Networks for Practical Deployment
The fusion of artificial intelligence and edge computing is revolutionizing real-time data processing. Embedded vision and edge AI take center stage, offering unparalleled potential for accuracy and efficiency at the edge. However, the challenge lies in executing AI tasks on resource-limited edge devices. Model compression techniques, notably quantization, emerge as crucial solutions to address this complexity, optimizing computational power and memory usage. This three-part series of technical articles from Dwith Chenna, member of the technical staff and product engineer for AI inference at AMD (formerly senior computer vision engineer at Magic Leap), explores topics such as translating theory to practice, various quantization schemes and post-compression error analysis.
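The series itself covers these techniques in depth; as a taste of the core idea, here is a minimal NumPy sketch (illustrative only, not taken from the articles) of symmetric per-tensor int8 post-training quantization, along with the kind of reconstruction-error measurement used in post-compression error analysis:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to (approximate) float values."""
    return q.astype(np.float32) * scale

# Toy "layer weights": quantize, dequantize, and measure the error introduced.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
mse = float(np.mean((w - w_hat) ** 2))
print(f"scale={scale:.6f}  MSE={mse:.2e}")
```

With round-to-nearest, the per-weight error is bounded by half the quantization step (scale / 2), which is why well-conditioned weight distributions often quantize to int8 with little accuracy loss.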

A Guide to Optimizing Transformer-based Models for Faster Inference
If you’ve been keeping up with the fast-moving world of AI, you know that in recent years transformer models have taken over the state of the art in many computer vision, natural language processing, time series analysis, and other vital tasks. These models tend to be large and slow, requiring billions of operations to process inputs. Ultimately, these shortcomings threaten to degrade user experiences, raise hardware implementation costs and increase energy consumption. This tutorial from Tryolabs shows how to optimize and deploy transformer models to improve inference speed by up to 10x, a particularly beneficial improvement in embedded systems with limited memory, computing power, and battery life.
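One widely used transformer inference optimization worth understanding alongside the tutorial's techniques is key/value caching: during autoregressive decoding, keys and values for already-processed tokens are stored rather than recomputed at every step. A minimal single-head NumPy sketch (illustrative only, not from the Tryolabs tutorial; the random vectors stand in for learned projections):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(q, K, V):
    """Single-head scaled dot-product attention for one query vector."""
    scores = K @ q / np.sqrt(q.shape[0])  # (t,)
    return softmax(scores) @ V            # (d,)

d = 8
rng = np.random.default_rng(42)
K_cache, V_cache = [], []                 # grow by one row per generated token
for step in range(4):
    # Stand-ins for the newest token's query/key/value projections.
    q_new = rng.normal(size=d)
    K_cache.append(rng.normal(size=d))
    V_cache.append(rng.normal(size=d))
    # Only the new token attends over the cache: O(t) work per step
    # instead of re-running attention over the full sequence.
    out = attend(q_new, np.stack(K_cache), np.stack(V_cache))
print(out.shape)
```

The memory cost of the cache grows linearly with sequence length, which is exactly the kind of compute-for-memory trade-off that matters on memory-limited embedded systems.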

MODELS AND TOOLSETS FOR ENHANCED DEVELOPMENT

The Foundation Models Reshaping Computer Vision
Initially limited to language tasks, foundation models can now serve as the backbone of computer vision tasks such as image classification, object detection, and image segmentation, enabling machines that perceive and understand the visual world with unprecedented accuracy. With the rapid evolution and proliferation of these models, it’s useful to establish a taxonomy that categorizes and organizes them based on their architecture, capabilities and underlying principles. In this tutorial, Tenyks delves into the world of foundation models for computer vision, exploring the key concepts, techniques, and architectures that form their foundations and empower them to excel in their respective tasks.

DeepStream 7.0 Release Powers Next-generation Vision AI Development
NVIDIA’s DeepStream SDK unlocks GPU-accelerated building blocks for constructing end-to-end vision AI pipelines. With more than 40 plugins available off the shelf, developers can deploy fully optimized pipelines for advanced AI inference, object tracking, and seamless integration with popular IoT message brokers such as Redis, Kafka, and MQTT. DeepStream offers intuitive REST APIs to control AI pipelines even at the edge. The latest DeepStream 7.0 release is crafted to empower developers with brand-new capabilities in the era of generative AI. This article details the SDK’s various features designed to accelerate the development of next-generation applications.

UPCOMING INDUSTRY EVENTS

The Rise of Neuromorphic Sensing and Computing: Technology Innovations, Ecosystem Evolutions and Market Trends – Yole Group Webinar: July 11, 2024, 9:00 am PT

Who is Winning the Battle for ADAS and Autonomous Vehicle Processing, and How Large is the Prize? – TechInsights Webinar: July 24, 2024, 9:00 am PT

More Events

FEATURED NEWS

The SHD Group will Release a Complimentary Edge AI Processor and Ecosystem Report in Q3 2024 in Collaboration with the Edge AI and Vision Alliance

eYs3D Microelectronics Unveils the SenseLink Multi-camera System, Providing a Versatile Machine Vision Sensing Solution for Smart Applications

Qualcomm’s AI Developer Hub Expands to Support On-device Applications for Snapdragon-powered PCs

Ambarella’s Next-generation AI SoCs for Fleet Dash Cams and Vehicle Gateways Enable Vision Language Models and Transformer Networks Without Fan Cooling

Lattice Semiconductor Introduces an Advanced 3D Sensor Fusion Reference Design for Autonomous Applications

More News



Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411