
Edge AI and Vision Insights: April 3, 2024

LETTER FROM THE EDITOR
Dear Colleague,

Qualcomm Deep Dive

We’re excited to announce that Qualcomm will be offering an in-depth Deep Dive session on Tuesday afternoon, May 21 at the Embedded Vision Summit. In this workshop, Qualcomm will address the common challenges faced by developers migrating AI workloads from workstations to edge devices. Qualcomm simplifies this transition by supporting familiar frameworks and providing tools to optimize performance and power consumption. You’ll learn how to deploy optimized models using the Qualcomm AI Hub, streamlining the process and bringing your AI applications to life in minutes.
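To give a sense of that workflow, here is a minimal sketch using Qualcomm AI Hub's qai_hub Python package to compile a traced PyTorch model for a target device and profile it on real hardware. The device name and exact keyword arguments are illustrative and may vary by package version; consult the AI Hub documentation for the current API.

```python
import torch
import torchvision
import qai_hub as hub  # Qualcomm AI Hub client package

# Trace a stock PyTorch model so it can be submitted for compilation.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

# Compile for a specific target device in Qualcomm's cloud device farm.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=hub.Device("Samsung Galaxy S23"),  # illustrative device name
    input_specs=dict(image=(1, 3, 224, 224)),
)

# Profile the compiled model on the same physical device to get
# on-target latency and memory numbers before shipping.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=hub.Device("Samsung Galaxy S23"),
)
```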

This session is ideal for machine learning engineers and AI/ML developers creating Android, Windows, and IoT/AIoT applications. Bring your curiosity and questions — but note that separate registration is required! Speaking of which, register now for the Summit, taking place May 21-23 in Santa Clara, California, using code SUMMIT24-NL for a 15% discount on your conference pass.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEEP LEARNING TUTORIALS AND ADVANCED TECHNIQUES

An Introduction to Computer Vision with CNNs
This 2023 Embedded Vision Summit presentation covers the basics of computer vision using convolutional neural networks. Independent consultant Mohammad Haghighat begins by introducing some important conventional computer vision techniques, then transitions to the basics of machine learning and convolutional neural networks (CNNs), showing how CNNs are used in visual perception. Haghighat illustrates the building blocks and computational elements of neural networks through examples. This talk provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
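To make those building blocks concrete, here is a small PyTorch sketch (our illustration, not taken from the talk) showing the typical convolution, activation, pooling and fully connected layers of an image classifier:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal image classifier: two conv/ReLU/pool blocks plus a linear head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learned local filters
            nn.ReLU(),                                   # nonlinearity
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # For 32x32 inputs, two 2x poolings leave an 8x8 feature map.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))  # flatten feature maps, classify

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # -> tensor of shape (1, 10)
```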

Efficient Many-function Video Machine Learning at the Edge
Video streams are so rich, and video workloads are so sophisticated, that we may now expect video machine learning (ML) to supply many simultaneous insights and transformations. It will be increasingly common to need video segmentation, object and motion recognition, SLAM, 3D model extraction, relighting, avatarization and neural compression in parallel. Conventionally, this combination would overwhelm edge compute resources, but novel multi-headed ML models and unified video pipelines make this feasible on existing personal devices and embedded compute subsystems. In this 2023 Embedded Vision Summit talk, Chris Rowen, Vice President of AI Engineering for Webex Collaboration at Cisco Systems, discusses the goals for advanced video intelligence in secure, edge-powered video communications, and shows how new model structures can achieve very high accuracy, resolution and frame rate at low cost per function. He also discusses improved objective and subjective quality metrics, training set synthesis and his company’s optimized portable edge implementation methodology. Rowen wraps up with some observations on the challenges of even larger video workloads at the edge.
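As a rough sketch of the multi-headed idea (our illustration, not Cisco's actual architecture), a single shared backbone can feed several lightweight task heads, so the expensive per-frame feature extraction is paid only once:

```python
import torch
import torch.nn as nn

class MultiHeadVideoNet(nn.Module):
    """One shared backbone, several task heads: features computed once per frame."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(  # shared, and typically the expensive part
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 2, 1)     # e.g., person/background segmentation
        self.motion_head = nn.Conv2d(64, 2, 1)  # e.g., coarse per-pixel motion
        self.embed_head = nn.Sequential(        # e.g., embedding for recognition
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128),
        )

    def forward(self, frame: torch.Tensor):
        f = self.backbone(frame)  # one forward pass shared by all heads
        return self.seg_head(f), self.motion_head(f), self.embed_head(f)

seg, motion, emb = MultiHeadVideoNet()(torch.randn(1, 3, 256, 256))
```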

HARDWARE AND SOFTWARE INTERFACE ENHANCEMENTS

MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems
As computer vision applications continue to evolve rapidly, there’s a growing need for a smarter standardized interface connecting multiple image sensors to processors for real-time perception and decision-making. In this 2023 Embedded Vision Summit talk, Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, provides a deep dive into the latest version (v4.0) of the widely implemented CSI-2 interface. This new version includes key features specifically designed to support computer vision applications, including democratized Smart Region of Interest, Always-On Sentinel Conduit, Multi-Pixel Compression, and Latency Reduction and Transport Efficiency. These novel features enable sophisticated machine awareness with reduced system power and processing needs, making them well suited for consumer, commercial and infrastructure platforms.
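Back-of-the-envelope arithmetic (ours, with hypothetical figures, not from the talk) shows why a feature like Smart Region of Interest matters at the system level: streaming only the pixels a downstream model needs can cut link bandwidth, and hence power, by more than an order of magnitude.

```python
# Hypothetical figures: a 4K, 10-bit, 30 fps sensor vs. a 640x480 ROI readout.
full_w, full_h, bits_per_pixel, fps = 3840, 2160, 10, 30
roi_w, roi_h = 640, 480

full_bw = full_w * full_h * bits_per_pixel * fps / 1e9  # Gbit/s, full frames
roi_bw = roi_w * roi_h * bits_per_pixel * fps / 1e9     # Gbit/s, ROI only

print(f"full frame: {full_bw:.2f} Gbit/s, ROI only: {roi_bw:.2f} Gbit/s")
# full frame: 2.49 Gbit/s, ROI only: 0.09 Gbit/s (raw payload, ignoring protocol overhead)
```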

Building Accelerated GStreamer Applications for Video and Audio AI
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms often demand substantial processing performance, developers frequently need to accelerate key GStreamer building blocks by taking advantage of specialized features of their target processor or co-processor. In this 2023 Embedded Vision Summit presentation, Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
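For readers new to GStreamer, the minimal PyGObject example below builds and runs a simple pipeline from a textual description. On an accelerated target you would swap the software elements (videotestsrc, videoconvert) for the platform's hardware-backed equivalents; element names vary by vendor, so treat this as a generic starting point.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# All-software pipeline: test source -> colorspace conversion -> sink.
# 120 buffers at the default rate gives a short, self-terminating run.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=120 ! videoconvert ! "
    "video/x-raw,format=RGB,width=640,height=480 ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or error, then release pipeline resources.
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```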

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

e-con Systems Launches a New High Resolution Global Shutter Camera for Precision Imaging at High Frame Rates

Intel Announces a New Enablement Program for AI PC Software Developers and Hardware Vendors

Efinix Unveils a New Line of FPGAs to Accelerate and Adapt Automotive Designs and Applications

Blaize Releases the Picasso Analytics Framework and Toolkit

Vision Components’ New IMX900 3.2 Megapixel MIPI Camera and Updated Power SoM Accelerator Enhance Its Embedded Vision Offerings

More News

EMBEDDED VISION SUMMIT SPONSOR SHOWCASE

Attend the Embedded Vision Summit to meet these and other leading computer vision and edge AI technology suppliers!

Network Optix
Network Optix is revolutionizing the computer vision landscape with an open development platform that’s far more than just IP software. Nx Enterprise Video Operating System (EVOS) is a video-powered, data-driven operational management system for any type of organization: an infinitely scalable, closed-loop, self-learning business operational intelligence and execution platform, and an operating system for every vertical market. Just add video.

Qualcomm
Qualcomm is enabling a world where everyone and everything can be intelligently connected. Our one technology road map allows us to efficiently scale the technologies that launched the mobile revolution—including advanced connectivity, high-performance, low-power compute, on-device intelligence and more—to the next generation of connected smart devices across industries. Innovations from Qualcomm and our family of Snapdragon platforms will help enable cloud-edge convergence, transform industries, accelerate the digital economy and revolutionize how we experience the world, for the greater good.

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Conservation X Labs Sentinel Smart Camera (Best Consumer Edge AI End Product)
Conservation X Labs’ Sentinel Smart Camera is the 2023 Edge AI and Vision Product of the Year Award winner in the Consumer Edge AI End Products category. The Sentinel Smart Camera is an AI-enabled field monitoring system that helps researchers better understand and protect wildlife, and the people working alongside it in the field. Sentinel is the hardware and software base of a fully integrated AI camera platform for wildlife conservation and field research. Traditionally, remote-camera solutions are challenged by harsh conditions, limited access to power, and data transmission constraints, often making it difficult to access information in an actionable timeframe. Sentinel applies AI to modern sensors and connectivity to deploy a faster, longer-running, more effective option straight out of the box. Running onboard detection algorithms, Sentinel doesn’t just passively collect visual data; it can autonomously detect and address the greatest threats on the frontlines of the biodiversity crisis, including poaching and wildlife trafficking, invasive species, and endangered species. This robust technology gives conservationists real-time information on events in the wild and the ability to respond to these threats through smart, data-driven decisions.

Please see here for more information on Conservation X Labs’ Sentinel Smart Camera. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.
