Edge AI and Vision Insights: February 7, 2024 Edition

LETTER FROM THE EDITOR
Dear Colleague,

On Tuesday, March 5, 2024, at 8:00 am PT, NVIDIA will deliver the free webinar “Accelerate Edge AI Development With NVIDIA Metropolis Microservices For Jetson” in partnership with the Edge AI and Vision Alliance. Building vision AI applications for the edge often involves long and costly development cycles. At the same time, quickly developing edge AI applications that are cloud-native, flexible, and secure has never been more important. Now, a powerful yet simple API-driven edge AI development workflow is available with the new NVIDIA Metropolis microservices, a suite of customizable building blocks for developing vision AI applications and solutions.

The latest release introduces an expanded set of APIs and microservices on the NVIDIA Jetson platform to further accelerate the development and deployment of vision AI applications at the edge. These new Jetson microservices empower developers to modernize their AI application stack, streamline processes, and future-proof their applications. You can now easily incorporate the latest generative AI advancements through APIs and microservices for video storage and management, prebuilt AI perception pipelines, tracking algorithms, system monitoring, IoT services for secure edge-to-cloud connectivity, and more. In this webinar, presented by NVIDIA senior product manager Chintan Shah, you’ll:

  • Hear what’s included in the collection and how it will fast-track your vision AI development,
  • Learn how to connect the microservices to build your vision AI applications,
  • Understand how to customize your applications with your own microservices,
  • Learn to securely connect and manage your Jetson applications from the cloud, and
  • Experience example applications and the workflows behind them.

A question-and-answer session will follow the presentation. For more information and to register, please see the event page. Also see NVIDIA’s recent announcement of its Metropolis Microservices for Jetson, along with an accompanying technical blog post.
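To give a flavor of what an API-driven microservices workflow can look like, here is a minimal Python sketch that registers a camera stream and polls for detections over REST. All endpoint paths, the port, and the JSON fields below are hypothetical placeholders for illustration only; consult NVIDIA’s Metropolis Microservices for Jetson documentation for the actual interfaces.

```python
# Illustrative only: the routes, port, and JSON fields below are hypothetical
# placeholders, not NVIDIA's documented API.
import requests

JETSON_HOST = "http://jetson.local:8080"  # hypothetical gateway address

def add_rtsp_stream(stream_id: str, rtsp_url: str) -> dict:
    """Register a camera stream with a (hypothetical) video-management service."""
    resp = requests.post(
        f"{JETSON_HOST}/api/streams",  # placeholder route
        json={"id": stream_id, "url": rtsp_url},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

def get_detections(stream_id: str) -> dict:
    """Poll a (hypothetical) perception microservice for its latest detections."""
    resp = requests.get(f"{JETSON_HOST}/api/detections/{stream_id}", timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    add_rtsp_stream("cam0", "rtsp://192.168.1.10/stream1")
    print(get_detections("cam0"))
```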

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEPTH SENSING DESIGN AND TRENDS

Optimizing Image Quality and Stereo Depth at the Edge
John Deere uses machine learning and computer vision (including stereo vision) for challenging outdoor applications such as obstacle detection, vision-based guidance and weed management, among many others. The quality of the images the company’s systems obtain, and the accuracy of the depth information produced by its stereo cameras, significantly impact the performance of the overall solutions. In this talk, Travis Davis, Delivery Manager in the Automation and Autonomy Core, and Tarik Loukili, Technical Lead for Automation and Autonomy Applications, both of John Deere, share some of the challenges John Deere has faced in developing image quality improvement and stereo vision algorithms. Many techniques found in academic research and prior work cannot be easily implemented in real-time applications at the edge, and are difficult to scale across applications with varying performance and cost requirements. They highlight some of the alternative techniques their company has developed to provide optimized image quality and stereo vision implementations that meet the requirements of John Deere’s product range. Stated another way, they share how they do more with less.
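For readers new to stereo depth, the following is a minimal sketch of the classic block-matching pipeline using OpenCV, a common baseline for edge deployments. It illustrates the general technique only, not John Deere’s algorithms; the image file names, focal length, and baseline are placeholder values.

```python
# Minimal stereo-depth sketch with OpenCV block matching (illustrative only).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

# StereoBM is fast enough for many edge devices; numDisparities must be a
# multiple of 16 and blockSize an odd number.
matcher = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity: Z = f * B / d, with focal length f (pixels) and
# baseline B (meters). The values below are placeholders.
f_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]
```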

3D Sensing: Market and Industry Update
While the adoption of mobile 3D sensing has slowed in Android phones, the market has still been growing fast, thanks to Apple. Apple continues to adopt 3D cameras in iPhones, both front- and rear-facing. Along the way, Apple has updated Face ID and simplified and shrunk its 3D camera optical structures. Meanwhile, because Android phone OEMs have mostly chosen not to incorporate 3D cameras, sensor suppliers and integrators have had to work hard to open up other consumer markets. Beyond consumer markets, the use of 3D sensing has been blossoming in the industrial market and the nascent automotive market, where 3D sensing is increasingly used for advanced driver assistance systems and driver monitoring systems. In this talk, Florian Domengie, Senior Technology and Market Analyst at Yole Intelligence (part of the Yole Group), provides an overview of the main application, market, industry and technology trends of the 3D sensing industry.

OBJECT TRACKING TECHNIQUES

Using a Collaborative Network of Distributed Cameras for Object Tracking
Using multiple fixed cameras to track objects requires careful solution design. To scale the number of cameras, the solution must avoid sending all images across the network. When camera views overlap only slightly or not at all, input from multiple cameras must be combined to extend tracking coverage; when an object is visible to multiple cameras, that redundancy should be exploited to increase accuracy. Multiple cameras can collaborate more effectively if they share a common coordinate system, so environment mapping and accurate calibration are necessary. Moreover, the tracking algorithm must scale gracefully with the number of tracked objects, which can be achieved with a distributed approach. In this talk, Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, covers practical ways of addressing these issues, presents his company’s multiple-camera tracking solution used for vehicle and pedestrian tracking, and shares some of its results.
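To make the shared-coordinate-system idea concrete, here is a minimal sketch that maps per-camera detections onto a common ground plane using calibrated homographies, then fuses nearby observations. The homography matrices, pixel coordinates, and gating threshold are invented placeholders; this is a generic textbook approach, not Invision AI’s implementation.

```python
# Sketch: fusing detections from multiple fixed cameras on a shared ground plane.
import numpy as np
import cv2

# Hypothetical 3x3 homographies mapping image pixels to ground-plane meters,
# as produced by per-camera calibration.
H_cam = {
    "cam_a": np.array([[0.02, 0.0, -5.0], [0.0, 0.03, -2.0], [0.0, 0.0, 1.0]]),
    "cam_b": np.array([[0.025, 0.0, 4.0], [0.0, 0.028, -1.5], [0.0, 0.0, 1.0]]),
}

def to_ground(cam: str, pixel_xy: tuple) -> np.ndarray:
    """Map one image point (e.g., a bounding-box foot point) to ground coordinates."""
    pt = np.array([[pixel_xy]], dtype=np.float32)          # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H_cam[cam])[0, 0]  # (x, y) in meters

# Detections of the same pedestrian seen by two overlapping cameras can be
# merged once both are expressed in the shared frame.
p_a = to_ground("cam_a", (412.0, 588.0))
p_b = to_ground("cam_b", (103.0, 610.0))
if np.linalg.norm(p_a - p_b) < 1.0:  # simple gating threshold (meters)
    fused = (p_a + p_b) / 2.0        # averaging increases positional accuracy
```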

Multiple Object Tracking Systems
Multiple object tracking (MOT) is an essential capability in many computer vision systems, including applications in fields such as traffic control, self-driving vehicles, sports and more. In this session, Javier Berneche, Senior Machine Learning Engineer at Tryolabs, walks through the construction of a typical MOT algorithm step by step. At each step, he identifies key challenges and explores design choices (for example, detection-based vs. detection-free approaches and online vs. offline tracking). Berneche discusses available off-the-shelf MOT algorithms and open-source libraries. He also identifies areas where current MOT algorithms fall short. And he introduces metrics and benchmarks commonly used to evaluate MOT solutions.
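As a taste of the “detection-based, online” corner of that design space, below is a minimal greedy IoU tracker in Python, in the spirit of simple IOU/SORT-style baselines. It is a teaching sketch only; production MOT systems add motion models, re-identification features, and track lifecycle management.

```python
# Minimal detection-based, online MOT sketch: greedy IoU association.
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class GreedyIoUTracker:
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track_id -> last matched box
        self.next_id = 0

    def update(self, detections):
        """Assign each detection to the best-overlapping existing track."""
        assigned = {}
        unmatched = set(self.tracks)
        for det in detections:
            best_id, best_iou = None, self.iou_threshold
            for tid in unmatched:
                score = iou(det, self.tracks[tid])
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:                  # no match: start a new track
                best_id, self.next_id = self.next_id, self.next_id + 1
            else:
                unmatched.discard(best_id)
            self.tracks[best_id] = det
            assigned[best_id] = det
        for tid in unmatched:                    # drop tracks with no detection
            del self.tracks[tid]
        return assigned
```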

UPCOMING INDUSTRY EVENTS

Accelerate Edge AI Development With NVIDIA Metropolis Microservices For Jetson – NVIDIA Webinar: March 5, 2024, 8:00 am PT

Build vs Buy: Navigating Optical Image Sensor Module Complexities – FRAMOS Webinar: March 7, 2024, 9:00 am PT

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

e-con Systems Launches a 20 Mpixel Multi-camera for NVIDIA Jetson Orin

Vision Components Introduces Its First MIPI Camera with Sony’s IMX900 Image Sensor

Intel Drives ‘AI Everywhere’ into the Automotive Market with Acquisition, Product Plans

VeriSilicon Unveils New VC9800 Video Processor Unit (VPU) IP

Ambarella Introduces the N1 System-on-Chip Series for On-premise AI Applications

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Outsight SHIFT LiDAR Software (Best Edge AI Software or Algorithm)
Outsight’s SHIFT LiDAR Software is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Software and Algorithms category. The SHIFT LiDAR Software is a real-time 3D LiDAR pre-processor that enables application developers and integrators to easily utilize LiDAR data from any supplier and for any use case outside of automotive (e.g., smart infrastructure, robotics and industrial). Outsight’s SHIFT LiDAR Software is the industry’s first 3D data pre-processor, providing the essential functions required to integrate any LiDAR into any project (SLAM, object detection and tracking, segmentation and classification, etc.). One of the software’s greatest advantages is that it produces an “explainable” real-time stream of data that is low-level enough to directly feed ML algorithms or be fused with other sensors, yet smart enough to decrease network and central processing requirements, thereby enabling a new range of LiDAR applications. Outsight believes that accelerating the adoption of LiDAR technology with easy-to-use and scalable software will meaningfully contribute to creating transformative solutions and products that make for a smarter and safer world.
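For context on what LiDAR pre-processing typically involves, here is a minimal Python sketch of two common steps, RANSAC ground removal followed by DBSCAN clustering, using the open-source Open3D library. It illustrates the general category of processing that precedes object detection on point clouds, not Outsight’s SHIFT implementation; the input file name and thresholds are placeholders.

```python
# Generic LiDAR pre-processing sketch: ground removal + clustering (illustrative).
import open3d as o3d
import numpy as np

pcd = o3d.io.read_point_cloud("scan.pcd")  # one LiDAR sweep (placeholder file)

# Fit the dominant plane (the ground) with RANSAC and split it off.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.15,
                                         ransac_n=3, num_iterations=200)
ground = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)

# Cluster the remaining points into object candidates (DBSCAN).
labels = np.array(objects.cluster_dbscan(eps=0.6, min_points=10))
print(f"{labels.max() + 1} object candidates above the ground plane")
```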

Please see here for more information on Outsight’s SHIFT LiDAR Software. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411