
Edge AI and Vision Insights: September 13, 2023 Edition

LETTER FROM THE EDITOR
Dear Colleague,

2023 Computer Vision and Perceptual AI Developer Survey

You probably won’t be surprised that 84% of vision-based product developers are using DNNs. But did you know that more than 80% are also using non-neural-network vision or imaging algorithms? Those are just two findings from last year’s Computer Vision and Perceptual AI Developer Survey.

The Edge AI and Vision Alliance is again conducting our annual survey to see what developers think about processors, tools and algorithms for computer vision and perceptual AI. I’d appreciate you taking a few minutes to participate. When we’re done, you’ll have the opportunity to see more results like these! Click here to take the survey now.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

ENHANCING IMAGE CAPTURE

Optimized Image Processing for Automotive Image Sensors with Novel Color Filter Arrays (Nextchip)
Traditionally, image sensors have been optimized to produce images that look natural to humans. For images consumed by algorithms, what matters is capturing the most information. We can achieve this via higher resolution, but higher resolution means lower sensitivity. To increase resolution and maintain high sensitivity, color information can be sacrificed—but in automotive applications, color is critical. In response, suppliers offer image sensors that capture color information using novel color filter arrays (CFAs). Instead of the traditional RGGB array, these sensors use patterns like red-clear-clear-green (RCCG). These approaches yield good results for perception algorithms, but what about cases where images are used by both algorithms and humans? Can we reconstruct a natural-looking image from an image sensor using a non-standard CFA? In this talk, Young-Jun Yoo, Vice President of the Automotive Business and Operations Unit at Nextchip, explores novel CFAs and introduces Nextchip’s vision processor, which supports reconstruction of natural-looking images from image sensors with novel CFAs, including RGB-IR sensors.
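As a purely illustrative sketch of the CFA ideas described above (this is not Nextchip’s reconstruction method, and the `rccg_mosaic` and `fill_luma` helpers are invented for this example), the snippet below samples a scene through a 2×2 RCCG pattern and recovers a full-resolution luminance estimate. Because clear pixels pass roughly all visible light, every red and green site in an RCCG mosaic is surrounded by four clear neighbors that can be interpolated:

```python
import numpy as np

# Illustrative sketch only (not Nextchip's algorithm): sample a scene
# through a 2x2 red-clear-clear-green (RCCG) filter array, then
# recover full-resolution luminance. Clear pixels approximate
# luminance directly; each interior R or G site has four clear
# 4-neighbors to interpolate from (edge-replicated at borders).

def rccg_mosaic(red, clear, green):
    """Per-2x2-tile pattern:  R C
                              C G"""
    mosaic = np.empty_like(red)
    mosaic[0::2, 0::2] = red[0::2, 0::2]    # R sites
    mosaic[0::2, 1::2] = clear[0::2, 1::2]  # C sites
    mosaic[1::2, 0::2] = clear[1::2, 0::2]  # C sites
    mosaic[1::2, 1::2] = green[1::2, 1::2]  # G sites
    return mosaic

def fill_luma(mosaic):
    """Keep clear sites as luminance; replace R/G sites with the
    mean of their four (clear) 4-neighbors."""
    luma = mosaic.astype(np.float64)
    pad = np.pad(luma, 1, mode="edge")
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
             + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    luma[0::2, 0::2] = neigh[0::2, 0::2]  # R sites
    luma[1::2, 1::2] = neigh[1::2, 1::2]  # G sites
    return luma
```

Recovering full color from the sparse R and G samples (the harder half of the problem, and the subject of the talk) requires chroma interpolation on top of this luminance estimate.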

Can AI Solve the Low Light and HDR Challenge? (Visionary.ai)
The phrase “garbage in, garbage out” applies to machine and human vision alike. Improving the quality of image data at the source by removing noise has far-reaching impacts, such as improved accuracy later in the pipeline, particularly under challenging low-light or high-dynamic-range conditions. In this talk, Oren Debbi, CEO and Co-founder of Visionary.ai, presents his company’s new, AI-based approach for removing image noise and preserving image quality in real time, and shares the latest results demonstrating how it performs with commonly used image sensors. He also shows that Visionary.ai is able to achieve these results with very low compute resource requirements, making its approach suitable for battery-powered devices at the edge.
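For context on why denoising at the source is hard, a classical baseline is temporal averaging: averaging N frames of zero-mean sensor noise cuts the noise standard deviation by roughly √N, but smears any motion, which is one reason learned real-time denoisers are attractive in low light. The snippet below is a generic illustration of that baseline, not Visionary.ai’s method:

```python
import numpy as np

# Generic baseline, not Visionary.ai's approach: averaging N noisy
# captures of a static scene reduces zero-mean noise by ~sqrt(N),
# at the cost of motion blur in dynamic scenes.

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)                       # static test scene
frames = clean + rng.normal(0.0, 0.1, (16, 32, 32))  # 16 noisy captures

single_frame_err = np.abs(frames[0] - clean).mean()
averaged_err = np.abs(frames.mean(axis=0) - clean).mean()
assert averaged_err < single_frame_err  # averaging suppresses the noise
```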

ACCELERATING VISION DEVELOPMENT

Building Large-scale Distributed Computer Vision Solutions Without Starting from Scratch (Network Optix)
Video is hard. Network Optix makes it really easy. Video has the potential to become a valuable source of operational data for business, especially with the help of AI. In this talk, Darren Odom, Director of Platform Business Development at Network Optix, shows how to practically build a video-enabled cloud or hybrid edge/cloud SaaS product for multi-site, globally distributed enterprises. Nx allows you to focus on doing what you do best, and provides all of the cloud-enabled video tools you need to get your vision to scale. Odom focuses on how cloud software, AI and edge hardware companies can build solutions and quickly get to market with the Nx Platform, making use of its open-source clients, rich examples, cloud rules engine, metadata interface, SDKs and APIs—without reinventing the wheel.

How to Select, Train, Optimize and Deploy Edge Vision AI Models in Three Days (Nota AI)
NetsPresso is a development pipeline that enables developers to build, optimize and deploy vision AI models faster and better. Using conventional tools, it typically takes 6-12 weeks to select, train, optimize and deploy a vision DNN. In this presentation, Steven Kim, Co-CEO of Nota America, explains how developers can use NetsPresso to create and deploy high-performance models at the edge in three days. NetsPresso uses neural architecture search to quickly find the best model for your application and hardware, and then trains the model in a hardware-aware manner to optimize accuracy and latency for your processor. Then NetsPresso applies model compression and acceleration to make your model small and fast without sacrificing accuracy. Finally, NetsPresso generates executable code and packages it in a form that can easily be integrated into your application. While developers focus on optimizing model accuracy and latency, NetsPresso minimizes the time and money required to build these models. So far, AI models developed through NetsPresso are commercially deployed on more than 45,000 devices.
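The compression step in a pipeline like the one described above can be illustrated generically with magnitude pruning, which zeroes the smallest weights so the model can be stored and run more cheaply. This is not NetsPresso’s API; `magnitude_prune` is a hypothetical helper sketching the idea:

```python
import numpy as np

# Generic illustration of one model-compression technique (magnitude
# pruning); NOT NetsPresso's API. Zeroing the smallest-magnitude
# weights shrinks the model with little accuracy loss, after which
# the pruned model is typically fine-tuned to recover accuracy.

def magnitude_prune(weights, sparsity):
    """Zero (at least) the fraction `sparsity` of weights with the
    smallest absolute values. Ties at the threshold may zero a few
    extra weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

For example, pruning `[0.1, -0.2, 0.3, -0.4]` at 50% sparsity zeroes the two smallest-magnitude weights, leaving `[0.0, 0.0, 0.3, -0.4]`.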

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Outsight SHIFT LiDAR Software (Best Edge AI Software or Algorithm)
Outsight’s SHIFT LiDAR Software is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Software and Algorithms category. The SHIFT LiDAR Software is a real-time 3D LiDAR pre-processor that enables application developers and integrators to easily utilize LiDAR data from any supplier and for any use case outside of automotive (e.g., smart infrastructure, robotics, and industrial). Outsight’s SHIFT LiDAR Software is the industry’s first 3D data pre-processor, performing all of the essential functions required to integrate any LiDAR into any project (SLAM, object detection and tracking, segmentation and classification, etc.). One of the software’s greatest advantages is that it produces an “explainable” real-time stream of data that is low-level enough to either directly feed ML algorithms or be fused with other sensors, yet smart enough to reduce network and central processing requirements, thereby enabling a new range of LiDAR applications. Outsight believes that accelerating the adoption of LiDAR technology with easy-to-use and scalable software will meaningfully contribute to creating transformative solutions and products that make a smarter and safer world.

Please see here for more information on Outsight’s SHIFT LiDAR Software. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

 

