
Edge AI and Vision Insights: October 25, 2023 Edition

LETTER FROM THE EDITOR
Dear Colleague,

I’m excited to announce the inaugural AI Innovation Awards, brought to you by the Edge AI and Vision Alliance. The awards celebrate groundbreaking end products powered by edge AI and vision technologies. If you know of an end product introduced in the last year that fits the bill, nominate it for a chance to gain industry recognition. It’s easy and free! Find out more here!


The 2024 Embedded Vision Summit Call for Presentation Proposals is open! I invite you to share your expertise. Our team is curating what will be more than 100 expert sessions and we’d love to see your proposal. From case studies on integrating perceptual AI into products to tutorials on the latest tools and algorithms, send in your session idea today. And if you’re not sure about your topic, check out the topics list to see what’s trending for 2024.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

NEURAL NETWORK ADVANCEMENTS

How Transformers Are Changing the Nature of Deep Learning Models (Synopsys)
The neural network models used in embedded real-time applications are evolving quickly. Transformer networks are a deep learning approach that has become dominant for natural language processing and other applications involving time-dependent, sequential data. Now, transformer-based deep learning network architectures are also being applied to vision applications, with state-of-the-art results compared to CNN-based solutions. In this presentation, Tom Michiels, System Architect for ARC Processors at Synopsys, introduces transformers and contrasts them with the CNNs commonly used for vision tasks today. He examines the key features of transformer model architectures and shows performance comparisons between transformers and CNNs. Michiels concludes with insights on why his company believes transformers will become increasingly important for visual perception tasks.
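For readers new to the architecture, the sketch below shows the operation at the heart of a vision transformer: the image is cut into patch "tokens," and self-attention mixes information globally across all patches in a single step, in contrast to a convolution's fixed local receptive field. This is a minimal NumPy illustration, not code from the talk; the patch size, dimensions and random weights are all invented for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy 32x32 grayscale image, split into sixteen 8x8 patches ("tokens").
img = np.random.rand(32, 32).astype(np.float32)
patches = img.reshape(4, 8, 4, 8).transpose(0, 2, 1, 3).reshape(16, 64)

d = 64  # token dimension (raw pixels per patch, for simplicity)
# Random projections stand in for learned weights.
Wq, Wk, Wv = (np.random.randn(d, d).astype(np.float32) * 0.02 for _ in range(3))

Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))  # 16x16: every patch attends to every other patch
out = attn @ V                        # globally mixed patch features
print(out.shape)  # (16, 64)
```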

Making Generative Adversarial Networks Better (Perceive)
Generative adversarial networks, or GANs, are widely used to create amazing "fake" images and realistic, synthetic training data. And yet, despite their name, mainstream GANs generate only the examples that are easiest to find, rather than the promised adversaries, which would be more diverse, more challenging and more useful. In this talk, Steve Teig, CEO of Perceive, carefully re-examines and rethinks the strategy underlying GANs, arriving at a novel refinement that offers much better enrichment of datasets and a wider variety of generated images.
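To see where the "easiest examples" bias comes from, consider the standard GAN objective: the generator is rewarded only for fooling the discriminator, with no term encouraging diversity or difficulty. The sketch below is a vanilla GAN training loop in PyTorch on toy data; it is not Perceive's refinement, and all sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator for 2-D data; all sizes are arbitrary.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0  # stand-in "real" data distribution
    fake = G(torch.randn(64, 8))     # generator maps noise to samples

    # Discriminator: push real toward 1, generated toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: rewarded only for fooling the discriminator. Nothing here
    # encourages diverse or "hard" samples, which is the bias the talk addresses.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```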

IMAGE ENHANCEMENT TECHNIQUES

Optimized Image Processing for Image Sensors with Novel Color Filter Arrays (Nextchip)
Traditionally, image sensors have been optimized to produce images that look natural to humans. For images consumed by algorithms, what matters is capturing the most information. We can achieve this via higher resolution, but higher resolution means lower sensitivity. To increase resolution and maintain high sensitivity, color information can be sacrificed—but in automotive applications, color is critical. In response, suppliers offer image sensors that capture color information using novel color filter arrays (CFAs). Instead of the traditional RGGB array, these sensors use patterns like red-clear-clear-green (RCCG). These approaches yield good results for perception algorithms, but what about cases where images are used by both algorithms and humans? Can we reconstruct a natural-looking image from an image sensor using a non-standard CFA? In this talk, Young-Jun Yoo, Vice President of the Automotive Business and Operations Unit at Nextchip, explores novel CFAs and introduces Nextchip’s vision processor, which supports reconstruction of natural-looking images from image sensors with novel CFAs, including RGB-IR sensors.
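To make the CFA trade-off concrete, here is a toy sketch that samples an RGB image through an RGGB pattern versus an RCCG pattern. The "clear" (C) pixels pass broadband light (crudely modeled here as the mean of the three channels), which is why they boost sensitivity but leave fewer chroma samples for reconstructing a natural-looking image. This illustrates the general idea only, not Nextchip's processing.

```python
import numpy as np

def mosaic(rgb, pattern):
    """Sample an HxWx3 image through a 2x2 color filter array.
    'R'/'G'/'B' pixels keep one channel; 'C' (clear) pixels pass
    broadband luminance, modeled here as the mean of R, G and B."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    for i, f in enumerate(pattern):
        dy, dx = divmod(i, 2)                  # position within the 2x2 cell
        view = rgb[dy::2, dx::2]
        raw[dy::2, dx::2] = view.mean(-1) if f == "C" else view[..., "RGB".index(f)]
    return raw

rgb = np.random.rand(480, 640, 3)
bayer = mosaic(rgb, "RGGB")  # classic Bayer: full color sampling
rccg  = mosaic(rgb, "RCCG")  # two clear pixels per cell: more light, less color
```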

Adding Real-time AI Functionality to Image Signal Processing with Reduced Memory Footprint and Processing Latency (VeriSilicon)
The AI-ISP IP product from VeriSilicon adds AI functionality to image signal processing (ISP) while reducing memory footprint and processing latency. Unlike conventional approaches, which require the entire image frame to be stored in memory before processing, the AI-ISP performs AI-based processing on image data in real time, using far less memory. The result is improved image quality with reduced system overhead, making the product well suited to a variety of applications, including surveillance, automotive and industrial imaging. In this presentation, Mankit Lo, Chief Architect for NPU IP Development at VeriSilicon, provides an overview of the AI-ISP product and its key features and discusses its benefits and potential use cases. He also shares his company's experience integrating AI-ISP into customers' products and highlights results obtained from real-world applications.
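The general principle behind avoiding full-frame buffering can be pictured with a rolling line buffer: a sliding-window operator consumes rows as the sensor delivers them, so memory scales with the window height rather than the frame. The Python sketch below illustrates that idea only; it does not represent VeriSilicon's actual architecture, and the mean filter stands in for a real ISP/AI kernel.

```python
import numpy as np

def process_streaming(read_line, height, ksize=3):
    """Apply a ksize-row sliding-window operator as rows arrive,
    holding only ksize lines in memory instead of a full frame."""
    buf = [read_line() for _ in range(ksize)]  # small rolling line buffer
    out = []
    for y in range(height - ksize + 1):
        window = np.stack(buf)                 # ksize x width
        out.append(window.mean(axis=0))        # stand-in for a real ISP/AI kernel
        if y + ksize < height:                 # slide: drop oldest row, fetch next
            buf.pop(0)
            buf.append(read_line())
    return np.stack(out)  # memory held: O(ksize * width), not O(height * width)

rows = iter(np.random.rand(1080, 1920).astype(np.float32))
result = process_streaming(lambda: next(rows), height=1080)
print(result.shape)  # (1078, 1920)
```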

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

e-con Systems’ New See3CAM_CU512M Monochrome USB and See3CAM_50CUG Color 5 Mpixel Cameras Target Medical and Life Science Applications

AMD Accelerates Innovation at the Edge with the Kria K24 SOM and Starter Kit for Industrial and Commercial Applications and is Acquiring Open-source AI Software Company Nod.ai

BrainChip is Making its Second-generation Akida Platform Available to Advance State-of-the-Art Edge AI Solutions and Partnering with VVDN Technologies on a Commercial Edge Box Based on Neuromorphic Technology

Axelera AI Has Begun Sampling Its Metis Edge AI Processing Platform

Allegro DVT’s Newly Released Full Range of LCEVC Products Fosters Adoption of the MPEG-5 LCEVC Video Codec

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Piera Systems Canāree Air Quality Monitor (Best Enterprise Edge AI End Product)
Piera Systems’ Canāree Air Quality Monitor is the 2023 Edge AI and Vision Product of the Year Award winner in the Enterprise Edge AI End Products category. The Canāree family of air quality monitors (AQMs) is compact, highly accurate and easy to use. Owing largely to the quality of the data they produce, Canāree AQMs can apply AI/ML techniques to classify, and in some cases identify, specific pollutants; detecting when someone is vaping in a school bathroom or a hotel room is a good example of this technology in action. It is the only low-cost AQM in the world with such a capability. Canāree AQMs measure a range of environmental factors, including particles, temperature, pressure, humidity and VOCs. While many similar products exist in the market, Canāree is set apart by its highly accurate particle sensor, which measures particles ranging from 10 microns all the way down to 100 nanometers, a unique capability in this industry. This particle data is distributed into seven size “bins,” which form the foundation of the product’s classification capabilities.
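As a rough picture of how size-binned particle counts can drive classification, the sketch below treats the seven bin counts as a feature vector and fits an off-the-shelf classifier to invented data. Nothing here reflects Piera Systems' actual pipeline; the bin edges, class labels and count distributions are made up purely to illustrate the approach.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features: particle counts in seven size bins spanning ~100 nm to 10 um.
# The class labels and Poisson count profiles below are entirely invented.
rng = np.random.default_rng(0)
vape    = rng.poisson([800, 400, 120, 40, 10, 3, 1], size=(50, 7))
cooking = rng.poisson([200, 300, 250, 150, 80, 30, 10], size=(50, 7))
X = np.vstack([vape, cooking])
y = ["vaping"] * 50 + ["cooking"] * 50

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[750, 380, 100, 35, 8, 2, 1]]))  # -> ['vaping']
```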

Please see here for more information on Piera Systems’ Canāree Air Quality Monitor. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.
