On Tuesday, December 13 at 9 am PT, Deci will deliver the free webinar “How to Successfully Deploy Deep Learning Models on Edge Devices” in partnership with the Edge AI and Vision Alliance. The introduction of powerful processors, memories and other devices, along with robust connectivity to the cloud, has enabled a new era of advanced AI applications capable of running on the edge. But system resources remain finite: cost, size and weight, power consumption and heat, and other constraints make it challenging to deploy accurate, resource-efficient deep learning models at the edge. How can you build a deep learning model that is compact enough to run on an edge device while making the most of the available hardware?
This technical session is packed with practical tips and tricks on topics ranging from model selection and training tools to running successful inference at the edge. Yonatan Geifman, the company’s Co-founder and CEO, will demonstrate how to benchmark and compare different models, leverage model training best practices, and easily automate compilation and quantization processes, all while using the latest open-source libraries and tools. At the conclusion of the webinar, you will have gained practical knowledge on how to eliminate guesswork and significantly improve your edge devices’ performance, boosting runtime speed while maintaining accuracy for AI-based applications. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
Editor-in-Chief, Edge AI and Vision Alliance
MAXIMIZING SOURCE IMAGE QUALITY
12+ Image Quality Attributes that Impact Computer Vision
In this presentation, Max Henkart, Optics Consultant and Owner of Commonlands LLC, introduces key image quality metrics. He discusses how they impact the performance of geometric and CNN-based methods, and provides key performance indicators (KPIs) and real-world examples to answer the following questions:
Exposure: How does improper exposure impact your computer vision algorithms?
Dynamic Range: What is the difference between dynamic range and exposure? What is the difference between HDR and WDR?
Motion Blur: When does my camera hardware create motion artifacts?
Resolution and Texture: How do the four types of resolution differ and degrade performance?
Color and Shading: When does color influence computer vision performance?
Noise: What are the different types of noise in my system and how do they impact performance?
Image Artifacts: How do stray light, blemishes and fringing lead to edge cases?
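The attributes above are not from the presentation's own materials, but the effect of one of them, motion blur, on downstream vision algorithms can be sketched in a few lines. The minimal example below (my own illustration, not Henkart's) models motion blur as a box filter on a 1-D intensity profile and shows how it weakens the sharp gradients that edge-based computer vision methods depend on:

```python
# Illustrative sketch (not from the talk): simulate how motion blur
# weakens the gradients that edge-based vision algorithms rely on.

def box_blur(signal, width):
    """Apply a simple box filter -- a crude model of linear motion blur."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def max_gradient(signal):
    """Peak absolute neighbor-to-neighbor difference: a proxy for edge strength."""
    return max(abs(b - a) for a, b in zip(signal, signal[1:]))

# A sharp step edge: dark (0.1) to bright (0.9).
edge = [0.1] * 10 + [0.9] * 10

sharp = max_gradient(edge)               # full 0.8 step across one pixel
blurred = max_gradient(box_blur(edge, 5))  # step smeared over ~5 pixels

print(f"sharp edge gradient:   {sharp:.3f}")
print(f"blurred edge gradient: {blurred:.3f}")
```

With a 5-pixel blur the peak gradient drops to roughly a fifth of its original value, which is why a detector tuned on sharp imagery can miss edges entirely once camera or subject motion enters the picture.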
Optimizing Camera Image Quality to Maximize Computer Vision Results
Applications of computer vision have broadly expanded thanks to deep learning, which achieves much better results than classical techniques. This is evident in our cell phone apps; video security, IoT and smart city solutions; and in cars and autonomous vehicles. Safety-critical applications especially need robust accuracy. Unfortunately, as seen in recent AAA reports, widely reported failures of Tesla automatic braking, and reports from system developers, there are significant, disheartening gaps in the effectiveness of the latest systems when deployed in diverse real-world conditions. In this talk, Dave Tokic, Vice President of Marketing and Strategic Partnerships at Algolux, presents proven breakthrough approaches that address the limitations of current camera design and ISP tuning methodologies and result in significantly improved computer vision performance. He illustrates the effectiveness of these techniques with examples from real-world road scenarios using current automotive vision system architectures, and he introduces new vision system architectures that provide even more robust detection and depth perception.
SOFTWARE TOOLSET DEVELOPMENTS
Tools for Creating Next-gen Computer Vision Applications
Qualcomm’s Snapdragon Mobile Platform powers leading smartphones, XR headsets, PCs, wearables, cars and IoT products. Thanks to Snapdragon, these products feature powerful computer vision technologies that you can tap into to build next-gen apps. Inside Snapdragon is a hardware engine dedicated to computer vision: the Engine for Visual Analytics (EVA). EVA hardware acceleration gives developers access to high-performance, low-power computer vision functions to enhance apps that rely on advanced camera or video processing. The EVA includes a motion processing unit, a feature descriptor unit, a depth estimation unit, a geometric correction unit and an object detection unit. These blocks power high-level functions such as electronic image stabilization, multi-frame HDR, face detection and real-time bokeh. In this presentation, Judd Heape, Vice President of Product Management for Camera, Computer Vision and Video Technology at Qualcomm, does a deep dive into EVA’s Software Developer Kit (SDK) and available APIs, such as Optical Flow and Depth from Stereo, and explores how these features can be integrated into your applications.
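The EVA SDK itself is proprietary, so its actual API is not shown here. As a generic stand-in, the sketch below illustrates the block-matching idea that underlies hardware motion-estimation and optical-flow engines like EVA's: search over candidate displacements and keep the one that minimizes a sum-of-absolute-differences cost. Everything here (function names, the 1-D simplification) is my own illustration, not Qualcomm's API:

```python
# Generic sketch of block matching, the core of many motion estimation /
# optical flow engines. This is an illustrative stand-in, not the EVA SDK.

def estimate_shift(ref, cur, max_disp):
    """Find the integer displacement minimizing sum-of-absolute-differences."""
    best_d, best_sad = 0, float("inf")
    for d in range(-max_disp, max_disp + 1):
        sad = 0.0
        for i in range(len(ref)):
            j = i + d
            if 0 <= j < len(cur):
                sad += abs(ref[i] - cur[j])
            else:
                sad += 1.0  # penalize samples that fall outside the frame
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# A 1-D intensity profile and the same profile shifted right by 3 pixels.
frame0 = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
frame1 = [0, 0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0]

print(estimate_shift(frame0, frame1, max_disp=4))  # -> 3
```

A hardware engine performs this search over 2-D blocks for thousands of positions per frame; doing it in dedicated silicon rather than on the CPU is what makes features like electronic image stabilization feasible at low power.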
Pairing Software and Hardware to Enable Edge Machine Learning
Machine learning is not new—the term was first coined in 1952. Its explosive growth over the past decade has not been the result of technical breakthroughs, but instead of available compute power. Similarly, its future potential will be determined by the amount of compute power that can be applied to an ML problem within the constraints of allowable power, area and cost. The key to increasing computation power is properly pairing hardware and software to effectively exploit parallelism. The Flex Logix InferX X1 accelerator is a system designed to fully utilize parallelism by teaming software with parallel hardware that is capable of being reconfigured based on the specific algorithm requirements. In this talk, Randy Allen, Vice President of Software at Flex Logix, explores the hardware architecture of the InferX X1, the associated programming tools, and how the two work together to form a cost-effective and power-efficient machine learning system.
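The parallelism argument above can be made concrete with a toy example (my own sketch, unrelated to Flex Logix's actual toolchain). In a dense neural-network layer, each output neuron's dot product is independent of the others, which is exactly the kind of work that can be farmed out across parallel hardware:

```python
# Generic illustration of data parallelism in an ML workload (not the
# InferX X1 toolchain): each output neuron's dot product is independent,
# so a pool of workers can compute them concurrently.
from concurrent.futures import ThreadPoolExecutor

def dot(w, x):
    """Dot product of a weight row with the input vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def dense_layer(weights, x, workers=4):
    """Compute one fully connected layer, one worker per output neuron."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda w: dot(w, x), weights))

weights = [[1, 0, 0], [0, 2, 0], [0, 0, 3], [1, 1, 1]]
x = [10, 20, 30]
print(dense_layer(weights, x))  # -> [10, 40, 90, 60]
```

A reconfigurable accelerator applies the same principle in silicon: because the per-neuron computations never depend on one another, the hardware can be rearranged to match the shape of each layer and keep every multiply-accumulate unit busy.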
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Luxonis OAK-D-Lite (Best Camera or Sensor)
Luxonis’ OAK-D-Lite is the 2022 Edge AI and Vision Product of the Year Award winner in the Cameras and Sensors category. OAK-D-Lite is Luxonis’ next-generation spatial AI camera. It can run AI and CV on-device and fuse these results with stereo disparity depth perception to provide spatial coordinates of the objects or features it detects. OAK-D-Lite combines the power of the Intel Myriad X Visual Processing Unit with a 4K (13 Mpixel) color camera and 480P stereo depth cameras, and can produce 300k depth points at up to 200 FPS. It has a USB-C connector for power delivery and communication with the host computer, and its 4.5 W max power consumption is ideal for low-power applications. It has a baseline distance of 7.5 cm, so it can perceive depth from 20 cm up to 15 m. OAK-D-Lite is an entry-level device designed to be accessible to anyone, from corporations to students. Its tiny form factor can fit just about anywhere, including in your pocket, and it comes with a sleek front gorilla-glass cover. OAK-D-Lite is offered at an MSRP of $149.
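The relationship between the 7.5 cm baseline and the 20 cm–15 m depth range follows from the standard stereo relation Z = f·B/d (depth equals focal length times baseline divided by pixel disparity). The sketch below illustrates it; the 0.075 m baseline comes from the product description, but the focal length in pixels is a made-up example value, not a Luxonis specification:

```python
# Sketch of the standard stereo depth relation Z = f * B / d used by
# stereo cameras such as the OAK-D-Lite. FOCAL_LENGTH_PX is a
# hypothetical example value, not a published Luxonis spec.

FOCAL_LENGTH_PX = 450.0   # assumed focal length in pixels (illustrative)
BASELINE_M = 0.075        # 7.5 cm stereo baseline, per the product description

def depth_from_disparity(disparity_px):
    """Depth in meters for a given pixel disparity between the stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

for d in (168.75, 16.875, 2.25):
    print(f"disparity {d:7.3f} px -> depth {depth_from_disparity(d):6.2f} m")
```

Nearby objects produce large disparities and distant objects tiny ones, so the usable range is bounded on one end by the maximum disparity the matcher searches and on the other by the sub-pixel precision it can resolve.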
Please see here for more information on Luxonis’ OAK-D-Lite. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.
Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.