|DEEP LEARNING FOUNDATIONS AND OPTIMIZATIONS
Fundamentals of Training AI Models for Computer Vision and Video Analytics Applications
AI has become an important component of computer vision and video analytics applications. But creating AI-based solutions is a challenging process. To build a successful product, it is essential that training a deep neural network results in a model that is highly accurate, robust to false positives and delivers high throughput. When approaching a new computer vision and video analytics task, an AI engineer needs to make a number of design decisions. How do we formulate a deep learning problem? How much data is enough? How complex should the model be for a particular task? How should training parameters be set? In this two-part talk, Ekaterina Sirazitdinova, Data Scientist at NVIDIA, shares best practices an AI developer can follow to answer these and other important questions when developing a new AI system, in order to get meaningful results faster.
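As a small illustration of one standard practice behind questions like "how should training parameters be set?" (a hypothetical sketch, not taken from the talk), the number of training epochs is often chosen adaptively by monitoring a held-out validation set and stopping when it stops improving. Pure-NumPy logistic regression keeps the example self-contained; all names and hyperparameters here are illustrative.

```python
import numpy as np

# Synthetic binary-classification data (stand-in for a real dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.5 * rng.normal(size=400) > 0).astype(float)

# Hold out 25% of the data for validation.
X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]

def loss(w, X, y):
    """Mean cross-entropy of a logistic model with weights w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w = np.zeros(5)
lr, patience, best, wait = 0.1, 10, np.inf, 0
for epoch in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w)))
    w -= lr * X_tr.T @ (p - y_tr) / len(y_tr)   # gradient step on training split
    val = loss(w, X_va, y_va)
    if val < best - 1e-5:       # validation loss still improving
        best, wait = val, 0
    else:
        wait += 1
        if wait >= patience:    # early stopping: no improvement for `patience` epochs
            break
```

The same pattern (train on one split, decide stopping and hyperparameters on another) scales from this toy model to deep networks.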
A Practical Guide to Getting the DNN Accuracy You Need and the Performance You Deserve
Every day, developers struggle to take DNN workloads that were originally developed on workstations and migrate them to run on edge devices. Whether the application is in mobile, compute, IoT, XR or automotive, most AI developers start their algorithm development in the cloud or on a workstation and later migrate to on-device as an afterthought. Qualcomm is helping these developers on multiple fronts—democratizing AI at the edge by supporting frameworks and data types that developers are most familiar with, and at the same time building a set of tools to assist sophisticated developers who are taking extra steps to extract the best performance and power efficiency. In this video, Felix Baum, Director of Product Management at Qualcomm, presents the workflow and steps for effectively migrating DNN workloads to the edge. He discusses quantization issues, explores how model accuracy affects performance and power, and outlines the Qualcomm tools that help developers successfully launch new use cases on mobile and other edge devices.
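To give a flavor of the quantization issues mentioned above, here is a minimal sketch of affine (scale, zero-point) uint8 quantization, the basic scheme behind most int8 edge deployments. This is an illustrative example, not Qualcomm tooling; the function names are assumptions.

```python
import numpy as np

def quantize_uint8(x):
    """Affine (scale, zero-point) quantization of a float32 tensor to uint8."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0      # guard against constant tensors
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from the quantized tensor."""
    return (q.astype(np.float32) - zero_point) * scale

# Quantize a fake weight tensor and measure the round-trip error.
weights = np.random.default_rng(0).normal(size=256).astype(np.float32)
q, scale, zp = quantize_uint8(weights)
max_err = np.abs(weights - dequantize(q, scale, zp)).max()
# Per-value error stays within roughly one quantization step (the scale),
# but accumulated across many layers it can measurably shift model accuracy—
# which is why migration workflows evaluate accuracy after quantization.
```

Real toolchains add per-channel scales, calibration data and quantization-aware training on top of this basic idea.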
|ACCELERATING PRODUCT DEVELOPMENT
Introducing the Kria Robotics Starter Kit: Robotics and Machine Vision for Smart Factories
A robot is a system of systems with diverse sensors and embedded processing nodes focused on core capabilities such as motion, navigation, perception, machine vision, communication and control — alongside more unique and application-specific requirements. With the new Kria KR260 Robotics Starter Kit and the Kria Robotics Stack (KRS), users can easily build a complete robotics system using a ROS 2-based environment with low-latency, deterministic communications connecting production-ready Kria SOMs. The resultant adaptive system can readily implement evolving and diverse algorithms as well as scale across multiple projects. This presentation from Chetan Khona, Director of Industrial, Vision, Healthcare and Sciences Markets at AMD, highlights the capabilities and solutions possible with the Kria KR260 Robotics Starter Kit for roboticists, machine vision developers and industrial solution architects.
Jumpstart Your Edge AI Vision Application with New Development Kits
Choosing the right processing solution for your embedded vision application can make or break your next development effort. This presentation from Monica Houston, Technical Solutions Manager at Avnet, introduces three next-generation embedded vision platforms from Avnet that enable camera-based AI at the edge, featuring the latest edge AI advances in processors from NXP, Renesas and Xilinx. Houston discusses the strengths and distinctive features of each solution, highlighting the applications for which each is best suited. She also explores the new family of production-ready camera modules offered with these kits and provides guidance on selecting the appropriate camera features for your embedded application.
|UPCOMING INDUSTRY EVENTS
Putting Activations on a Diet – Or Why Watching Your Weights Is Not Enough – Perceive Webinar: November 10, 2022, 9:00 am PT
NVIDIA Jetson Orin Nano (with Vision Platform Compatibility from Basler) Sets New Standard for Entry-Level Edge AI and Robotics With 80x Performance Leap
Teledyne Announces Next-generation 5GigE Area Scan Camera Platform
IDS Adds Numerous New USB3 Cameras to Its Product Range
e-con Systems Launches GigE Low Light HDR Camera
Flex Logix Unveils First AI-Integrated Mini-ITX System to Simplify Edge and Embedded AI Deployment
|EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Sequitur Labs EmSPARK Security Suite 3.0 (Best Edge AI Software or Algorithm)
Sequitur Labs’ EmSPARK Security Suite 3.0 is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Software and Algorithms category. The EmSPARK Security Suite is a software solution that makes it easy for IoT and edge device vendors to develop, manufacture, and maintain secure and trustworthy products. By implementing the EmSPARK Security Suite, enabled by industry-leading processors, device OEMs can isolate and protect security credentials to prevent device compromise; protect critical IP, including device-resident software; prevent supply chain compromises with secure software provisioning and updates; and accelerate time-to-market while reducing implementation cost and overall security risk. The EmSPARK Security Suite is the industry’s first solution to provide a suite of tools for protecting AI models at the edge. With the release of EmSPARK 3.0, developers can safely deploy AI models on IoT devices, opening the door for a new era of edge computing.
Please see here for more information on Sequitur Labs’ EmSPARK Security Suite 3.0. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.