Combining CNNs and Conventional Algorithms for Low-Compute Vision: A Case Study in the Garage
Chamberlain Group (CGI) is a global leader in access control solutions with its Chamberlain and LiftMaster garage door opener brands and myQ connected technology. In this presentation from Nathan Kopp, the company’s Principal Software Architect for Video Systems, you’ll learn how CGI is innovating to bring efficient, affordable computer vision into the garage, opening new possibilities and insights for homeowners and businesses. With constant improvements in neural network architectures and advancements in low-power edge processors, it is tempting to assume that convolutional neural networks (CNNs) will solve every vision problem. However, simpler “conventional” computer vision techniques continue to offer an attractive cost-to-performance ratio and require orders of magnitude less training data. Unfortunately, these algorithms often need hand-tuning of parameters, and do not generalize well to previously unseen environments. By combining CNNs with simpler algorithms into a layered, intelligent vision pipeline—and by understanding the constraints of the problem—the weaknesses of simpler algorithms can be offset by the strengths of CNNs, while still preserving the simpler algorithms’ cost-saving benefits.
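To make the layered-pipeline idea concrete, here is a minimal sketch (not CGI's actual system): a cheap conventional stage, frame differencing with a fixed threshold, decides whether anything is happening, and only then invokes an expensive CNN stage. The thresholds and the `classify_with_cnn` stub are illustrative placeholders.

```python
import numpy as np

# Hypothetical tuning parameters for the conventional stage.
MOTION_THRESHOLD = 25    # per-pixel intensity change counted as "motion"
MOTION_FRACTION = 0.01   # fraction of changed pixels that triggers the CNN

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Conventional CV stage: absolute frame difference against a threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.mean(diff > MOTION_THRESHOLD)
    return bool(changed > MOTION_FRACTION)

def classify_with_cnn(frame: np.ndarray) -> str:
    """Placeholder for the expensive CNN stage (a real model would go here)."""
    return "vehicle" if frame.mean() > 128 else "empty"

def process_stream(frames):
    """Layered pipeline: run the CNN only when the cheap stage fires."""
    labels = []
    prev = frames[0]
    for frame in frames[1:]:
        if motion_detected(prev, frame):
            labels.append(classify_with_cnn(frame))
        else:
            labels.append(None)  # skip the CNN entirely, saving compute
        prev = frame
    return labels
```

The design point is that the conventional stage's weaknesses (hand-tuned thresholds, poor generalization) matter less when it only acts as a gate, while the CNN's cost is paid only on the small fraction of frames with activity.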
Feeding the World Through Embedded Vision
Although it’s not widely known outside of the industry, computer vision is beginning to be used at scale in agriculture, where it is delivering meaningful improvements in efficiency and quality, with the potential for tremendous impact on how our food is grown. In this presentation, Travis Davis, Delivery Manager for the Automation Delivery team with the Intelligent Solutions Group at John Deere, introduces deployed agricultural computer vision solutions for harvesting and spraying. He explores key technical challenges that John Deere had to overcome to create these solutions, and highlights the ways in which agricultural vision applications often have requirements that are quite different from those of automotive and commercial applications.
A Practical Guide to Implementing Deep Neural Network Inferencing at the Edge
In this presentation, Toly Kotlarsky, Distinguished Member of the Technical Staff in R&D at Zebra Technologies, explores practical aspects of implementing a pre-trained deep neural network (DNN) for inference on typical edge processors. First, he briefly touches on how to evaluate the accuracy of DNNs for use in real-world applications. Next, he explains the process for converting a trained model in TensorFlow into formats suitable for deployment at the edge and examines a simple, generic C++ real-time inference application that can be deployed on a variety of hardware platforms. Kotlarsky then outlines a method for evaluating the performance of edge DNN implementations and shows the results of utilizing this method to benchmark the performance of three popular edge computing platforms: the Google Coral (based on the Edge TPU), NVIDIA’s Jetson Nano, and the Raspberry Pi 3.
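The talk's exact benchmarking method is not detailed here, but a minimal harness along these lines illustrates the general approach: warm up the runtime first (to let caches, JIT compilation, and clock scaling settle), then time repeated inference calls and report summary latencies. `infer` stands in for a single inference pass on whatever runtime is under test.

```python
import time
import statistics

def benchmark(infer, warmup: int = 5, runs: int = 50) -> dict:
    """Time `infer()` (a no-argument callable wrapping one inference pass).

    Returns mean and 95th-percentile latency in milliseconds.
    """
    for _ in range(warmup):
        infer()  # warm-up passes are discarded
    latencies_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        infer()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    latencies_ms.sort()
    return {
        "mean_ms": statistics.mean(latencies_ms),
        "p95_ms": latencies_ms[int(0.95 * (len(latencies_ms) - 1))],
    }
```

Reporting a percentile alongside the mean matters on edge devices, where thermal throttling and background activity can make tail latency much worse than the average.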
An Introduction to Simultaneous Localization and Mapping (SLAM)
This talk from Gareth Cross, former Technical Lead for State Estimation at Skydio, provides an introduction to the fundamentals of simultaneous localization and mapping (SLAM). Cross provides foundational knowledge; viewers are not expected to have any prerequisite experience in the field. The talk consists of an introduction to the concept of SLAM, as well as practical design considerations in formulating SLAM problems. Visual-inertial odometry is introduced as a motivating example of SLAM, and Cross reviews how the problem is structured and solved.
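The structure of a SLAM problem can be previewed with a toy 1-D example (this is an illustration, not material from the talk): the unknowns are robot positions, the constraints are noisy relative odometry measurements plus a loop-closure measurement, and stacking the constraints yields a least-squares problem. All measurement values below are made up for illustration.

```python
import numpy as np

# Unknowns: robot positions x0, x1, x2 along a line.
# Constraints (hypothetical numbers): two odometry measurements and one
# loop closure that conflicts slightly with the accumulated odometry.
odom = [1.1, 0.9]   # measured motions x1 - x0 and x2 - x1
loop = 2.2          # loop-closure measurement of x2 - x0 (odometry sums to 2.0)

# One row per constraint; the first row anchors x0 = 0 as a prior.
A = np.array([
    [1.0,  0.0, 0.0],   # prior:        x0      = 0
    [-1.0, 1.0, 0.0],   # odometry:     x1 - x0 = 1.1
    [0.0, -1.0, 1.0],   # odometry:     x2 - x1 = 0.9
    [-1.0, 0.0, 1.0],   # loop closure: x2 - x0 = 2.2
])
b = np.array([0.0, odom[0], odom[1], loop])

# Least-squares solution spreads the disagreement across all constraints.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # the loop closure pulls x2 above the odometry-only estimate of 2.0
```

Real SLAM systems solve a nonlinear, much larger version of this (poses in 3D, landmark positions, IMU factors), but the core pattern, jointly optimizing all states against all measurements, is the same.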
EyeTech Digital Systems EyeOn (Best Consumer Edge AI End Product)
EyeTech Digital Systems’ EyeOn is the 2021 Edge AI and Vision Product of the Year Award Winner in the Consumer Edge AI End Products category. EyeOn combines next-generation eye-tracking technology with the power of a portable, lightweight tablet, making it the fastest, most accurate device for augmentative and alternative communication. With hands-free screen control through built-in predictive eye-tracking, EyeOn gives a voice to speech-impaired and non-verbal patients with conditions such as cerebral palsy, autism, ALS, muscular dystrophy, stroke, traumatic brain injuries, spinal cord injuries, and Rett syndrome. EyeOn empowers users to communicate, control their environments, search the web, work, and learn independently – all hands-free, using the power of their eyes.
Please see here for more information on EyeTech Digital Systems’ EyeOn. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.