The next Embedded Vision Summit will take place as a live event May 17-19, 2022 in Santa Clara, California. The Embedded Vision Summit is the key event for system and application developers who are incorporating computer vision and visual AI into products. It attracts a unique audience of over 1,400 product creators, entrepreneurs and business decision-makers who are creating and using computer vision and visual AI technologies. It's an ideal venue for learning, sharing insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in computer vision and visual AI.
We're delighted to return to an in-person format and hope you'll join us. Once again we'll be offering a packed program with 100+ sessions, 50+ technology exhibits, 100+ demos and a new Edge AI Deep Dive Day, all covering the technical and business aspects of practical computer vision, deep learning, visual AI and related technologies. Registration is now open, and if you register by December 31, you can save 35% by using the code SUMMIT22NL-35. Register now, save the date and tell a friend! You won't want to miss what is shaping up to be our best Summit yet!
The Alliance will be taking a winter holiday break from newsletter publication next week. Until next time, on behalf of the Alliance, I wish you joy, health, and happiness for the holiday season, however you celebrate it, and for the New Year. Happy Holidays!
Editor-In-Chief, Edge AI and Vision Alliance
OPTIMIZATIONS FOR CONSTRAINED RESOURCES
TinyML Isn’t Thinking Big Enough
Today, says Perceive CEO Steve Teig in this presentation, TinyML focuses primarily on shoehorning neural networks onto microcontrollers or small CPUs, but it misses the opportunity to transform all of ML because of two unfortunate assumptions: first, that tiny models must make significant performance and accuracy compromises to fit inside edge devices, and second, that tiny models should run on CPUs or microcontrollers. Regarding the first assumption, information-theoretic considerations suggest that principled compression (vs., say, just replacing 32-bit weights with 8-bit weights) should make models more accurate, not less. As for the second, CPUs are saddled with an intrinsically power-inefficient memory model and mostly serial computation, whereas the evident parallelism of neural networks naturally leads to high-performance, power-efficient, massively parallel inference hardware. By upending these assumptions, TinyML can revolutionize all of ML, not just ML on microcontrollers.
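For readers unfamiliar with the "just replace 32-bit weights with 8-bit weights" approach the talk contrasts with principled compression, here is a minimal, hypothetical sketch of naive uniform post-training quantization (the function names and sample weights are illustrative, not from the presentation):

```python
# Naive uniform quantization: snap each float weight onto a 256-level grid
# spanning [min, max], then dequantize. The information loss is bounded by
# half a grid step -- a purely mechanical bit reduction, with no
# information-theoretic reasoning about which weights matter.

def quantize_uniform(weights, bits=8):
    """Map floats to integer codes on a uniform grid, then back to floats."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1            # 255 steps for 8 bits
    scale = (hi - lo) / levels
    if scale == 0.0:
        scale = 1.0                     # all weights identical: any scale works
    codes = [round((w - lo) / scale) for w in weights]   # ints in 0..255
    return [lo + c * scale for c in codes]               # dequantized floats

weights = [-0.73, -0.12, 0.0, 0.05, 0.41, 0.98]
recon = quantize_uniform(weights)
worst = max(abs(w - r) for w, r in zip(weights, recon))
# Worst-case round-off is at most half a quantization step.
assert worst <= ((max(weights) - min(weights)) / 255) / 2 + 1e-12
```

Teig's point is that compression done with this kind of blunt instrument trades accuracy for size, whereas compression grounded in information theory can act as a regularizer and improve accuracy.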
Super Resolution on Resource-constrained Devices
Internet video streaming has recently experienced tremendous growth, but delivery quality remains critically dependent on network bandwidth. To mitigate bandwidth limitations, most video is compressed, resulting in image artifacts, noise, and blur. Quality is also degraded by image upscaling, which is required to match the very high pixel density of mobile devices. Scientists have developed many upscaling techniques, such as Lanczos resampling, but for over 20 years, no fundamentally new methods were introduced. This situation is changing now thanks to a new class of techniques known as deep learning super-resolution (DLSR). Despite their excellent performance, DLSR methods cannot be easily applied to real-world applications due to their heavy computational requirements. In this talk, Marcus Edel, Machine Learning Engineer, and Aaron Boxer, Senior Software Developer, both of Collabora, present their accurate and lightweight network for video super-resolution.
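As background for the Lanczos resampling mentioned above, here is a small, self-contained sketch of the classic technique in one dimension (a hypothetical illustration, not the presenters' code): each output sample is a windowed-sinc-weighted average of nearby input samples.

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos windowed sinc: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_upscale_1d(samples, factor, a=3):
    """Resample a 1-D signal to `factor` times as many points."""
    out = []
    for i in range(len(samples) * factor):
        x = i / factor                              # position in input coords
        acc = norm = 0.0
        for j in range(math.floor(x) - a + 1, math.floor(x) + a + 1):
            if 0 <= j < len(samples):               # clamp at the borders
                w = lanczos_kernel(x - j, a)
                acc += samples[j] * w
                norm += w
        out.append(acc / norm if norm else 0.0)     # normalize the weights
    return out

# A constant signal is reproduced exactly, since the weights are normalized.
assert all(abs(v - 1.0) < 1e-9 for v in lanczos_upscale_1d([1.0] * 8, 2))
```

DLSR methods replace this fixed, hand-designed kernel with learned filters, which is where both their quality gains and their heavy computational cost come from.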
VISUAL AI HEALTHCARE OPPORTUNITIES
Improving Nursing Care with Privacy-Sensitive Edge Computer Vision
Around the world, there is a serious and growing shortage of nurses. Nursing care at night is a particular challenge because night shifts are less attractive to nurses and since patients’ needed rest can be disturbed by in-person monitoring. Computer-vision-based activity detection provides the ability to reliably monitor patients and alert nurses when assistance is needed. But creating and deploying a solution requires overcoming several significant obstacles. For example, a single overhead camera with a fisheye lens capable of viewing an entire room delivers very distorted images. In addition, privacy is a critical concern. And care facilities often have minimal IT infrastructure and staff. In this talk, Dr. Harro Stokman, Chief Executive Officer and Founder of Kepler Vision Technologies, explains how his company’s Kepler Night Nurse product has overcome these challenges and achieved registration as a medical device.
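To give a feel for why a ceiling-mounted fisheye view is so distorted, here is a hedged sketch of the underlying lens geometry (function names and the focal-length value are illustrative assumptions; real dewarping pipelines typically use calibrated models such as OpenCV's fisheye module):

```python
import math

# An equidistant fisheye lens maps a ray at angle theta from the optical
# axis to image radius r = f * theta; an ordinary rectilinear (perspective)
# camera maps it to r = f * tan(theta). Dewarping stretches fisheye pixels
# back onto the rectilinear grid.

def fisheye_radius(theta, f):
    return f * theta                 # equidistant fisheye projection

def rectilinear_radius(theta, f):
    return f * math.tan(theta)       # pinhole / perspective projection

def dewarp_radius(r_fisheye, f):
    """Radius where a fisheye pixel lands after rectilinear correction."""
    theta = r_fisheye / f
    return f * math.tan(theta)

f = 100.0  # focal length in pixels (illustrative value)
# Near the optical axis the two models agree almost exactly...
assert abs(dewarp_radius(1.0, f) - 1.0) < 1e-3
# ...but toward the edge of the field, fisheye pixels must be stretched
# far outward (tan(1.4 rad) is about 5.8, so 140 px maps past 570 px).
assert dewarp_radius(140.0, f) > 140.0
```

This growing stretch toward the image edge is why people near a room's walls appear heavily warped, and why activity-detection models must either be trained on distorted imagery or fed dewarped views.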
Streamlining Development of Edge AI Solutions: A Healthcare Use Case
The need for IoT edge applications has never been greater. But complex requirements and unique connectivity, security, and latency challenges impede progress and extend time to market for software developers. In addition, many solution developers face the challenge of scaling their solutions when edge AI infrastructure differs from customer to customer. In this tutorial, Vaghesh Patel, Senior Software Developer, Adam Bishop, Software Product Manager, and Chen Su, Product Marketing Engineer, all of Intel, explain how the Intel Edge Software Hub helps accelerate the development of edge computing solutions and lowers barriers to creating reliable, scalable applications. This one-stop resource makes it easy for developers to quickly find, prototype, and integrate the edge computing software they need, including use-case-specific customizable reference software implementations. The presenters use an example telepathology application to illustrate how to leverage Edge Software Hub to overcome complex networking challenges and optimize AI model deployment and management (in this case within a hospital system), making use of Intel technologies including OpenNESS and OpenVINO Model Server.
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Simbe Robotics Tally 3.0 (Best Enterprise Edge AI End Product)
Simbe Robotics’ Tally 3.0 is the 2021 Edge AI and Vision Product of the Year Award Winner in the Enterprise Edge AI End Products category. Tally is the only robot on the market that combines computer vision, machine learning, and RFID technologies to audit store shelves across a range of retail environments. Tally detects misplaced, mispriced, and out-of-stock items, arming retailers with stronger insights into shelf availability, and ensuring that items are more quickly restocked and corrected, improving the customer experience. Tally 3.0 combines both edge and cloud computing, enabling it to transfer some of its AI and machine learning workloads to the edge. This hybrid system better optimizes data collection and processing, getting insights to store teams more quickly. By operating both on the edge and in the cloud, Tally can more quickly use deep learning to help with tasks like autofocus and barcode decoding, ensuring stores have the most up-to-date data.
Please see here for more information on Simbe Robotics’ Tally 3.0. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.
Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.