You spoke and we listened: the 2021 Embedded Vision Summit, the premier conference for innovators adding computer vision and visual AI to products, is now a four-day event, taking place online May 25-28! Expanding the program enables us to offer 80+ highly relevant, top-quality sessions, the type of content that’s been earning the Summit 96%+ approval ratings from attendees for 10 years. Other recent program enhancements include:
Offering both live online and on-demand-only sessions during the event, opening up lots of flexibility to fit your schedule, and
Scaling up demos and including them as part of the main agenda, so you don’t have to choose between seeing live sessions and demos.
Editor-In-Chief, Edge AI and Vision Alliance
DEEP LEARNING MEDICAL OPPORTUNITIES
AI-based Face Mask Detection and Analytics
In this video, BDTI and its partners, Tryolabs S.A. and Jabil Optics, demonstrate MaskCam, an open-source smart camera prototype reference design based on the NVIDIA Jetson Nano, capable of estimating the number and percentage of people wearing face masks in its field of view. MaskCam was developed as part of an independent, hands-on evaluation of the Jetson Nano for building real-world edge AI/vision applications. You can read the detailed report at https://bdti.com/maskcam and MaskCam’s source code is available under the MIT License at https://github.com/bdtinc/maskcam. If you have a Jetson Nano Developer Kit and a USB web camera, you can get the MaskCam software running on your system with two simple commands described in the README.
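The core statistic MaskCam reports — the number and percentage of people wearing masks — can be sketched as a simple aggregation over per-frame detector outputs. The `Detection` type, class names, and confidence threshold below are illustrative assumptions, not MaskCam’s actual API:

```python
# Hypothetical sketch of turning per-frame detections into the mask-wearing
# statistics MaskCam reports. The Detection type and the "mask"/"no_mask"
# labels are assumptions for illustration, not MaskCam's real interface.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # assumed class names: "mask" or "no_mask"
    confidence: float

def mask_stats(detections, min_conf=0.5):
    """Count detected people and compute the fraction wearing masks."""
    kept = [d for d in detections if d.confidence >= min_conf]
    total = len(kept)
    masked = sum(1 for d in kept if d.label == "mask")
    pct = 100.0 * masked / total if total else 0.0
    return total, masked, pct

frame = [Detection("mask", 0.9), Detection("no_mask", 0.8),
         Detection("mask", 0.4)]   # low-confidence detection is dropped
print(mask_stats(frame))  # (2, 1, 50.0)
```

In a real deployment these per-frame numbers would be smoothed over time before being reported, since detections flicker from frame to frame.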
Enabling Embedded AI for Healthcare
Wearable electronics have started to become part of our daily lives, in the form of watches, wristbands, fitness trackers and the like. Advances in sensor design and in AI processing have made these little helpers more capable. The next level could elevate these devices from casual consumer conveniences to true medical monitoring and support, and could include devices that we not only wear but also implant in our bodies. This presentation from Shang-Hung Lin, Vice President of Machine Learning and Neural Processor Product Development at VeriSilicon, discusses the challenges that must be overcome to get to this next stage, and considers approaches and technical solutions, drawing on VeriSilicon’s experience designing chips for a wide range of cost- and power-constrained applications.
DEVELOPMENT AND DEPLOYMENT TOOLSETS
Deploying Deep Learning Applications on FPGAs with MATLAB
Designing deep learning networks for embedded devices is challenging because of processing and memory resource constraints. FPGAs present an even greater challenge due to the complexity of programming in Verilog or VHDL, and the hardware expertise needed for prototyping on an FPGA. This talk from Jack Erickson, Principal Product Marketing Manager at MathWorks, illustrates a workflow that facilitates the design and deployment of these applications to FPGAs using pre-built bitstreams, without the need for much hardware expertise. Starting with a model trained either in MATLAB or in a framework of your choice, Erickson demonstrates the workflow to prototype and deploy the trained network from MATLAB to an FPGA. He illustrates this flow using a deep learning network for image recognition, deploying it to the Xilinx MPSoC board for inference using APIs from MATLAB. This demonstrates how deep learning algorithm engineers can quickly explore different networks and their performance on an FPGA from MATLAB.
Parallelizing Machine Learning Applications with Kubernetes
In this talk, Rajy Rawther, PMTS Software Architect in the Machine Learning Software Engineering group at AMD, presents techniques for obtaining the best inference performance when deploying machine learning applications. With the increasing use of AI in applications ranging from image classification/object detection to natural language processing, it is vital to deploy AI applications in ways that are scalable and efficient. Much work has focused on how to distribute DNN training for parallel execution using machine learning frameworks (TensorFlow, MXNet, PyTorch and others). There has been less work on scaling and deploying trained models on multi-processor systems. Rawther presents a case study analysis of scaling an image classification application using multiple Kubernetes pods. She explores the factors and bottlenecks affecting performance and examines techniques for building a scalable application pipeline.
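The scaling pattern in the talk — fanning inference requests out across multiple replicas — can be sketched in miniature with a worker pool, where the pool size plays the role of the Kubernetes pod replica count. This is an illustrative analogy, not material from the talk; `classify` is a stand-in for a real model call:

```python
# Illustrative sketch (not from the talk): spreading inference requests over
# a pool of workers, analogous to load-balancing across Kubernetes pods.
# classify() is a placeholder for a real DNN inference call; the pool size
# stands in for the pod replica count.
from concurrent.futures import ThreadPoolExecutor

def classify(image_id):
    # Placeholder "model": returns a label per image id.
    return image_id, "cat" if image_id % 2 == 0 else "dog"

def run_batch(image_ids, replicas=4):
    # Requests are distributed across the workers, mirroring how a
    # Kubernetes Service load-balances across pod replicas.
    with ThreadPoolExecutor(max_workers=replicas) as pool:
        return dict(pool.map(classify, image_ids))

results = run_batch(range(8))
print(results[0], results[1])  # cat dog
```

In practice, as the talk notes, throughput does not scale linearly with replicas: shared bottlenecks such as preprocessing, I/O, and batching policy dominate well before the compute does.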
Morpho Semantic Filtering (Best AI Software or Algorithm)
Morpho’s Semantic Filtering is the 2020 Vision Product of the Year Award Winner in the AI Software and Algorithms category. Semantic Filtering improves camera image quality by combining the best of AI-based segmentation and pixel processing filters. In conventional imaging, computational photography algorithms are typically applied to the entire image, which can sometimes cause unwanted side effects such as loss of detail and texture, as well as the appearance of noise in certain areas. Morpho’s Semantic Filtering is trained to identify the meaning of each pixel of the object of interest, allowing the application of the right algorithm, at the most effective strength, for each category, to achieve the best image quality for still-image capture.
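The general idea — a per-pixel class map selecting which filter to apply — can be sketched as follows. This is a hedged toy illustration of the technique, not Morpho’s implementation: the class names, filters, and tiny grayscale "image" are all invented for clarity:

```python
# Toy sketch of per-pixel semantic filtering (not Morpho's implementation):
# a segmentation class map chooses which filter each pixel receives, e.g.
# smoothing for sky vs. sharpening for text. Class names are invented.
def denoise(v):  return round(v * 0.9)    # toy "smoothing"
def sharpen(v):  return min(255, v + 20)  # toy "sharpening"
def identity(v): return v

FILTERS = {"sky": denoise, "text": sharpen, "other": identity}

def semantic_filter(pixels, classes):
    """Apply the class-appropriate filter to each grayscale pixel."""
    return [FILTERS[c](p) for p, c in zip(pixels, classes)]

out = semantic_filter([100, 200, 50], ["sky", "text", "other"])
print(out)  # [90, 220, 50]
```

A production pipeline would of course operate on full-resolution images with soft class probabilities and blended filter strengths rather than a hard per-pixel switch.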
Please see here for more information on Morpho and its Semantic Filtering. The Edge AI and Vision Product of the Year Awards (an expansion of previous years’ Vision Product of the Year Awards) celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes your leadership in edge AI and computer vision as evaluated by independent industry experts. The Edge AI and Vision Alliance is now accepting applications for the 2021 Awards competition. The submission deadline has been extended to this Friday, March 26; for more information and to enter, please see the program page.
EMBEDDED VISION SUMMIT MEDIA PARTNER SHOWCASE
Everyone wants safety on the road. Can advancements in sensing and decision-making technologies help drivers, passengers and vulnerable road users? Advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs) are still works in progress that rely on constantly evolving technologies. The newly published 152-page book “Sensors in Automotive”, with contributions from leading thinkers of the automotive industry, chronicles the industry’s progress, identifies the remaining challenges, and examines with an unbiased eye what it will take to overcome them.
Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.