The Alliance is now accepting applications for the sixth annual Vision Tank start-up competition. Are you an early-stage start-up company developing a new product or service incorporating or enabling computer vision or visual AI? Do you want to raise awareness of your company and products with vision industry experts, investors and developers? The Vision Tank start-up competition offers early-stage companies a chance to present their new products to a panel of judges at the 2021 Embedded Vision Summit, in front of a live online audience.
Two awards are given out each year: the Judges’ Award and the Audience Choice Award. The winner of the Vision Tank Judges’ Award will receive a $5,000 cash prize, and both winners will receive a one-year membership in the Edge AI and Vision Alliance. All finalists will also get one-on-one advice from the judges, as well as valuable introductions to potential investors, customers, employees and suppliers. Applications are due by February 17; for more information, and to enter, please see the program page.
The 100% virtual 2021 Embedded Vision Summit, the premier conference for innovators adding computer vision and visual AI to products, is coming May 25-27—and we’re excited about the program that’s taking shape! At the Summit, you’ll be able to:
See an amazing range of technology in action—we’re talking dozens upon dozens of leading-edge building-block technologies as well as applications enabled with computer vision, edge AI, and sensor data
Watch expert sessions on the most pressing topics in the industry from some of the brightest minds currently working with edge AI and vision
Connect with those VEPs—you know, Very Elusive People—like that potential building-block technology supplier, critical ecosystem partner, or technical expert you’ve been looking for
Keep building your skills with hands-on learning, tutorials and more!
Share your expertise on practical computer vision and visual AI and be recognized as an authority on the subject by your peers
Increase your company’s visibility and reputation
Build your network and connect with new suppliers, customers and partners
We have extended the deadline for session proposals to February 19. Space is limited, so submit your proposal now before the agenda fills up. Visit the Summit website to learn more about the requirements, and to submit your proposal.
Editor-In-Chief, Edge AI and Vision Alliance
DEEP LEARNING INFERENCE AT THE EDGE
How 5G is Pushing Processing to the Edge
Worldwide 5G deployment has begun and promises ultra-high data rates with ultra-low latency, enabling real-time and interactive applications unlike anything previously supported by a cellular network. However, the 5G air interface itself is only a small piece of the picture. Every device within the network, from a user device to a cloud application server, must play its own role in contributing to the overall performance of any 5G-based solution. This presentation from Dan Picker, Chief Technology Officer at Inseego, explores how 5G is accelerating the push for edge-based processing and how placing the right computing resources at the right points in the network can dramatically improve the performance of a solution, while creating new revenue opportunities for equipment and service providers throughout the ecosystem.
Edge Inferencing Scalability with Intel Vision Accelerator Design Cards
Are you trying to deploy AI solutions at the edge, but running into scalability challenges that make it difficult to meet your performance, power and price targets without creating multiple complex designs? Demand for AI at the edge is growing, but delivering on the potential of edge AI isn’t simple. There’s no one-size-fits-all approach. Even within a single application, there are often diverse use cases and environments, resulting in widely varying cost, performance, power and form factor requirements. In this talk, Rama Karamsetty, Global Marketing Manager at Intel, examines real customer use cases illustrating the challenges of designing scalable and flexible edge AI solutions. He also showcases the broad range of new Intel-based vision accelerator cards offered by Intel’s ecosystem partners, and illustrates how these cards meet the cost, performance, power and scalability needs of a wide range of applications.
PROCESSING ON PROGRAMMABLE LOGIC
Machine-Learning-Based Perception on a Tiny, Low-Power FPGA
In this tutorial, Hoon Choi, Fellow at Lattice Semiconductor, presents a set of machine-learning-based perception solutions that his company implemented on a tiny (5.4 mm² package), low-power FPGA. These solutions include hand gesture classification, human detection and counting, local face identification, location feature extraction, front-facing human detection and shoulder surfing detection, among others. Choi describes Lattice’s compact processing engine structure that fits into fewer than 5K FPGA look-up tables, yet can support networks of various sizes. He also describes how Lattice selected networks and the optimizations the company used to make them suitable for low-power and low-cost edge applications. Last but not least, he describes how Lattice leverages the on-the-fly self-reconfiguration capability of FPGAs to enable running a sequence of processing engines and neural networks in a single FPGA.
Vitis and Vitis AI: Application Acceleration from Cloud to Edge
Xilinx SoCs and FPGAs provide significant advantages in throughput, latency, and energy efficiency for production deployments of compute-intensive applications when compared to CPUs and GPUs. Over the last decade, FPGAs have evolved into highly configurable devices that provide on-chip heterogeneous multi-core CPUs, domain-specific programmable accelerators and “any-to-any” interface connectivity. Today, the Xilinx Vitis Unified Software Platform supports high-level programming in C, C++, OpenCL, and Python, enabling developers to build and seamlessly deploy applications on Xilinx platforms including Alveo cards, FPGA instances in the cloud, and embedded devices. Moreover, Vitis enables the acceleration of large-scale data processing and machine learning applications using familiar high-level frameworks, such as TensorFlow and Spark. This presentation from Vinod Kathail, Fellow and Chief Architect at Xilinx, provides an overview of the Vitis platform and the accelerated Vitis Vision Library, which enables customizable functions such as image signal processing, adaptable AI inference, 3D reconstruction and motion analysis.
Morpho Semantic Filtering (Best AI Software or Algorithm)
Morpho’s Semantic Filtering is the 2020 Vision Product of the Year Award Winner in the AI Software and Algorithms category. Semantic Filtering improves camera image quality by combining the best of AI-based segmentation and pixel processing filters. In conventional imaging, computational photography algorithms are typically applied to the entire image, which can sometimes cause unwanted side effects such as loss of detail and textures, as well as the appearance of noise in certain areas. Morpho’s Semantic Filtering is trained to identify the meaning of each pixel in the object of interest, allowing the most effective algorithm and strength level to be applied to each category to achieve the best image quality for still-image capture.
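The general idea of segmentation-guided filtering can be sketched in a few lines of NumPy. This is an illustrative toy, not Morpho’s implementation: the segmentation mask, the simple box blur standing in for a real pixel-processing filter, and the per-class strength table are all assumptions made for the example.

```python
import numpy as np

def semantic_filter(image, seg_mask, strengths):
    """Toy sketch of segmentation-guided filtering (not Morpho's algorithm).

    image:     H x W float array (grayscale for simplicity)
    seg_mask:  H x W int array of per-pixel semantic class labels
    strengths: dict mapping class label -> filter strength in [0, 1]
    """
    # A 3x3 box blur stands in for a real denoising/smoothing filter.
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0

    # Blend original and filtered pixels by the strength assigned to each
    # pixel's semantic class, e.g. strong smoothing on a "sky" class but
    # little or none on a "text" class to preserve fine detail.
    alpha = np.zeros_like(image)
    for label, s in strengths.items():
        alpha[seg_mask == label] = s
    return (1.0 - alpha) * image + alpha * blurred

# Hypothetical usage: class 0 ("text") is left untouched,
# class 1 ("sky") is fully smoothed.
img = np.random.rand(8, 8)
mask = np.zeros((8, 8), dtype=int)
mask[:, 4:] = 1
out = semantic_filter(img, mask, {0: 0.0, 1: 1.0})
```

A production pipeline would replace the box blur with per-category computational photography filters and feather the mask boundaries, but the per-pixel blend by class is the core mechanic the description above refers to.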
Please see here for more information on Morpho and its Semantic Filtering. The Edge AI and Vision Product of the Year Awards (an expansion of previous years’ Vision Product of the Year Awards) celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes your leadership in edge AI and computer vision as evaluated by independent industry experts. The Edge AI and Vision Alliance is now accepting applications for the 2021 Awards competition; for more information and to enter, please see the program page.
Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.