Dear Colleague,

2022 Vision Tank

Do you work at an early-stage start-up company developing a new product or service incorporating or enabling computer vision or visual AI? Do you want to raise awareness of your company and products with potential customers, investors and partners? Enter the seventh annual Vision Tank Start-Up Competition, offering start-up companies recognition and rewards, including the opportunity to present their new products or product ideas to more than 1,000 influencers and product creators at the Embedded Vision Summit, the key event for system and application developers who are incorporating computer vision and visual AI into products. Best of all, entry is free! For more information on the Vision Tank Start-Up Competition and to enter, please see the program page. Applications are due by March 1.

Registration for the Embedded Vision Summit, taking place May 17-19 in Santa Clara, California, is also now open, and if you register by March 11, you can save 25% by using the code SUMMIT22-NL.

On Thursday, March 17 at 9 am PT, Network Optix will deliver the free webinar “A Platform Approach to Developing Intelligent Video Products” in partnership with the Edge AI and Vision Alliance. IP cameras — traditionally used in surveillance applications — have become a mainstay in a variety of applications to visually (and often also audibly) monitor people, facilities, and other objects and environments. When combined with AI-enabled computer vision applications, these diminutive devices — together with USB webcams, wearable cameras, and production monitoring cameras — become data-gathering behemoths, enabling an entire universe of new Google-for-the-real-world applications driven primarily by video. Given the density of data that video cameras can now capture — faces, people, objects, vehicles, and more — companies are recognizing the value of emerging edge and AI technologies that focus on capturing, analyzing, streaming, and delivering high-definition video and its related metadata. But developing full-featured, full-stack solutions that can automatically recognize and manage devices, accommodate an existing tech infrastructure, and scale to any size for any set of customer personas is a difficult task.

Network Optix believes it has developed a platform, Nx Meta, that can alleviate the pain of developing intelligent video products and that is malleable enough to be used for any potential downstream intelligent video-enabled application. This webinar will explore Network Optix’s Nx Meta intelligent video platform and explain how you can rapidly develop a full-stack (cloud, desktop, mobile, and server), AI-enabled, edge-ready, enterprise video product in a matter of weeks. You’ll learn how easy it is to deploy rich new visual intelligence products at global scale by choosing a platform that scales effortlessly with you. Presented by Nathan Wheeler, Chairman and CEO of Network Optix, the webinar will begin with an introduction to the company and some of the ‘Powered by Nx’ products currently in use around the world. Wheeler will also discuss numerous examples of vertically aligned and industry-specific computer vision product use cases. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Getting Started with Vision AI Model Training – NVIDIA
In many modern vision and graphics applications, deep neural networks (DNNs) enable state-of-the-art performance for tasks like image classification, object detection and segmentation, quality enhancement and even new content generation. In this talk, Ekaterina Sirazitdinova, Data Scientist at NVIDIA, demystifies basic concepts behind DNN training: from the definition of a deep neural network to critical parameters controlling deep learning model training. Sirazitdinova examines problems typically encountered when training a model and shares best practices for their mitigation. Finally, she sheds some light on commonly used software frameworks.

Automated Neural Network Model Training: The Impact on Deploying and Scaling ML at the Edge – Arm
Neural networks are being used to solve an ever-increasing number of use cases, elevating the importance of efficient model training and ongoing model maintenance. The value of a machine learning solution is easy to demonstrate in the lab but quickly diminishes when the end-user considers ongoing updates and the challenge of retraining models for changing real-world environments. This presentation from Tim Hartley, Vice President of Product and Marketing at SeeChange Technologies (an Arm company), looks at the benefits of using automated learning technologies with a federated gathering of training data from the outset so that models can self-tune and update, sometimes with zero-touch from the end-user. Often, by considering gathering additional insight at the edge, sufficient clues can be gathered to identify and auto-label training data. Hartley looks at the impact on product rollout, configuration, maintenance and overall product effectiveness. He ends the presentation by looking at techniques for commoditization, bringing the required edge and cloud components together so that future products can more easily benefit from these technologies.


Can You Make Production ML Work Without Dozens of PhDs? – Edge Impulse
Machine learning is making it possible to give machines capabilities that were unthinkable just a few years ago. But the techniques for implementing, deploying and maintaining machine learning algorithms and software in products are radically different from the tried-and-true techniques that system developers have used for decades. Successful adoption of machine learning requires a different way of thinking about algorithms, data and software—and different sets of skills. Some companies are fortunate enough to have dozens of machine learning and data science PhDs to aid their product development efforts, but most development groups don’t include even one such expert. Can product development groups successfully deploy robust ML capabilities at the edge without the help of expert specialists? What are the key obstacles to doing so? What approaches are proving effective in making practical ML accessible to the broad community of system developers? This lively discussion—moderated by Christopher Rommel, Executive Vice President for IoT and Industrial Technology at VDC Research, with panelists Zach Shelby, Co-founder and CEO of Edge Impulse; Jason Lavene, Senior Principal Architect at Keurig Dr Pepper; and Rob Oshana, Vice President of Software R&D for Edge Processing at NXP—provides perspectives from seasoned pros working at the leading edge of ML system development, tools and techniques.

Performance, Ease of Development or Features: What Do Inferencing Software Developers Need Most? – Intel
In this fireside chat, Soren Knudsen, Lead Planner on the OpenVINO team, and Raymond Lo, OpenVINO Edge AI Software Evangelist, both of Intel, discuss what they have learned about what developers need from AI tools. How does the OpenVINO team get feedback from developers? (Do developers even want to give it?) How does the team then prioritize what goes into its tools, given many competing requirements? Watch and learn how Intel is evolving its edge inference tools, and how you can influence the future of the tools you use—so that while you are busy making your customers’ lives better, someone is working on making your life better, too.


Developing Optimized Systems with BrainChip’s Akida Neuromorphic Processor – BrainChip Webinar: February 24, 2022, 9:00 am PT

A Platform Approach to Developing Intelligent Video Products – Network Optix Webinar: March 17, 2022, 9:00 am PT

Embedded Vision Summit: May 17-19, 2022, Santa Clara, California

More Events


Teledyne FLIR launches Conservator Subscription Software to Accelerate AI Development with Thermal Imaging

Real-time Video Enhancement Reaches New Standard with Visionary.ai and Inuitive Partnership

Upcoming Online Training from FRAMOS Explores CMOS Camera Characterization

Lattice Semiconductor FPGAs Power Next-generation Lenovo Edge/AI Experiences

ADLINK Technology Launches COM-HPC Client Type and COM Express Type 6 Modules with 12th Generation Intel Core Processors

More News





1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411