The Embedded Vision Summit, returning to an in-person format this year in Santa Clara, California, is the key event for system and application developers incorporating computer vision and visual AI into products. It attracts a unique audience of over 1,000 product creators, entrepreneurs and business decision-makers who are creating and using computer vision and visual AI technologies, and it’s an ideal venue for learning, sharing insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in computer vision and visual AI.
Once again we’ll be offering a packed program with 85+ sessions, 55+ technology exhibits, and 100+ demos, all covering the technical and business aspects of practical computer vision, deep learning, visual AI and related technologies. And new for 2022 are the Edge AI Deep Dive Days, a series of in-depth sessions focused on specific topics in visual AI at the edge. Registration is now open, and if you register by this Friday, April 22, you can save 15% by using the code SUMMIT22-NL. Register now and tell a friend! You won’t want to miss what is shaping up to be our best Summit yet.
Editor-In-Chief, Edge AI and Vision Alliance
IMAGE SENSING’S EVOLUTION AND OPTIONS
The Transformation from Imaging to Sensing: Driving a Market Revolution
Over the past 20 years, digital imaging has grown to become a huge industry with a focus on producing images for human consumption. More recently, the emphasis has begun shifting to using images as sensory inputs to machines. In this talk from last year’s Embedded Vision Summit, Pierre Cambou, Principal Analyst at Yole Développement, explores how this shift is transforming the imaging industry. Cambou examines market dynamics in the mobile, consumer, computing, automotive, medical, security, industrial, and aerospace and defense segments. He explains how image sensor sales are being affected by this shift, using the example of 3D face recognition in mobile. He also discusses how image-related computing is being impacted. For example, while in the past most devices had to incorporate some kind of image signal processor, now the vision processor is becoming the new imperative.
Alternative Image Sensors for Intelligent In-Cabin Monitoring, Home Security and Smart Devices
The traditional approach for in-cabin monitoring uses cameras that capture only visible or near-infrared (NIR) light and are designed to represent a scene as closely as possible to what a human expects to see, at a constant frame rate. But visible or NIR light represents only a small fraction of the information available to us, and frames gather both wanted and unwanted information without regard to changes in the scene, wasting computation and missing important temporal details. Alternative sensing paradigms such as event cameras and thermal cameras can overcome some of these limits and enable features that would not be possible with a conventional camera. This presentation from Petronel Bigioi, CTO for Product Licensing at Xperi, details the use of alternative image sensors for enabling new features and capabilities for in-cabin monitoring, home surveillance and smart cameras. Improved energy efficiency, better results in low-light conditions and new safety features are some of the key benefits of these alternative sensing methods.
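To make the contrast with conventional frame capture concrete, here is a minimal sketch (assuming the standard event-camera representation, where each event is a timestamp/x/y/polarity tuple emitted only when a pixel’s brightness changes; the data and function names below are hypothetical, not from any particular vendor SDK):

```python
from dataclasses import dataclass

@dataclass
class Event:
    t_us: int      # microsecond timestamp
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 = brightness increase, -1 = decrease

def accumulate(events, width, height, t_start, t_end):
    """Accumulate events from a time window into a 2D frame.

    Unlike a conventional camera, pixels that see no brightness
    change contribute nothing, so no computation is wasted on
    static regions of the scene."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        if t_start <= e.t_us < t_end:
            frame[e.y][e.x] += e.polarity
    return frame

# Hypothetical event stream: only two pixels changed in this window.
events = [Event(100, 2, 1, +1), Event(250, 3, 1, -1), Event(900, 0, 0, +1)]
frame = accumulate(events, width=4, height=2, t_start=0, t_end=500)
```

Because events arrive with microsecond timestamps rather than at a fixed frame rate, downstream processing can pick whatever time window the application needs, preserving the fine temporal detail that fixed-rate frames discard.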
VISION FOR AUTONOMOUS AIRCRAFT
Productizing Complex Visual AI Systems for Autonomous Flight
The development of visual AI systems for real-world applications is a complex undertaking characterized by a variety of diverse challenges. While the media spotlight is often focused on academic AI models that improve performance based on well-defined datasets, in many instances insufficient attention is dedicated to the engineering complexity of productizing real-world applications. Carlo Dal Mutto, Director of Engineering at Airbus, begins this presentation with an overview of key topics that must be addressed in productizing complex visual AI systems, including the definition of system requirements; hardware and software design; data acquisition, labelling and management; and AI model development, deployment, validation and maintenance. Next, he delves into software design and AI model deployment in greater detail. He illustrates key challenges and promising techniques via practical examples and results from his company’s work delivering visual AI systems for autonomous flight as part of Project Wayfinder at Acubed, Airbus’ innovation center in Silicon Valley.
Building an Autonomous Detect-and-Avoid System for Commercial Drones
Commercial and industrial drones have the potential to completely disrupt industries and create new ones. Used in applications such as infrastructure inspection, search and rescue, package delivery, and many others, they can save time, money, and lives. Most of these applications require a real-time understanding of the environment and the risks of collision. At the same time, commercial drones are limited in the size, weight, and power they can carry, narrowing the options for sensors and computing architectures. In this presentation, Alejandro Galindo, Head of Research and Development at Iris Automation, dives into what it takes to build an autonomous detect-and-avoid system for commercial drones and, in particular, focuses on computer vision issues such as predictability and reduction of false positives. Why are they important and what does it take to drive them in the right direction?
Oculi: Putting the ‘Human Eye’ in AI
Do you want to be part of an experienced team of semiconductor and vision experts developing and commercializing a new class of efficient vision AI? Oculi develops a novel vision architecture enabling new computer vision technologies for digital signage, gaming, interactive displays, laptops and AR/VR, as well as IoT, smartphone, mobility, industrial and defense applications. Apply now!
EMBEDDED VISION SUMMIT PARTNER SHOWCASE
Hackster
Hackster, an Avnet community, is the world’s largest developer community for learning, programming, and building hardware, with 1.9M+ members and 30K+ open source projects.
Vision Systems Design
Vision Systems Design is the machine vision and imaging resource for engineers and integrators worldwide. Receive unique, unbiased and in-depth technical information about the design of machine vision and imaging systems for demanding applications in your inbox today.
Edge Impulse
Edge Impulse is a leading development platform for machine learning on edge devices. The company’s mission is to enable every developer and device maker with the best development and deployment experience for machine learning on the edge, focusing on sensor, audio, and computer vision applications.
Qualcomm
For more than 30 years, Qualcomm has served as the essential accelerator of wireless technologies and the ever-growing mobile ecosystem. Now our inventions are set to transform other industries by bringing connectivity, machine vision and intelligence to billions of machines and objects, catalyzing the IoT.
Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.