The Embedded Vision Summit is thrilled to host the 2023 Deep Dive Day on Monday, May 22. Each Deep Dive is an intensive, three-hour in-person session led by industry experts, designed to explore specific subjects related to visual, perceptual, and edge AI. Tickets are priced at $25 each and can be purchased with or without a 2023 Embedded Vision Summit pass — and yes, it’s possible to attend more than one session!
This year’s Deep Dive sessions, sponsored and delivered by the Alliance Member companies listed below, will cover the following topics:
This is a great opportunity to explore these important topics in greater depth than you might get from a typical presentation or YouTube video. Deep Dive session online registrations are now open; we look forward to seeing you on May 22!
Pressed for time, but want to see the latest tools and techniques in perceptual AI? Looking to see what’s on the horizon in enabling technologies for embedded computer vision? Need to keep an eye on the competition? If so, you should be at the 2023 Embedded Vision Summit, happening May 22-24 in Santa Clara, California.
The Summit is the event for practical computer vision and edge AI; you don’t want to miss it! Register now using discount code SUMMIT23-NL and you can save 15%. Don’t delay!
Editor-In-Chief, Edge AI and Vision Alliance
Selecting the Right Camera for Your Embedded Computer Vision Project
The right camera can be key to the success of a computer vision project. For example, in an object detection or segmentation task, you can gain much more in performance by producing images with high contrast between your objects of interest and the background than by having your deep neural network do its best with sub-par footage. It’s easy to underestimate the complexity of selecting and integrating the right camera for your application, and the numerous important considerations can seem overwhelming. In this talk, Adrián Márques, Managing Partner at Digital Sense, provides an introduction to the main factors to consider when choosing a camera, and shares practical considerations you can apply to your project.
A Flexible Software Ecosystem and Marketplace for Hybrid AI Vision Solutions
For most vision applications, the best solution combines classical vision algorithms with AI techniques. In practice, developing hybrid solutions is challenging. For example, different tools are typically used for different types of algorithms. And creating efficient implementations for embedded processors poses additional challenges, because what works in the cloud or on a PC often is not suited for an embedded target. Optimized implementations of individual algorithms are often available for certain processors, but these are not portable to other processors, and are not designed to interoperate with each other. In this presentation, Bastian Steinbach, Head of Software Product Management at Basler, introduces a flexible software ecosystem that works across various hardware target systems, providing vision system developers with a “no-code” UI and tools to streamline the implementation of image acquisition, classical vision algorithms and AI inference. This enables software providers to easily adapt their software to various deployment targets and empowers processor manufacturers to enable their customers to develop computer vision solutions quickly.
May 2022 Embedded Vision Summit Vision Tank Finalist Competition
In this video, Nima Shei, Founder and CEO of Hummingbirds AI, Mohamed Elwazer, Founder and CEO of linedanceAI, Charbel Rizk, Founder and CEO of Oculi, Faris Alqadah, Founder and CEO of Qlairvoyance, and Robert Laganiere, Chief Scientific Officer of Tempo Analytics, deliver their Vision Tank finalist presentations at the May 2022 Embedded Vision Summit. The Vision Tank showcases companies that incorporate visual intelligence into their products in innovative ways and that are seeking investment, partnerships, technology, and customers. In a lively, engaging, and interactive format, these companies compete for awards and prizes while benefiting from the feedback of an expert panel of judges: Vin Ratford, CEO of Piera Systems and Executive Director of the Edge AI and Vision Alliance, Shweta Shirvastava, Senior Product Leader at Waymo, Forrest Iandola, Head of Perception at Anduril Industries, and John Feland, Master of Ceremonies and Data Whisperer and Design Thinker.
Focus on Value, Not Valuation: A Crash Course in VC Trends and Fundraising
2021 was a record-breaking year for founders and funders alike. US venture capitalists deployed a staggering $330B across nearly 17,000 deals, while simultaneously raising an additional $128B for new funds. During the same time period, the founders behind these venture-backed startups created $774B in exit value from IPOs, SPACs and acquisitions. Valuations reached all-time highs, the industry was awash in capital, and it was never easier to start a company, especially for founders innovating in areas like edge, AI/ML and computer vision. But is the party now over? More generally, what should a startup keep in mind when thinking about fundraising? In this talk from the May 2022 Embedded Vision Summit, Poole outlines the basic dynamics behind raising venture capital, establishes then-current market conditions, identifies startup fundraising trends and helps founders and early employees at edge AI and vision startups prepare themselves for the road ahead.
Attend the Embedded Vision Summit to meet this and other leading computer vision and edge AI technology suppliers!
For more than 30 years, Qualcomm has served as the essential accelerator of wireless technologies and the ever-growing mobile ecosystem. Now our inventions are set to transform other industries by bringing connectivity, machine vision and intelligence to billions of machines and objects, catalyzing the IoT.
Grabango Checkout-free Technology (Best Enterprise Edge AI End Product)
Grabango’s Checkout-free Technology was the 2022 Edge AI and Vision Product of the Year Award winner in the Enterprise Edge AI End Product category. Grabango’s system, built for existing top-tier grocery and convenience stores, relies purely on computer vision (CV) based on machine learning with targeted retraining (AI). While a handful of other startups boast basic proofs of concept here and there, Grabango has leapt ahead, announcing five mature stores with Giant Eagle, six stores with Circle K, and ten stores with bp, with additional announcements pending from two more convenience chains and two major grocery chains. All of these stores are fast retrofits, serving the same customers as before installation. Since going live, Grabango remains the only provider delivering high-volume operations in true retrofit settings with exceptionally accurate receipts.
Please see here for more information on Grabango’s Checkout-free Technology. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.