Dear Colleague,

2023 Embedded Vision Summit

Registration is now open for the 2023 Embedded Vision Summit, coming up May 22-25 in Santa Clara, California! The Summit is the premier conference and tradeshow for innovators incorporating computer vision and visual and perceptual AI in products. The program is designed to cover the most important technical and business aspects of practical computer vision, deep learning and perceptual AI. Register now using discount code SUMMIT23-NL to save 25%. Don't delay!

The Vision Tank is the Edge AI and Vision Alliance's annual start-up competition, showcasing the best new ventures using computer vision or visual AI in their products or services. The competition is open to early-stage companies, and entrants are judged on four criteria: technology innovation, business plan, team and business opportunity. It is intended for start-ups that:

  • Have an initial product or prototype
  • Have ~15 or fewer people
  • Have raised less than ~$2M in capital

The Vision Tank final round takes place live on stage during the Embedded Vision Summit. Winners receive:

  • A $5,000 cash award
  • Membership in the Edge AI and Vision Alliance for one year

They also:

  • Present their new products or product ideas to more than 1,400 influencers and product creators at the 2023 Embedded Vision Summit
  • Build brand awareness and visibility through Alliance marketing channels
  • Benefit from advice from top industry experts
  • Gain introductions to potential investors, customers, employees and suppliers

For more information and to enter, please see the program page. The submission deadline is March 3 and the application requires detailed information, so don’t delay!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Privacy: A Surmountable Challenge for Computer Vision – Santa Clara University
Ethical concerns about privacy come with the territory of computer vision—after all, it’s hard to deny the privacy implications when considering the digital images or videos that are the bread and butter of these applications. But privacy concerns need not be a roadblock that stands in the way of technological innovation. While the idea of “informed consent” has been a popular method of preserving privacy in some domains, it may not always be either possible or desirable in the context of emerging technologies like computer vision. In this presentation, Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, considers a more flexible approach to preserving privacy that pays special attention to the context that a device is being designed for. In addition, she reviews several examples that demonstrate how small changes to the design of technology can make a big difference when it comes to preserving privacy.

What Happens When Your Speed of AI Innovation Exceeds Your Ability to See the Regulatory Challenges Ahead? – Dorsey & Whitney, LLP
Edge AI and computer vision, if not thoughtfully developed and applied, will trigger extraordinary scrutiny by legislative policy makers and regulators; indeed, we are already seeing a deluge of proposed AI regulatory schemes across the globe. Almost all of these are in the early stages, as government regulators are still struggling to grasp the potential uses, consequences, and thus regulatory implications, of AI. That understanding will almost certainly lag the pace of development of AI technology by years. The most successful innovators need to predict their regulatory obligations well in advance of the actual regulations being promulgated. In this talk, Robert Cattanach, Partner and Cybersecurity Team Leader at Dorsey & Whitney, LLP, offers a high-level overview of the more significant regulatory initiatives in the EU and US, as well as providing practical guidelines for developers in the AI space to help them anticipate likely regulatory trends.


Introduction to Computer Vision with Convolutional Neural Networks – Intel
This presentation covers the basics of computer vision using convolutional neural networks (CNNs). Mohammad Haghighat, Senior AI Software Product Manager at Intel, begins by introducing some important conventional computer vision techniques and then transitions to explaining the basics of machine learning and CNNs, showing how CNNs are used in visual perception. Haghighat illustrates the building blocks and computational elements of neural networks through examples that give an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
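To make the "building blocks" concrete, here is a minimal sketch (not from the talk) of the core CNN operation: sliding a small learned kernel over an image to produce a feature map. The kernel values below form a classic vertical-edge detector, chosen for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core computational element of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output value is a dot product of the kernel with one image patch.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A Sobel-like vertical-edge kernel applied to a step image:
# the response is large at the dark-to-bright boundary and zero elsewhere.
image = np.zeros((5, 5))
image[:, 2:] = 1.0  # left half dark, right half bright
kernel = np.array([[-1., 0., 1.],
                   [-2., 0., 2.],
                   [-1., 0., 1.]])
response = conv2d(image, kernel)
print(response)
```

In a trained CNN, many such kernels are learned from data rather than hand-designed, and their responses are stacked, passed through nonlinearities and pooled across layers.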

Compound CNNs for Improved Classification Accuracy – Southern Illinois University Carbondale
In this talk, Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents a novel approach to improving the accuracy of convolutional neural networks (CNNs) used for classification. The approach utilizes the confusion matrix of the original CNN on a specific dataset to identify sets of low-accuracy classes that resemble each other with respect to the error distribution. Using this information, several shallow networks are generated which operate in parallel with each other and evaluate input frames before the frames reach the original, large CNN. The shallow networks are able to classify the low-accuracy classes more accurately than the original network, while eliminating the need to run the larger network on certain images. Hence, by combining the shallow networks with the original network, accuracy is improved, with virtually no increase in inference time.
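The first step of the approach described above, finding sets of classes that are frequently mistaken for one another, can be sketched from a confusion matrix. The following is a hypothetical illustration (the threshold, the graph construction and the function name are our assumptions, not the speaker's method): classes are linked when their mutual misclassification rate is high, and connected groups become candidates for a dedicated shallow network.

```python
import numpy as np

def confused_class_groups(confusion, threshold=0.1):
    """Group classes whose mutual error rates exceed a threshold.

    Hypothetical sketch: row-normalize the confusion matrix, link class
    pairs with a high combined misclassification rate, and return the
    connected components with more than one member.
    """
    n = confusion.shape[0]
    rates = confusion / confusion.sum(axis=1, keepdims=True)  # per-class error rates
    adj = (rates + rates.T) > threshold  # symmetric "confusability" graph
    np.fill_diagonal(adj, False)
    groups, seen = [], set()
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # depth-first search for one connected component
            c = stack.pop()
            if c in comp:
                continue
            comp.add(c)
            stack.extend(np.flatnonzero(adj[c]))
        seen |= comp
        if len(comp) > 1:
            groups.append(sorted(comp))
    return groups

# Toy confusion matrix: classes 0 and 1 are often confused; class 2 stands alone.
cm = np.array([[80, 15,  5],
               [12, 85,  3],
               [ 2,  1, 97]])
print(confused_class_groups(cm))  # [[0, 1]]
```

Each resulting group would then get its own shallow network, run ahead of the large CNN to resolve only those easily confused classes.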


Short-wave Infrared: The Dawn of a New Imaging Age? – Yole Group Webinar: March 2, 2023, 9:00 am PT

Embedded Vision Summit: May 22-25, 2023, Santa Clara, California

More Events


Teledyne e2v Releases Hydra3D+, a High Resolution ToF Sensor that Works in Varied Light Conditions Without Motion Artifacts

STMicroelectronics Unveils MCU Edge-AI Developer Cloud

Immervision Announces Automotive Grade Lens for In-cabin Vision Systems

Axelera AI Announces Metis AI Platform

Syntiant Introduces Production-ready Edge AI Software Solutions for Image Detection, Tracking and Classification

More News


Luxonis OAK-D-Lite (Best Camera or Sensor) – Luxonis
Luxonis’ OAK-D-Lite is the 2022 Edge AI and Vision Product of the Year Award winner in the Cameras and Sensors category. OAK-D-Lite is Luxonis’ next-generation spatial AI camera. It can run AI and CV on-device and fuse these results with stereo disparity depth perception to provide spatial coordinates of the objects and features it detects. OAK-D-Lite combines the power of the Intel Myriad X Visual Processing Unit with a 4K (13 Mpixel) color camera and 480p stereo depth cameras, and can produce 300k depth points at up to 200 FPS. It has a USB-C connector for power delivery and communication with the host computer, and its 4.5 W maximum power consumption is ideal for low-power applications. Its 7.5 cm stereo baseline enables depth perception from 20 cm up to 15 m. OAK-D-Lite is an entry-level device designed to be accessible to anyone, from corporations to students. Its tiny form factor can fit just about anywhere, including in your pocket, and it comes with a sleek front gorilla-glass cover. OAK-D-Lite is offered at an MSRP of $149.
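The relationship between the 7.5 cm baseline and the stated depth range follows from the standard pinhole stereo model, Z = f · B / d. A quick sketch, where the focal length in pixels is an illustrative assumption (not a published spec of the device):

```python
# Stereo depth from disparity: Z = f * B / d (pinhole stereo model).
# The 7.5 cm baseline comes from the article; the focal length below
# is an assumed value for illustration only.
focal_px = 450.0    # assumed focal length of the stereo pair, in pixels
baseline_m = 0.075  # OAK-D-Lite stereo baseline (7.5 cm)

def depth_m(disparity_px):
    """Depth in meters for a given disparity in pixels."""
    return focal_px * baseline_m / disparity_px

print(round(depth_m(2.25), 2))    # small disparity -> far: 15.0 m
print(round(depth_m(168.75), 2))  # large disparity -> near: 0.2 m
```

The model makes the trade-off visible: far-range accuracy is limited by how small a disparity the matcher can resolve, while the near limit is set by the maximum disparity searched.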

Please see here for more information on Luxonis’ OAK-D-Lite. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411