2021 Embedded Vision Summit

Dear Colleague,

The Embedded Vision Summit is quickly approaching (May 25-28), and I just took a look at our long (and growing) list of exhibitors. All I can say is wow! These top suppliers are creating cutting-edge building-block technologies that are accelerating the future of edge AI and vision, and they’ll help you find the capabilities you’re looking for so that you can gain a competitive edge.

At the Summit, you can learn more about them and connect by:

  • Watching demos and seeing the suppliers reveal their latest processors, software, development tools and much more
  • Scheduling 1:1 meetings with their tech experts to see how their technologies will work for you, and
  • Visiting their kiosks to gather the information you need.

Whether you’re looking for embedded processors, AI accelerators, software tools, FPGAs, libraries, algorithms, boards, modules, licensable IP, or services, these exhibitors have you covered. Come check out their latest products and meet your next game-changing supplier! See the full list of exhibitors. Then check out the entire event program. And then register for the Summit today!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance


Tackling Extreme Visual Conditions for Autonomous UAVs in the Wild (Skydio)
Skydio ships autonomous robots that are flown at scale in complex, unknown environments every day to capture incredible video, automate dangerous inspections and save the lives of first responders. They must make decisions at high speed using just their onboard cameras and algorithms running on low-cost hardware. The company has invested five years of R&D into handling extreme visual scenarios rarely considered in academia or encountered by cars, ground-based robots or AR applications. Drones are commonly used in scenes with few or no semantic priors on the environment and must deftly navigate in the presence of thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, smudges and fog. Because photometric signals are not consistent in such conditions, these challenges are daunting for both classical vision approaches and unsupervised learning, and there is no ground truth for direct supervision. In this presentation, Hayk Martiros, Head of Autonomy at Skydio, takes a detailed look at these issues and how his company tackled them to push autonomous flight into production.

Designing Cameras to Detect the “Invisible”: Handling Edge Cases Without Supervision (Algolux)
Vision systems play an essential role in safety-critical applications such as advanced driver assistance systems, autonomous vehicles, video security and fleet management. However, today’s imaging and vision stacks fundamentally rely on supervised training, making it challenging to handle “edge cases” that are underrepresented in naturally biased datasets: unusual scenes, rare object types and environmental conditions such as dense fog and snow. In this talk, Felix Heide, Co-Founder and Chief Technology Officer at Algolux, introduces the computational imaging and computer vision approaches Algolux uses to handle such edge cases. Instead of relying purely on supervised downstream networks becoming more robust by seeing more training data, Algolux rethinks the camera design itself and optimizes new processing stacks, from photon to detection, that jointly solve this problem. Heide shows how such co-designed cameras, using the company’s Eos camera stack, outperform public and commercial vision systems. He also shows how the same approach can be applied to depth imaging, allowing Algolux to extract accurate, dense depth via low-cost CMOS gated imagers (in the process beating scanning lidar).


Smarter Manufacturing with Deep Learning-Based Machine Vision (Intel)
As demand for smarter and more efficient manufacturing grows, IoT technologies (including sensors, edge devices, gateways, servers and the cloud) are being used throughout the factory to run deep learning analytics workloads at the appropriate location. Efficient data-driven manufacturing can help reduce labor costs, increase quality and maximize profit. The biggest hindrance to achieving these outcomes is the difficulty of extracting data from vendor-locked, proprietary systems for downstream analytics.

In this presentation, Tara Thimmanaik, Solutions Architect at Intel, covers Intel’s approach to developing open, flexible and scalable solutions, including:

  • Intel’s technologies such as OpenVINO, Movidius Vision Processor Units, Edge Insights Software (EIS) and deep learning algorithms
  • How Intel’s offerings come together in the industrial marketplace with partnerships forged to address the constraints of manufacturing infrastructure
  • Real-world examples highlighting defect detection in textile printing (where 90% accuracy at 50 fps was achieved) and smartphone screen production (where false negatives were only 0.6%)

High-Bandwidth Multicamera Systems with a PCIe Backbone (XIMEA)
In this presentation, XIMEA’s Kevin Toerne explores high-bandwidth cameras and their application to multi-camera and embedded systems. He presents solutions for machine vision applications, focusing on the use of PCIe as a camera interface for embedded and untethered applications. He examines the current and future state of PCIe speeds and connectivity, with examples of available and upcoming hardware. Camera users implementing high-speed or multi-camera applications will learn how a PCIe interface is useful for the design, build and implementation of these systems.


Embedded Vision Summit: May 25-28, 2021

More Events


Cadence Extends Tensilica Vision and AI DSP IP Product Line with New DSPs Targeting High-End and Always-On Applications

Xilinx Introduces Kria Portfolio of Adaptive System-on-Modules for Accelerating AI Applications at the Edge

BrainChip Begins Volume Production of Akida AI Processor

Khronos Gathers Requirements for Embedded Camera and Sensor API Standards

Intel’s 11th Generation Core Processors Bring Enhanced Support for AI Inference Acceleration

More News


The Khronos Group
The Khronos Group is an open industry consortium of over 150 leading hardware and software companies creating advanced, royalty-free, acceleration standards for 3D graphics, Augmented and Virtual Reality, vision and machine learning. Khronos standards include Vulkan, Vulkan SC, OpenGL, OpenGL ES, OpenGL SC, WebGL, SPIR-V, OpenCL, SYCL, OpenVX, NNEF, OpenXR, 3D Commerce, ANARI, and glTF.


Vision Spectra (Photonics)
Vision Spectra magazine covers the latest innovations that are transforming today’s manufacturing landscape: neural networking, 3D sensing, embedded vision and more. Each issue includes rich content on a range of topics, with an in-depth look at how vision technologies are transforming industries from food and beverage to automotive and beyond. Information is presented with integrators, designers, and end-users in mind. Subscribe for free today!


Attend the Embedded Vision Summit to meet these and other leading computer vision and edge AI technology suppliers!

Edge Impulse
Edge Impulse is a leading development platform for machine learning on edge devices. The company’s mission is to provide every developer and device maker with the best development and deployment experience for machine learning at the edge, focusing on sensor, audio and computer vision applications.


Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, Intel continuously works to advance the design and manufacturing of semiconductors to help address its customers’ greatest challenges.


LAON PEOPLE provides a combined hardware and software machine vision solution using its own AI algorithms and image processing technology. The company’s state-of-the-art AI edge camera powers a variety of customized solutions with real-time multimodal detection and analysis.


Qualcomm
For more than 30 years, Qualcomm has served as the essential accelerator of wireless technologies and the ever-growing mobile ecosystem. Now our inventions are set to transform other industries by bringing connectivity, machine vision and intelligence to billions of machines and objects, catalyzing the IoT.


Synaptics is the pioneer and leader of human interface solutions, bringing innovative and intuitive user experiences to intelligent devices. Synaptics’ broad portfolio of touch, display, biometrics, voice, audio, and multimedia products is built on the company’s rich R&D, extensive IP and dependable supply chain capabilities.


Synopsys is the Silicon to Software partner for innovative companies developing electronic products and software applications. Synopsys has a long history of being a global leader in EDA and semiconductor IP and is also growing its leadership in software security and quality solutions.


Visidon is a leading provider of imaging solutions for mobile cameras. The company’s products include best-in-class software technologies such as face recognition, video stabilization, dual camera processing and image enhancement; its customers include top mobile phone OEMs around the world.


Xilinx has led the industry in developing platforms that help hardware and software developers and system architects accelerate computer vision and other edge AI algorithms. The company continues to invest in a host of new technologies specifically tailored for edge AI and vision applications.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.



1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411