The listing below showcases the most recently published content on AI and visual intelligence topics.
“Neuromorphic Event-based Vision: From Disruption to Adoption at Scale,” a Presentation from Prophesee
Luca Verre, Co-founder and CEO of Prophesee, presents the “Neuromorphic Event-based Vision: From Disruption to Adoption at Scale” tutorial at the May 2019 Embedded Vision Summit. Neuromorphic event-based vision is a new paradigm in imaging technology, inspired by human biology. It promises to dramatically improve machines’ ability to sense their environments and make intelligent decisions…
“Optimizing SSD Object Detection for Low-power Devices,” a Presentation from Ring
Ilya Brailovskiy, Principal Engineer, and Changsoo Jeong, Head of Algorithm, both of Ring, present the "Optimizing SSD Object Detection for Low-power Devices" tutorial at the May 2019 Embedded Vision Summit. In this talk, Brailovskiy and Jeong discuss how Ring designs smart home video cameras to make neighborhoods safer. In particular, they focus on three key…
“Visual AI Enables Autonomous Security,” an Interview with Knightscope
William “Bill” Santana Li, Co-founder, Chairman and CEO of Knightscope, talks with Vin Ratford, Executive Director of the Embedded Vision Alliance, for the “Visual AI Enables Autonomous Security” interview at the May 2019 Embedded Vision Summit. Knightscope, a physical security technologies company based in Silicon Valley, develops and sells a line of autonomous robots that…
“Object Trackers: Approaches and Applications,” a Presentation from Intel
Minje Park, Deep Learning R&D Engineer at Intel, presents the "Object Trackers: Approaches and Applications" tutorial at the May 2019 Embedded Vision Summit. Object tracking is a powerful algorithm component and one of the fundamental building blocks for many real-world computer vision applications. Object trackers provide two main benefits when incorporated into a localization module.
“The Reality of Spatial Computing: What’s Working in 2019 (And Where It Goes From Here),” a Presentation from Digi-Capital
Tim Merel, Managing Director at Digi-Capital, presents the "Reality of Spatial Computing: What’s Working in 2019 (And Where It Goes From Here)" tutorial at the May 2019 Embedded Vision Summit. This presentation gives you hard data and lessons learned on what is and isn’t working in augmented reality and virtual reality today, as well as…
May 2019 Embedded Vision Summit
The Embedded Vision Summit was held on May 20-23, 2019 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…
Augmented reality (AR) and related technologies and products are becoming increasingly popular and prevalent, led by their adoption in smartphones, tablets and other mobile computing and communications devices. While developers of more deeply embedded platforms are also motivated to incorporate AR capabilities in their products, the comparative scarcity of processing, memory, storage, and networking resources…
“Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors,” a Presentation from the University of North Carolina at Chapel Hill
Alexander C. Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, presents the “Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors” tutorial at the May 2018 Embedded Vision Summit. Berg’s group’s 2016 work on single-shot object detection (SSD) reduced the computation cost for accurate detection of object…
“Building a Typical Visual SLAM Pipeline,” a Presentation from Virgin Hyperloop One
YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the “Building a Typical Visual SLAM Pipeline” tutorial at the May 2018 Embedded Vision Summit. Maps are important for both human and robot navigation. SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has…
“Visual-Inertial Tracking for AR and VR,” a Presentation from Meta
Timo Ahonen, Director of Engineering for Computer Vision at Meta, presents the “Visual-Inertial Tracking for AR and VR” tutorial at the May 2018 Embedded Vision Summit. This tutorial covers the main current approaches to solving the problem of tracking the motion of a display for AR and VR use cases. Ahonen covers methods for inside-out…