Technologies
The listing below showcases the most recently published content associated with various AI and visual intelligence functions.

“Object Trackers: Approaches and Applications,” a Presentation from Intel
Minje Park, Deep Learning R&D Engineer at Intel, presents the "Object Trackers: Approaches and Applications" tutorial at the May 2019 Embedded Vision Summit. Object tracking is a powerful algorithm component and one of the fundamental building blocks for many real-world computer vision applications. Object trackers provide two main benefits when incorporated into a localization module.
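
As a concrete starting point, the sketch below runs a single-object tracking loop with OpenCV's KCF tracker (assuming opencv-contrib-python is installed); it illustrates the generic init-then-update workflow rather than anything specific to Park's presentation, and the input video name is a placeholder.

```python
# Minimal single-object tracking loop (sketch only; assumes opencv-contrib-python).
import cv2

cap = cv2.VideoCapture("input.mp4")          # placeholder input video
ok, frame = cap.read()

# Let the user draw the initial bounding box, then initialize the tracker on it.
bbox = cv2.selectROI("init", frame)
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)      # propagate the box into the new frame
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```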

“The Reality of Spatial Computing: What’s Working in 2019 (And Where It Goes From Here),” a Presentation from Digi-Capital
Tim Merel, Managing Director at Digi-Capital, presents the "Reality of Spatial Computing: What’s Working in 2019 (And Where It Goes From Here)" tutorial at the May 2019 Embedded Vision Summit. This presentation gives you hard data and lessons learned on what is and isn’t working in augmented reality and virtual reality today, as well as…

May 2019 Embedded Vision Summit Slides
The Embedded Vision Summit was held on May 20-23, 2019 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…

Computer Vision for Augmented Reality in Embedded Designs
Augmented reality (AR) and related technologies and products are becoming increasingly popular and prevalent, led by their adoption in smartphones, tablets and other mobile computing and communications devices. While developers of more deeply embedded platforms are also motivated to incorporate AR capabilities in their products, the comparative scarcity of processing, memory, storage, and networking resources…

“Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors,” a Presentation from the University of North Carolina at Chapel Hill
Alexander C. Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, presents the “Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors” tutorial at the May 2018 Embedded Vision Summit. Berg’s group’s 2016 work on single-shot object detection (SSD) reduced the computation cost for accurate detection of object…
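
Purely for orientation, here is a minimal sketch of running a pre-trained SSD detector for inference with OpenCV's DNN module. The model file names and preprocessing constants are assumptions (they follow the commonly distributed MobileNet-SSD Caffe release), not details from Berg's talk.

```python
# Single forward pass through a pre-trained SSD model (sketch; file names are placeholders).
import cv2

net = cv2.dnn.readNetFromCaffe("ssd_deploy.prototxt", "ssd_weights.caffemodel")
image = cv2.imread("image.jpg")
h, w = image.shape[:2]

# SSD evaluates a fixed-size input once and emits rows of
# [image_id, class_id, confidence, x1, y1, x2, y2] for all candidate boxes.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843,
                             (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        box = detections[0, 0, i, 3:7] * [w, h, w, h]   # scale back to image size
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", image)
```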

“Building a Typical Visual SLAM Pipeline,” a Presentation from Virgin Hyperloop One
YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the “Building a Typical Visual SLAM Pipeline” tutorial at the May 2018 Embedded Vision Summit. Maps are important for both human and robot navigation. SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has…
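
To make the idea of a pipeline concrete, the sketch below covers only the front end of a visual SLAM / visual odometry system: ORB features are matched between two frames and the relative camera pose is recovered from the essential matrix with OpenCV. The file names and camera intrinsics are placeholders, and a full SLAM system would add mapping, loop closure, and optimization on top of this.

```python
# Two-frame visual odometry front end (sketch; intrinsics and file names assumed).
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],      # assumed pinhole camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe ORB features in both frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors across the two frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix with RANSAC and recover R, t (translation up to scale).
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```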

“Visual-Inertial Tracking for AR and VR,” a Presentation from Meta
Timo Ahonen, Director of Engineering for Computer Vision at Meta, presents the “Visual-Inertial Tracking for AR and VR” tutorial at the May 2018 Embedded Vision Summit. This tutorial covers the main current approaches to solving the problem of tracking the motion of a display for AR and VR use cases. Ahonen covers methods for inside-out…
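
As rough intuition for why inertial and visual measurements are fused at all, the toy sketch below blends fast but drifting gyro integration with slower, drift-free vision estimates using a one-dimensional complementary filter. It illustrates only the fusion idea, not the 6-DoF methods discussed in the talk; all rates and values are made up for the example.

```python
# Toy 1-D complementary filter: IMU prediction plus occasional vision correction.
def fuse_orientation(angle, gyro_rate, dt, vision_angle=None, alpha=0.98):
    """Propagate orientation with the gyro; correct with vision when available."""
    angle += gyro_rate * dt                    # IMU prediction (high rate, drifts)
    if vision_angle is not None:               # vision update (low rate, absolute)
        angle = alpha * angle + (1.0 - alpha) * vision_angle
    return angle

# Example: 200 Hz gyro samples, a vision estimate every 10th sample.
angle = 0.0
for k in range(1000):
    vision = 0.5 if k % 10 == 0 else None      # pretend vision always reports 0.5 rad
    angle = fuse_orientation(angle, gyro_rate=0.01, dt=0.005, vision_angle=vision)
print("fused angle estimate:", angle)
```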

“Understanding and Implementing Face Landmark Detection and Tracking,” a Presentation from PathPartner Technology
Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the “Understanding and Implementing Face Landmark Detection and Tracking” tutorial at the May 2018 Embedded Vision Summit. Face landmark detection is of profound interest in computer vision because it enables tasks ranging from facial expression recognition to understanding human behavior. Face landmark detection and tracking can be…
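
For readers who want to experiment, below is a minimal landmark-detection sketch using dlib's publicly available 68-point shape predictor; the model file name and image path are assumptions for illustration, not artifacts of the presentation.

```python
# Detect faces and draw 68 facial landmarks per face (sketch; paths assumed).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    landmarks = predictor(gray, face)          # 68 (x, y) facial landmarks
    for i in range(landmarks.num_parts):
        p = landmarks.part(i)
        cv2.circle(image, (p.x, p.y), 2, (0, 255, 0), -1)

cv2.imwrite("landmarks.jpg", image)
```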

“Words, Pictures, and Common Sense: Visual Question Answering,” a Presentation from Facebook and Georgia Tech
Devi Parikh, Research Scientist at Facebook AI Research (FAIR) and Assistant Professor at Georgia Tech, presents the “Words, Pictures, and Common Sense: Visual Question Answering” tutorial at the May 2018 Embedded Vision Summit. Wouldn’t it be nice if machines could understand content in images and communicate this understanding as effectively as humans? Such technology would…

“Creating a Computationally Efficient Embedded CNN Face Recognizer,” a Presentation from PathPartner Technology
Praveen G.B., Technical Lead at PathPartner Technology, presents the “Creating a Computationally Efficient Embedded CNN Face Recognizer” tutorial at the May 2018 Embedded Vision Summit. Face recognition systems have made great progress thanks to the availability of data, deep learning algorithms, and better image sensors. Face recognition systems should be tolerant of variations in illumination, pose…
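
To illustrate the matching stage of an embedding-based face recognizer, the sketch below compares fixed-length face embeddings by cosine similarity. The embeddings would normally come from whatever CNN the recognizer uses; here the random vectors stand in as hypothetical placeholders, and the threshold value is arbitrary.

```python
# Matching stage of an embedding-based face recognizer (sketch; embeddings are stand-ins).
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe_embedding, gallery, threshold=0.6):
    """Return the best-matching enrolled identity, or None if below threshold."""
    best_name, best_score = None, -1.0
    for name, ref_embedding in gallery.items():
        score = cosine_similarity(probe_embedding, ref_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Example with random stand-in embeddings (a real system would embed aligned face crops).
gallery = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
probe = gallery["alice"] + 0.05 * np.random.rand(128)   # noisy re-capture of "alice"
print(identify(probe, gallery))
```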