Entertainment Applications for Embedded Vision
September 2020 Embedded Vision Summit Slides
The Embedded Vision Summit was held online on September 15-25, 2020, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in PDF form. To…
“Fundamentals of Monocular SLAM,” a Presentation from Cadence
Shrinivas Gadkari, Design Engineering Director at Cadence, presents the “Fundamentals of Monocular SLAM” tutorial at the May 2019 Embedded Vision Summit. Simultaneous Localization and Mapping (SLAM) refers to a class of algorithms that enables a device with one or more cameras and/or other sensors to create an accurate map of its surroundings, to determine the
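A core building block of the mapping side of monocular SLAM is triangulating a 3D landmark from its observations in two camera views. As a rough illustration (not taken from the Cadence presentation), the sketch below implements linear (DLT) triangulation with NumPy; the camera matrices and test point are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) observations of the same point in each view.
    Returns the 3D point in inhomogeneous coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P row 3) - (P row 1) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two identity-intrinsics cameras; the second is translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # → True
```

With noise-free observations the DLT recovers the point exactly; a real SLAM pipeline would feed noisy feature matches through this step and refine the result with bundle adjustment.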
“Teaching Machines to See, Understand, Describe and Predict Sports Games in Real Time,” a Presentation from Sportlogiq
Mehrsan Javan, CTO of Sportlogiq, presents the “Teaching Machines to See, Understand, Describe and Predict Sports Games in Real Time” tutorial at the May 2019 Embedded Vision Summit. Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this means designing a fully-automated, robust, end-to-end pipeline; from visual input,
May 2019 Embedded Vision Summit Slides
The Embedded Vision Summit was held on May 20-23, 2019 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…
May 2018 Embedded Vision Summit Slides
The Embedded Vision Summit was held on May 21-24, 2018 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…
Computer Vision in Surround View Applications
The ability to "stitch" together (offline or in real-time) multiple images taken simultaneously by multiple cameras and/or sequentially by a single camera, in both cases capturing varying viewpoints of a scene, is becoming an increasingly appealing (if not necessary) capability in an expanding variety of applications. High quality of results is a critical requirement, one
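After the overlapping views have been aligned, the remaining step in a stitching pipeline is blending the seam so the result looks like a single image. As a minimal sketch of one common approach, linear feathering across the overlap (the array shapes and test values below are invented for the example):

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally overlapping grayscale strips with linear feathering.

    The last `overlap` columns of `left` depict the same scene region
    as the first `overlap` columns of `right`.
    """
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w))
    out[:, :left.shape[1]] = left
    out[:, left.shape[1]:] = right[:, overlap:]
    # Ramp the blend weight from all-left to all-right across the overlap.
    alpha = np.linspace(1.0, 0.0, overlap)
    seam = alpha * left[:, -overlap:] + (1 - alpha) * right[:, :overlap]
    out[:, left.shape[1] - overlap:left.shape[1]] = seam
    return out

a = np.full((2, 4), 10.0)  # left strip
b = np.full((2, 4), 20.0)  # right strip, 2-column overlap with `a`
pano = feather_blend(a, b, overlap=2)
print(pano.shape)  # → (2, 6)
```

Production stitchers (e.g. multi-band blending) are considerably more sophisticated, but the weighting idea is the same: each pixel in the overlap is a convex combination of the two source images.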
“Using Computer Vision and Machine Learning to Understand Pet Behavior,” a Presentation from PetCube
Alex Neskin, founder and CTO of PetCube, delivers the presentation "Using Computer Vision and Machine Learning to Understand Pet Behavior" at the Embedded Vision Alliance's December 2017 Vision Industry and Technology Forum. Neskin explains how his start-up is using vision and AI to improve the lives of pets and their owners.
“Using Markerless Motion Capture to Win Baseball Games,” a Presentation from KinaTrax
Steven Cadavid, President of KinaTrax, presents the "Using Markerless Motion Capture to Win Baseball Games" tutorial at the May 2017 Embedded Vision Summit. KinaTrax develops a markerless motion capture system that computes the kinematic data of an in-game baseball pitch. The system is installed in several Major League Baseball ballparks including Wrigley Field, home of
“Making Cozmo See,” a Presentation from Anki
Andrew Stein, Lead Computer Vision Engineer at Anki, presents the "Making Cozmo See" tutorial at the May 2017 Embedded Vision Summit. In this presentation, Stein describes the vision capabilities of Cozmo, Anki's latest consumer robotics product. Cozmo is a sophisticated entertainment robot focused on personality, interactivity, and game play. It was one of the hottest
“Computer Vision and Machine Learning at the Edge,” a Presentation from Qualcomm Technologies
Michael Mangan, a member of the Product Manager Staff at Qualcomm Technologies, presents the "Computer Vision and Machine Learning at the Edge" tutorial at the May 2017 Embedded Vision Summit. Computer vision and machine learning techniques are applied to myriad use cases in smartphones today. As mobile technology expands beyond the smartphone vertical, both technologies
May 2017 Embedded Vision Summit Slides
The Embedded Vision Summit was held on May 1-3, 2017 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…
Facial Analysis Delivers Diverse Vision Processing Capabilities
Computers can learn a lot about a person from their face – even without uniquely identifying that person. Assessments of age range, gender, ethnicity, gaze direction, attention span, emotional state and other attributes are all now possible at real-time speeds, via advanced algorithms running on cost-effective hardware. This article provides an overview of
Vision Processing Opportunities in Virtual Reality
VR (virtual reality) systems are beginning to incorporate practical computer vision techniques, dramatically improving the user experience as well as reducing system cost. This article provides an overview of embedded vision opportunities in virtual reality systems, such as environmental mapping, gesture interface, and eye tracking, along with implementation details. It also introduces an industry alliance
“What’s Hot in Embedded Vision for Investors?,” an Embedded Vision Summit Panel Discussion
Jeff Bier of the Embedded Vision Alliance (moderator), Don Faria of Intel Capital, Jeff Hennig of Bank of America Merrill Lynch, Gabriele Jansen of Vision Ventures, Helge Seetzen of TandemLaunch, and Peter Shannon of Firelake Capital Management participate in the Investor Panel at the May 2016 Embedded Vision Summit. This moderated panel discussion addresses emerging
“Democratizing Computer Vision Development: Lessons from the Video Game Industry,” a Presentation from WRNCH
Paul Kruszewski, President of WRNCH, presents the "Democratizing Computer Vision Development: Lessons from the Video Game Industry" tutorial at the May 2016 Embedded Vision Summit. Computer vision offers great promise: algorithms are maturing rapidly and processing power continues to grow by leaps and bounds. But today’s approach to computer vision software development – hiring a
May 2016 Embedded Vision Summit Proceedings
The Embedded Vision Summit was held on May 2-4, 2016 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…
Deep Learning Use Cases for Computer Vision (Download)
Six Deep Learning-Enabled Vision Applications in Digital Media, Healthcare, Agriculture, Retail, Manufacturing, and Other Industries The enterprise applications for deep learning have only scratched the surface of their potential applicability and use cases. Because it is data agnostic, deep learning is poised to be used in almost every enterprise vertical…