Robotics Applications for Embedded Vision

“Fundamentals of Monocular SLAM,” a Presentation from Cadence

Shrinivas Gadkari, Design Engineering Director at Cadence, presents the "Fundamentals of Monocular SLAM" tutorial at the May 2019 Embedded Vision Summit. Simultaneous Localization and Mapping (SLAM) refers to a class of algorithms that enables a device with one or more cameras and/or other sensors to create an accurate map of its surroundings, to determine the
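As a rough illustration of the map-building step the teaser alludes to (this sketch is not from the presentation itself): once camera poses are known, a monocular SLAM system can recover a 3D map point from its observations in two views by linear (DLT) triangulation. The camera matrices and point below are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) observations of the same point in each image.
    Returns the 3D point in Euclidean coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: stack them and solve A @ X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: identity rotation, second camera offset by a
# 1-unit baseline along x (values chosen only for this sketch).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])   # ground-truth 3D point
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
# In this noise-free setting, X_est recovers X_true exactly.
```

In a real monocular pipeline the poses themselves come from feature matching and essential-matrix decomposition, and the map is only recoverable up to a global scale; this sketch shows just the triangulation kernel.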

Read More »

“Applied Depth Sensing with Intel RealSense,” a Presentation from Intel

Sergey Dorodnicov, Software Architect at Intel, presents the "Applied Depth Sensing with Intel RealSense" tutorial at the May 2019 Embedded Vision Summit. As robust depth cameras become more affordable, many new products will benefit from true 3D vision. This presentation highlights the benefits of depth sensing for tasks such as autonomous navigation, collision avoidance and
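For context on the depth cameras the teaser describes (a generic stereo relationship, not an excerpt from the presentation): a stereo depth sensor converts the disparity between matched pixels into metric depth via Z = f * B / d. The focal length and baseline below are invented placeholder values, not actual RealSense specifications.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Metric depth from stereo disparity: Z = f * B / d.

    disparity_px : pixel disparity between the left and right views.
    focal_px     : focal length expressed in pixels.
    baseline_m   : distance between the two cameras, in metres.
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical sensor geometry: 640-pixel focal length, 50 mm baseline.
# A 16-pixel disparity then corresponds to a depth of 2.0 metres.
depth_m = disparity_to_depth(16.0, 640.0, 0.05)
```

The inverse relationship also explains a practical limit of such cameras: depth resolution degrades quadratically with distance, since a one-pixel disparity error costs more depth accuracy at longer range.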

Read More »

“Visual AI Enables Autonomous Security,” an Interview with Knightscope

William "Bill" Santana Li, Co-founder, Chairman and CEO of Knightscope, talks with Vin Ratford, Executive Director of the Embedded Vision Alliance, for the "Visual AI Enables Autonomous Security" interview at the May 2019 Embedded Vision Summit. Knightscope, a physical security technologies company based in Silicon Valley, develops and sells a line of autonomous robots that

Read More »

May 2019 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 20-23, 2019 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in PDF form. To download the

Read More »

Multi-sensor Fusion for Robust Device Autonomy

While visible light image sensors may be the baseline "one sensor to rule them all" included in all autonomous system designs, they are not a panacea on their own. By combining them with other sensor technologies, such as "situational awareness" sensors (standard and high-resolution radar, LiDAR, infrared and UV, ultrasound and sonar, etc.) and "positional awareness" sensors such as
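One minimal fusion rule that the article's theme can be sketched with (an assumption for illustration, not the article's own method) is inverse-variance weighting: independent sensor estimates of the same quantity are combined in proportion to their confidence, and the fused estimate is always at least as certain as the best single sensor. The sensor readings and variances below are hypothetical.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.

    Returns the fused estimate and its variance; the fused variance is
    never larger than the smallest input variance.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Hypothetical range-to-obstacle readings: a camera-derived depth estimate
# (noisier) and a LiDAR return (more precise), in metres.
camera_range, camera_var = 10.4, 0.5
lidar_range, lidar_var = 10.1, 0.05

fused_range, fused_var = fuse([camera_range, lidar_range],
                              [camera_var, lidar_var])
# The fused range lies close to the LiDAR reading, which dominates
# because of its much smaller variance.
```

A full autonomy stack would wrap this idea in a Kalman or particle filter to handle dynamics and correlated errors, but the static weighted average captures why adding a second modality helps.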

Read More »

“The Roomba 980: Computer Vision Meets Consumer Robotics,” a Presentation from iRobot

Mario Munich, Senior Vice President of Technology at iRobot, presents the "Roomba 980: Computer Vision Meets Consumer Robotics" tutorial at the May 2018 Embedded Vision Summit. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. The availability of affordable electro-mechanical components, powerful embedded microprocessors and

Read More »

May 2018 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 21-24, 2018 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in PDF form. To download the

Read More »

May 2017 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 1-3, 2017 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in PDF form. To download the

Read More »

Vision Processing Opportunities in Drones

UAVs (unmanned aerial vehicles), commonly known as drones, are a rapidly growing market and increasingly leverage embedded vision technology for digital video stabilization, autonomous navigation, and terrain analysis, among other functions. This article reviews drone market sizes and trends, and then discusses embedded vision technology applications in drones, such as image quality optimization, autonomous navigation,

Read More »

May 2016 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 2-4, 2016 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in PDF form. To download the

Read More »

Deep Learning Use Cases for Computer Vision (Download)

Six Deep Learning-Enabled Vision Applications in Digital Media, Healthcare, Agriculture, Retail, Manufacturing, and Other Industries. The enterprise applications for deep learning have only scratched the surface of their potential applicability and use cases. Because it is data agnostic, deep learning is poised to be used in almost every enterprise vertical market, including agriculture, media, manufacturing,

Read More »

Visual Intelligence Gives Robotic Systems Spatial Sense

This article is an expanded version of one originally published at EE Times' Embedded.com Design Line. It is reprinted here with the permission of EE Times. In order for robots to meaningfully interact with objects around them as well as move about their environments, they must be able to see and discern their surroundings. Cost-effective

Read More »

May 18-21, 2020, Santa Clara, California

The preeminent event for practical, deployable computer vision and visual AI, for product creators who want to bring visual intelligence to products.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone

+1 (925) 954-1411