Algorithms

May 2015 Embedded Vision Summit Technical Presentation: “3D from 2D: Theory, Implementation, and Applications of Structure from Motion,” Marco Jacobs, videantis

Marco Jacobs, Vice President of Marketing at videantis, presents the "3D from 2D: Theory, Implementation, and Applications of Structure from Motion" tutorial at the May 2015 Embedded Vision Summit. Structure from motion combines several algorithms to extract depth information from a single moving 2D camera. Using a calibrated camera, feature detection, and […]
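
As a rough illustration of the pipeline the talk describes, the sketch below chains the usual two-view steps with OpenCV: feature detection, matching, essential-matrix estimation, pose recovery, and triangulation. The camera matrix K and the frame file names are placeholders; this is a minimal sketch of the general technique, not the videantis implementation.

```python
# Two-view structure-from-motion sketch (OpenCV). K and frame paths are placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],      # assumed calibrated camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe features in both frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match features between the two views.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the essential matrix and recover the relative camera motion.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate matched points into 3D (up to an unknown scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print("Recovered", len(pts3d), "3D points (scale is not observable from motion alone).")
```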


May 2015 Embedded Vision Summit Technical Presentation: “Low-power Embedded Vision: A Face Tracker Case Study,” Pierre Paulin, Synopsys

Pierre Paulin, R&D Director for Embedded Vision at Synopsys, presents the "Low-power Embedded Vision: A Face Tracker Case Study" tutorial at the May 2015 Embedded Vision Summit. The ability to reliably detect and track individual objects or people has numerous applications, for example in the video-surveillance and home entertainment fields. While this has proven to […]
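
The presentation itself targets a dedicated low-power vision processor; purely as a generic illustration of detect-and-track, the sketch below runs an OpenCV Haar-cascade face detector on each frame and associates detections with existing tracks by nearest centroid. The video path and distance threshold are assumptions, and none of this reflects the Synopsys design.

```python
# Generic detect-and-track sketch (OpenCV Haar cascade + nearest-centroid association).
# Not the Synopsys face tracker; video path and threshold are illustrative.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("input.mp4")   # placeholder video source
tracks = {}                           # track_id -> last known centroid
next_id = 0
MAX_DIST = 60                         # pixels; assumed association threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        c = np.array([x + w / 2.0, y + h / 2.0])
        # Associate with the closest existing track, or start a new one.
        best_id, best_d = None, MAX_DIST
        for tid, prev in tracks.items():
            d = np.linalg.norm(c - prev)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            best_id = next_id
            next_id += 1
        tracks[best_id] = c
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"id {best_id}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cap.release()
```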


“Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation,” a Presentation From Synopsys

Bruno Lavigueur, Project Leader for Embedded Vision at Synopsys, presents the "Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation" tutorial at the May 2015 Embedded Vision Summit. Deep learning-based object detection using convolutional neural networks (CNNs) has recently emerged as one of the leading approaches for achieving state-of-the-art detection accuracy for a wide range of […]
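
To make the cost trade-off concrete, the toy PyTorch model below shows how channel width and layer depth directly set parameter count, and hence memory footprint and compute. It is a deliberately small classifier for illustration only; the width values and layer sizes are arbitrary assumptions, not the network or detection pipeline discussed in the talk.

```python
# Toy CNN to illustrate how channel width drives parameter count and compute cost.
# Layer sizes are arbitrary; this is not the network from the presentation.
import torch
import torch.nn as nn

def tiny_cnn(width=8, num_classes=10):
    """Small classifier; doubling `width` roughly quadruples conv-layer parameters."""
    return nn.Sequential(
        nn.Conv2d(3, width, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(width, 2 * width, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(2 * width, num_classes),
    )

for w in (8, 16, 32):
    model = tiny_cnn(width=w)
    params = sum(p.numel() for p in model.parameters())
    print(f"width={w:3d}  parameters={params}")

# Quick shape check with a dummy 64x64 RGB input.
out = tiny_cnn()(torch.randn(1, 3, 64, 64))
print("output shape:", tuple(out.shape))
```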



May 2015 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 12, 2015 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…


OpenCL Eases Development of Computer Vision Software for Heterogeneous Processors

OpenCL™, a maturing set of programming languages and APIs from the Khronos Group, enables software developers to efficiently harness the diverse processing resources in modern SoCs, in a wide range of applications including embedded vision. Computer scientists describe computer vision as the use of digital processing and intelligent algorithms to interpret meaning from still and […]
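
As a small, generic example of what OpenCL-based offload looks like from the host side, the sketch below uses the pyopencl bindings to run a simple per-pixel threshold kernel on whatever OpenCL device is available. The kernel, threshold value, and dummy image are placeholders chosen for brevity; the article itself discusses the standard and its use in vision software, not this particular code.

```python
# Minimal OpenCL example via pyopencl: per-pixel threshold on a dummy image.
# The kernel and data here are illustrative placeholders.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void threshold(__global const float *src,
                        __global float *dst,
                        const float thresh)
{
    int i = get_global_id(0);
    dst[i] = src[i] > thresh ? 1.0f : 0.0f;
}
"""

ctx = cl.create_some_context()            # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, KERNEL_SRC).build()

image = np.random.rand(480 * 640).astype(np.float32)   # fake grayscale image
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=image)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, image.nbytes)

# Launch one work-item per pixel, then read the result back to the host.
program.threshold(queue, image.shape, None, src_buf, dst_buf, np.float32(0.5))
result = np.empty_like(image)
cl.enqueue_copy(queue, result, dst_buf)
print("pixels above threshold:", int(result.sum()))
```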


Gaze Tracking Using CogniMem Technologies’ CM1K and a Freescale i.MX53

This demonstration, which pairs a Freescale i.MX53 Quick Start board with a CogniMem Technologies CM1K evaluation module, showcases how to use your eyes (specifically, where you are looking at any particular point in time) as a mouse. Translating where a customer is looking into actions on a screen, and using gaze tracking to electronically control objects […]
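
The demo runs its recognition on the CM1K's hardware pattern-matching engine; purely as a software stand-in for the basic idea, the sketch below locates an eye region with an OpenCV Haar cascade, estimates the pupil position from the darkest blob, and linearly maps its offset within the eye box to screen coordinates. The threshold value, screen size, and mapping range are assumptions for illustration, not part of the demonstrated system.

```python
# Illustrative gaze-to-screen mapping (OpenCV), not the CM1K-based implementation.
# Threshold, screen size, and the simple linear mapping are assumptions.
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def gaze_to_screen(frame):
    """Return an estimated (x, y) screen position, or None if no eye is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return None
    ex, ey, ew, eh = eyes[0]
    eye = gray[ey:ey + eh, ex:ex + ew]

    # The pupil is roughly the darkest blob in the eye region.
    _, mask = cv2.threshold(eye, 45, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    px, py = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Map the pupil offset inside the eye box linearly onto the screen.
    sx = np.interp(px / ew, [0.3, 0.7], [0, SCREEN_W])
    sy = np.interp(py / eh, [0.3, 0.7], [0, SCREEN_H])
    return int(sx), int(sy)
```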


Adding Precise Finger Gesture Recognition Capabilities to the Microsoft Kinect

Chris McCormick, an application engineer at CogniMem, demonstrates how general-purpose, scalable pattern recognition can bring enhanced gesture control to the Microsoft Kinect. Envisioned applications include augmenting or eliminating the TV remote control, using American Sign Language for direct text translation, and expanding the game-playing experience. To process even more gestures […]
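
The CM1K performs this kind of pattern matching in dedicated hardware; the sketch below mimics the same idea in software with a k-nearest-neighbor classifier from scikit-learn, treating flattened depth patches of the hand region as the patterns to learn and recognize. The patch size, gesture labels, and random training data are placeholders, not the demo's actual training set or pipeline.

```python
# Software stand-in for hardware pattern matching: k-NN over hand depth patches.
# Patch size, labels, and the random training data are illustrative placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

PATCH = 32                      # assumed size of the cropped hand depth patch
GESTURES = ["open_hand", "fist", "point"]

def to_feature(depth_patch):
    """Normalize a depth patch and flatten it into a fixed-length vector."""
    p = depth_patch.astype(np.float32)
    p = (p - p.min()) / (p.max() - p.min() + 1e-6)
    return p.reshape(-1)

# Placeholder training set: in practice these would be captured Kinect depth
# patches of each gesture, cropped around the hand.
rng = np.random.default_rng(0)
X = np.stack([to_feature(rng.random((PATCH, PATCH))) for _ in range(30)])
y = np.array([i % len(GESTURES) for i in range(30)])

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)

# Classify a new (here, random) patch as one of the known gestures.
query = to_feature(rng.random((PATCH, PATCH)))
print("recognized gesture:", GESTURES[int(clf.predict([query])[0])])
```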


“An Update on OpenVX and Other Vision-Related Standards,” A Presentation from Khronos

Elif Albuz, Manager of Vision Software at NVIDIA, delivers the presentation "Update on OpenVX and Other Khronos Standards" at the December 2014 Embedded Vision Alliance Member Meeting. Elif provides an update on the newly released OpenVX standard and on other vision-related standards in progress.


“Keeping Brick and Mortar Relevant: A Look Inside Retail Analytics,” A Presentation from Prism Skylabs

Doug Johnston, Founder and Vice President of Technology at Prism Skylabs, delivers the presentation "Keeping Brick and Mortar Relevant: A Look Inside Prism Skylabs and Retail Analytics" at the December 2014 Embedded Vision Alliance Member Meeting. Doug explains how his firm is using vision to provide retailers with actionable intelligence based on consumer behavior.

