
Entertainment

Facial Analysis Delivers Diverse Vision Processing Capabilities

Computers can learn a lot about a person from their face – even without uniquely identifying that person. Assessments of age range, gender, ethnicity, gaze direction, attention span, emotional state and other attributes are all now possible at real-time speeds, via advanced algorithms running on cost-effective hardware. This article provides an overview of […]
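Facial-analysis pipelines like those the article surveys typically run a face detector first and then one estimator per attribute on each face crop, reporting coarse categories rather than exact values. A minimal sketch of that last post-processing step – binning a regressor's continuous age output into an age range – might look like the following; the bin edges and labels are illustrative assumptions, not taken from the article:

```python
# Map a continuous age estimate (e.g. from a CNN age regressor) to a
# coarse age-range label, since attribute-analysis systems usually
# report ranges rather than exact ages. Bin edges are assumptions.
AGE_BINS = [(0, 12, "child"), (13, 19, "teen"), (20, 39, "adult"),
            (40, 64, "middle-aged"), (65, 200, "senior")]

def age_to_range(age_estimate: float) -> str:
    """Return the coarse age-range label for a predicted age."""
    for lo, hi, label in AGE_BINS:
        if lo <= age_estimate <= hi:
            return label
    return "unknown"
```

The same bucketing pattern applies to other soft attributes (e.g. thresholding an emotion classifier's scores into a single dominant label).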


Vision Processing Opportunities in Virtual Reality

VR (virtual reality) systems are beginning to incorporate practical computer vision techniques, dramatically improving the user experience as well as reducing system cost. This article provides an overview of embedded vision opportunities in virtual reality systems, such as environmental mapping, gesture interfaces, and eye tracking, along with implementation details. It also introduces an industry alliance


“What’s Hot in Embedded Vision for Investors?,” an Embedded Vision Summit Panel Discussion

Jeff Bier of the Embedded Vision Alliance (moderator), Don Faria of Intel Capital, Jeff Hennig of Bank of America Merrill Lynch, Gabriele Jansen of Vision Ventures, Helge Seetzen of TandemLaunch, and Peter Shannon of Firelake Capital Management participate in the Investor Panel at the May 2016 Embedded Vision Summit. This moderated panel discussion addresses emerging


“Democratizing Computer Vision Development: Lessons from the Video Game Industry,” a Presentation from WRNCH

Paul Kruszewski, President of WRNCH, presents the "Democratizing Computer Vision Development: Lessons from the Video Game Industry" tutorial at the May 2016 Embedded Vision Summit. Computer vision offers great promise: algorithms are maturing rapidly and processing power continues to grow by leaps and bounds. But today’s approach to computer vision software development – hiring a



May 2016 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 2-4, 2016 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2016 Embedded Vision Summit



Deep Learning Use Cases for Computer Vision (Download)

Six Deep Learning-Enabled Vision Applications in Digital Media, Healthcare, Agriculture, Retail, Manufacturing, and Other Industries

The enterprise applications for deep learning have only scratched the surface of their potential applicability and use cases. Because it is data agnostic, deep learning is poised to be used in almost every enterprise vertical… Deep Learning Use Cases for


Gaze Tracking Using CogniMem Technologies’ CM1K and a Freescale i.MX53

This demonstration, which pairs a Freescale i.MX Quick Start board and CogniMem Technologies CM1K evaluation module, showcases how to use your eyes (specifically where you are looking at any particular point in time) as a mouse. Translating where a customer is looking to actions on a screen, and using gaze tracking to electronically control objects
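The core of using gaze as a mouse, as this demonstration does, is translating a gaze estimate into a cursor position on the display. A minimal sketch of that mapping step – taking a normalized gaze point (0..1 in each axis, as a tracker might report after calibration) to clamped screen pixel coordinates – is shown below; the resolution values are illustrative assumptions, and this is not the CogniMem demo's actual code:

```python
# Sketch: translate a normalized gaze estimate (0..1 per axis) into a
# cursor position, clamped to the display bounds. The resolution below
# is an illustrative assumption.
SCREEN_W, SCREEN_H = 1920, 1080

def gaze_to_cursor(gx: float, gy: float) -> tuple[int, int]:
    """Return the clamped screen-pixel position for a normalized gaze point."""
    x = min(max(gx, 0.0), 1.0) * (SCREEN_W - 1)
    y = min(max(gy, 0.0), 1.0) * (SCREEN_H - 1)
    return round(x), round(y)
```

Real systems also smooth the gaze signal over time and add a dwell or blink trigger to stand in for a mouse click.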


Adding Precise Finger Gesture Recognition Capabilities to the Microsoft Kinect

CogniMem’s Chris McCormick, application engineer, demonstrates how the addition of general-purpose and scalable pattern recognition can be used to bring enhanced gesture control to the Microsoft Kinect. Envisioned applications include augmenting or eliminating the TV remote control, using American Sign Language for direct text translation, and expanding the game-playing experience. To process even more gestures
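CogniMem's hardware performs nearest-neighbor pattern matching: an incoming feature vector is compared against stored prototypes and takes the label of the closest one. A software analogue of that classification step can be sketched as follows; the feature vectors and gesture labels are illustrative assumptions, not CogniMem's API or data:

```python
# Sketch: nearest-neighbor gesture classification, a software analogue
# of the pattern matching a recognition chip performs in hardware.
# Prototype vectors and labels below are illustrative assumptions.
def classify(sample, prototypes):
    """Return the label of the stored prototype closest to `sample`,
    using squared Euclidean distance over equal-length feature vectors."""
    best_label, best_dist = None, float("inf")
    for vector, label in prototypes:
        dist = sum((a - b) ** 2 for a, b in zip(sample, vector))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical gesture prototypes (e.g. reduced hand-pose descriptors).
protos = [((0.0, 0.0, 1.0), "swipe_left"),
          ((1.0, 0.0, 0.0), "swipe_right"),
          ((0.0, 1.0, 0.0), "push")]
```

Scaling to more gestures, as the demo envisions, amounts to enrolling more prototypes; the hardware's advantage is that all comparisons happen in parallel.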


October 2013 Embedded Vision Summit Technical Presentation: “Vision-Based Gesture User Interfaces,” Francis MacDougall, Qualcomm

Francis MacDougall, Senior Director of Technology at Qualcomm, presents the "Vision-Based Gesture User Interfaces" tutorial within the "Vision Applications" technical session at the October 2013 Embedded Vision Summit East. MacDougall explains how gestures fit into the spectrum of advanced user interface options, compares and contrasts the various 2-D and 3-D technologies (vision and other) available


October 2013 Embedded Vision Summit Technical Presentation: “Better Image Understanding Through Better Sensor Understanding,” Michael Tusch, Apical

Michael Tusch, Founder and CEO of Apical Imaging, presents the "Better Image Understanding Through Better Sensor Understanding" tutorial within the "Front-End Image Processing for Vision Applications" technical session at the October 2013 Embedded Vision Summit East. One of the main barriers to widespread use of embedded vision is its reliability. For example, systems which detect


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411