
“Embedding Programmable DNNs in Low-Power SoCs,” a Presentation from Xperi

Steve Teig, Chief Technology Officer at Xperi, presents the “Embedding Programmable DNNs in Low-Power SoCs” tutorial at the May 2018 Embedded Vision Summit. This talk presents the latest generation of FotoNation’s (a core business unit of Xperi) Image Processing Unit (IPU)—an embedded, AI-enabled image processing engine that can be customized and adapted to suit […]

“Creating a Computationally Efficient Embedded CNN Face Recognizer,” a Presentation from PathPartner Technology

Praveen G.B., Technical Lead at PathPartner Technology, presents the “Creating a Computationally Efficient Embedded CNN Face Recognizer” tutorial at the May 2018 Embedded Vision Summit. Face recognition systems have made great progress thanks to the availability of data, deep learning algorithms and better image sensors. Such systems should be tolerant of variations in illumination, pose […]
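A typical CNN face recognizer maps a face crop to a fixed-length embedding vector and matches it against enrolled identities by cosine similarity. The sketch below illustrates only that matching step with made-up vectors standing in for real network outputs; it is not PathPartner’s implementation, and the threshold is a hypothetical value.

```python
import numpy as np

def cosine_similarity(a, b):
    """Match score between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; a real recognizer would produce these from a CNN.
enrolled = np.array([0.2, 0.9, -0.1, 0.4])
probe = np.array([0.25, 0.85, -0.05, 0.35])

score = cosine_similarity(enrolled, probe)
match = score > 0.8  # acceptance threshold, tuned per deployment
```

Comparing low-dimensional embeddings is cheap, which is what makes the recognition stage tractable on embedded hardware once the CNN itself has been optimized.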

“Optimize Performance: Start Your Algorithm Development With the Imaging Subsystem,” a Presentation from Twisthink

Ryan Johnson, Lead Engineer at Twisthink, presents the “Optimize Performance: Start Your Algorithm Development With the Imaging Subsystem” tutorial at the May 2018 Embedded Vision Summit. Image sensor and algorithm performance are rapidly increasing, and software and hardware development tools are making embedded vision systems easier to develop. Even with these advancements, optimizing vision-based detection […]

“Getting More from Your Datasets: Data Augmentation, Annotation and Generative Techniques,” a Presentation from Xperi

Peter Corcoran, co-founder of FotoNation (now a core business unit of Xperi) and lead principal investigator and director of C3Imaging (a research partnership between Xperi and the National University of Ireland, Galway), presents the “Getting More from Your Datasets: Data Augmentation, Annotation and Generative Techniques” tutorial at the May 2018 Embedded Vision Summit. Deep learning […]
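Data augmentation stretches a labeled dataset by generating label-preserving variants of each image. The following is a minimal sketch of classic augmentations (horizontal flip, small shift, brightness jitter) on a NumPy image array; the specific transforms and parameters are illustrative assumptions, not the pipeline from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Return a randomly augmented copy of an HxWxC float image in [0, 1]."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                 # random horizontal flip
    shift = rng.integers(-2, 3)
    out = np.roll(out, shift, axis=1)      # small horizontal shift
    gain = rng.uniform(0.8, 1.2)           # brightness jitter
    return np.clip(out * gain, 0.0, 1.0)

img = rng.random((8, 8, 3))
batch = np.stack([augment(img) for _ in range(4)])  # 4 variants of one image
```

Each call yields a different variant, so one annotated image can contribute many distinct training samples.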

“Deep Quantization for Energy Efficient Inference at the Edge,” a Presentation from Lattice Semiconductor

Hoon Choi, Senior Director of Design Engineering at Lattice Semiconductor, presents the “Deep Quantization for Energy Efficient Inference at the Edge” tutorial at the May 2018 Embedded Vision Summit. Intelligence at the edge is different from intelligence in the cloud in terms of requirements for energy, cost, accuracy and latency. Due to limits on battery […]
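The core idea behind quantized inference is to replace float weights with low-bit integers plus a scale factor, trading a bounded rounding error for much cheaper arithmetic and storage. This is a generic illustration of symmetric per-tensor int8 quantization, not the deep (sub-8-bit) scheme Lattice describes:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric quantization: map the largest-magnitude weight to 127."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# per-weight error is bounded by half a quantization step (scale / 2)
```

Going below 8 bits shrinks memory and multiplier cost further, at the price of larger rounding error, which is exactly the energy/accuracy trade-off the talk addresses.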

“Real-time Calibration for Stereo Cameras Using Machine Learning,” a Presentation from Lucid VR

Sheldon Fernandes, Senior Software and Algorithms Engineer at Lucid VR, presents the “Real-time Calibration for Stereo Cameras Using Machine Learning” tutorial at the May 2018 Embedded Vision Summit. Calibration involves capturing raw data and processing it to get useful information about a camera’s properties. Calibration is essential to ensure that a camera’s output is as […]
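One reason stereo calibration matters so much is that depth is recovered as Z = f·B/d, so any error in the calibrated focal length f or baseline B propagates directly into depth error. A small sketch of that relationship, using a hypothetical rig (700 px focal length, 6.5 cm baseline) rather than Lucid VR’s camera:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth of a point from stereo disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# hypothetical rig: focal length 700 px, baseline 0.065 m
z = depth_from_disparity(10.0, 700.0, 0.065)  # depth in metres
```

A 1% error in the calibrated baseline produces a 1% error in every reported depth, which is why drifting calibration has to be corrected continuously on a consumer device.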

“Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision,” a Keynote Presentation from Dr. Takeo Kanade

Dr. Takeo Kanade, U.A. and Helen Whitaker Professor at Carnegie Mellon University, presents the “Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision” tutorial at the May 2018 Embedded Vision Summit. In this keynote presentation, Dr. Kanade shares his experiences and lessons learned in developing a vast range of […]

“Even Faster CNNs: Exploring the New Class of Winograd Algorithms,” a Presentation from Arm

Gian Marco Iodice, Senior Software Engineer in the Machine Learning Group at Arm, presents the “Even Faster CNNs: Exploring the New Class of Winograd Algorithms” tutorial at the May 2018 Embedded Vision Summit. Over the past decade, deep learning networks have revolutionized classification and recognition tasks across a broad range of applications. Deeper […]
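Winograd’s minimal filtering algorithms cut the number of multiplications in small convolutions, which dominate CNN inference cost. As a concrete instance (an illustration of the general technique, not Arm’s Compute Library code), the 1-D form F(2,3) produces two outputs of a 3-tap filter with 4 multiplies instead of the 6 a direct computation needs:

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter using 4 multiplies.

    d is a length-4 input tile, g a length-3 filter.
    """
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 0.25])
# reference: direct sliding-window computation of the same two outputs
ref = np.array([d[0] * g[0] + d[1] * g[1] + d[2] * g[2],
                d[1] * g[0] + d[2] * g[1] + d[3] * g[2]])
```

The 2-D version F(2×2, 3×3) applies the same transforms along both axes, reducing 36 multiplies to 16 per output tile, the roughly 2.25× arithmetic saving that makes Winograd attractive for 3×3 CNN layers.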

“A Physics-based Approach to Removing Shadows and Shading in Real Time,” a Presentation from Tandent Vision Science

Bruce Maxwell, Director of Research at Tandent Vision Science, presents the “A Physics-based Approach to Removing Shadows and Shading in Real Time” tutorial at the May 2018 Embedded Vision Summit. Shadows cast on ground surfaces can create false features and modify the color and appearance of real features, masking important information used by autonomous vehicles, […]

“Generative Sensing: Reliable Recognition from Unreliable Sensor Data,” a Presentation from Arizona State University

Lina Karam, Professor and Computer Engineering Director at Arizona State University, presents the “Generative Sensing: Reliable Recognition from Unreliable Sensor Data” tutorial at the May 2018 Embedded Vision Summit. While deep neural networks (DNNs) perform on par with, or better than, humans on pristine high-resolution images, DNN performance is significantly worse than human […]

