|EMBEDDED VISION SUMMIT SPECIAL EDITION|
Only one week remains until the Embedded Vision Summit, the industry’s largest event for practical computer vision. The Summit attracts a global audience of over one thousand product creators, entrepreneurs and business decision-makers who develop and use visual AI technology, and it has experienced exciting growth over the last few years, with 97% of 2018 Summit attendees reporting that they'd recommend the event to a colleague. It is the place to learn about the latest applications, techniques, technologies, and opportunities in visual AI and deep learning.
We're devoting this issue of Embedded Vision Insights to covering the exciting speakers, presentations and workshops we have planned at the Summit. The complete schedule is now published, and you should definitely take a look. Better yet, register now before it’s too late! This year’s event promises to be the best yet, with four full days featuring inspiring keynotes, 90+ business and technical presentation sessions, two full-day hands-on technical trainings and multiple Vision Technology workshops and seminars, plus the latest commercially available technology building blocks in the Vision Technology Showcase.
We’re honored to have two industry luminaries giving keynote presentations this year. Ramesh Raskar, an award-winning innovator with over 80 patents and the founder of the Camera Culture research group at the MIT Media Lab, will show how he and his team are combining novel camera technologies with deep learning algorithms to deliver advanced imaging and visual perception, in his presentation "Making the Invisible Visible: Within Our Bodies, the World Around Us, and Beyond". And popular past Summit speaker Pete Warden, Google Staff Research Engineer and lead developer of the company's TensorFlow Lite machine learning framework for mobile and embedded applications, will share his unique perspective on the state of the art and future of low-power, low-cost machine learning in his presentation "The Future of Computer Vision and Machine Learning is Tiny", highlighting some of the most advanced examples of current machine learning technology and applications.
We’ll also be announcing our Vision Product of the Year Award winners at the Summit. These annual awards recognize the innovation and achievement of the industry’s leading technology, service and end-product companies that are enabling the next generation of practical applications for computer vision. And of course, you won’t want to miss the Vision Tank, our start-up competition in which the five finalists pitch their products to our expert panel of judges—and the audience—to win the grand prize! This year’s finalists are BlinkAI Technologies, Entropix, Robotic Materials, Strayos and Vyrill.
To kick off the Summit, we're offering two day-long hands-on training classes. "Deep Learning for Computer Vision with TensorFlow 2.0," an update to last year's highly rated class, provides the hands-on knowledge you need to develop deep learning computer vision applications—both on embedded systems and in the cloud—with the latest version of TensorFlow, one of today’s most popular frameworks for deep learning. And in the new "Computer Vision Applications in OpenCV" class, you'll learn how to develop practical, deployable computer vision systems using OpenCV and Python; topics will include OpenCV basics, image alignment, panoramas, image classification, deep neural networks, object detection, object tracking and face recognition (also see instructor Satya Mallick's in-progress Kickstarter campaign to fund AI courses from OpenCV.org).
And once again, we’re dedicating the final day of the Summit to Vision Technology Workshops and Seminars, which are presented by experienced engineers from our Member companies and partners. Summit Premier sponsor Intel will offer both introductory and advanced versions of its "Intel Vision Technology and OpenVINO Toolkit" workshop, including coverage of deep learning algorithms and accelerators for smart video applications. The Khronos Group's workshop, "Hardware Acceleration for Machine Learning and Computer Vision through Open Standard APIs," will introduce attendees to key Khronos open standards, including OpenVX, NNEF, OpenCL and SYCL, and show how they can be applied to inferencing and vision acceleration. And in its seminar, "Navigating Intelligent Vision at the Edge," Synopsys will discuss the latest trends in artificial intelligence and computer vision, and how to use the latest embedded vision technologies to navigate your way from concept to successful silicon.
See you at the Summit!