Bier will explain what's fueling each of these key trends, and highlight key implications for technology suppliers, solution developers and end-users, especially those in the automotive industry.
Drive World with ESC is bringing 2,500 electrical and mechanical engineers to Silicon Valley for an inaugural, cross-discipline event where you can find the foundational education, networking, career guidance, and supplier connections needed to keep pace with the automotive and electronics industries. Get a free expo pass and explore innovations in autonomous technology, hardware, software, sensors, security, connectivity and more, or opt for deeper technical training at the two conferences covering smart mobility and embedded systems. For more information and to register, please see the event page.
The Alliance is in search of a Conference and Events Coordinator to join our team. We are seeking an individual fluent in written and spoken Chinese and English who has experience doing business in China and North America. This position will support the planning and execution of an annual corporate technology conference and trade show in Shenzhen, China in December 2019. Key responsibilities will include ensuring effective communication and collaboration between U.S.- and China-based teams, reviewing Chinese written materials, monitoring project schedules, and helping to ensure tasks are completed when needed. For more information and to apply, please see the job posting on the Alliance website.
Editor-In-Chief, Embedded Vision Alliance
INDUSTRY STANDARDS-BASED SOFTWARE DEVELOPMENT
APIs for Accelerating Vision and Inferencing: An Industry Overview of Options and Trade-offs
The landscape of SDKs, APIs and file formats for accelerating inferencing and vision applications continues to evolve rapidly. Low-level compute APIs, such as OpenCL, Vulkan and CUDA, are being used to accelerate inferencing engines such as OpenVX, CoreML, NNAPI and TensorRT, which in turn are fed by neural network file formats such as NNEF and ONNX. Some of these APIs, like OpenCV, are vision-specific, while others, like OpenCL, are general-purpose. Some engines, like CoreML and TensorRT, are supplier-specific, while others, such as OpenVX, are open standards that any supplier can adopt. Which ones should you use for your project? Neil Trevett, President of the Khronos Group and Vice President at NVIDIA, answers these and other questions in this presentation.
Portable Performance via the OpenVX Computer Vision Library: Case Studies
OpenVX is a state-of-the-art open API standard for accelerating applications that use computer vision and machine learning. The API and its conformance tests enable applications to leverage highly specialized features of hardware platforms while retaining portability of application code across a wide range of architectures. In this talk, Frank Brill, Design Engineering Director at Cadence, uses concrete examples running on real implementations to demonstrate the performance portability of OpenVX. He describes example applications, written using OpenVX, that run on platforms developed by Cadence Design Systems, Texas Instruments, Advanced Micro Devices and Axis Communications. Benchmarks demonstrate performance gains that would otherwise be achievable only via hardware-specific code optimizations. The talk also provides an update on the new features of the latest version of the OpenVX API, including support for a cross-platform neural network inferencing standard based on a combination of OpenVX and Khronos’ Neural Network Exchange Format.
SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM)
Commercial Grade SLAM Frameworks for Indoor and Outdoor Applications
SLAM is an essential technology for any device that requires an understanding of its location and orientation in the physical world. In this talk, John Williams, CTO and co-founder of Kudan, briefly introduces SLAM and explains why it outperforms alternative localization approaches. He explores the different types of sensors that can be used for SLAM (cameras, IMUs, LiDAR, GPS and others) and how they can be combined for maximum benefit. Williams then explains the advantages of accelerating the execution of SLAM algorithms, and approaches for doing so on various types of embedded and cloud processors. Finally, he describes Kudan’s licensable SLAM technology and customization services, and illustrates the practical use of this technology in a variety of real-world products and environments, including cars, robots, drones and AR/VR devices.
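As a simple illustration of the sensor combination the talk describes, the sketch below fuses a fast-but-drifting gyro rate with absolute heading fixes (as a camera or GPS might provide) using a complementary filter. This is a hedged toy example, not Kudan’s method; the `fuse_heading` function, sensor values and `alpha` weight are all hypothetical.

```python
# Illustrative complementary filter: blend a dead-reckoned heading from a
# (biased) gyro with an absolute heading estimate, e.g. from a camera.
def fuse_heading(prev_est, gyro_rate, dt, abs_heading, alpha=0.98):
    """alpha weights the integrated gyro; (1 - alpha) weights the fix."""
    predicted = prev_est + gyro_rate * dt   # fast update from the IMU
    return alpha * predicted + (1 - alpha) * abs_heading

# Simulate 10 s: true heading rotates at 0.1 rad/s; the gyro carries a
# constant 0.02 rad/s bias that would accumulate without the camera fix.
est, true_heading, dt = 0.0, 0.0, 0.01
for _ in range(1000):
    true_heading += 0.1 * dt
    gyro_rate = 0.1 + 0.02          # biased rate measurement
    est = fuse_heading(est, gyro_rate, dt, true_heading)

# The bias-induced error stays bounded instead of growing without limit.
print(abs(est - true_heading))
```

The gyro term tracks fast motion between camera fixes, while the absolute term keeps the bias from accumulating into unbounded drift; this complementary trade-off is one reason production SLAM systems combine IMUs with cameras, LiDAR or GPS.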
Deploying Visual SLAM in Low-power Devices
SLAM technology has been evolving for quite some time, including visual SLAM, which relies primarily on image data. But implementing fast, accurate visual SLAM in embedded devices has been challenging due to high compute and precision requirements. Recent improvements in embedded processors make it possible to deploy visual SLAM in low-cost, low-power, mass-market systems, yet implementing SLAM efficiently on such platforms remains challenging. In this talk, Ben Weiss, Customer Solutions Engineer in the CSG Group at CEVA, explores the current state of visual SLAM algorithms and shows how CEVA processors and software enable easy migration of SLAM algorithms from research to cost- and power-optimized production systems.