I'm excited to share that we've selected our 2019 Vision Tank finalists! The Vision Tank is the Embedded Vision Summit's annual start-up competition, in which five computer vision start-up companies present their innovations to a panel of investors and industry experts. The winner gets a prize package worth more than $15,000, to say nothing of the bragging rights of being the most innovative visual AI start-up at the Summit.
Here are this year’s finalists:
Blink.AI Technologies utilizes machine learning to enhance sensor performance, extending the range of what cameras can see and detect in the real world. Building upon proprietary deep learning techniques, the company has developed robust low-light video inference deployed on efficient low-power devices for camera-embedded systems.
Entropix provides better vision for computer vision. Its patented technology employs dual sensor cameras with AI and deep learning software to extract extreme levels of detail from video and still images for ultra-accurate intelligent video analytics. This patented computational resolution reconstruction supercharges video data analytics detection and classification.
Robotic Materials equips robots with human-like manipulation skills for the robotics industry. The company provides a robotic hand that combines tactile sensing, stereo vision, and high-performance embedded vision to mimic the tight integration of sensing, actuation, computation, and communication found in natural systems.
Strayos is a 3D visual AI platform that uses drone images to reduce costs and improve efficiency on job sites. Its software helps mining and quarry operators optimize the placement of drill holes and the quantities of explosives, and improves site productivity and safety by providing highly accurate survey data analytics.
Vyrill is focused on helping brands understand and interpret the massive amount of user-generated video content (UGVC) on social media and the web. Its proprietary AI-powered platform supports UGVC discovery, analytics, licensing and content marketing, helping brand marketers better understand their customers.
Editor-In-Chief, Embedded Vision Alliance
OBJECT DETECTION AND IDENTIFICATION
Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors
The work done by Alexander Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, and his group in 2016 on single-shot object detection (SSD) reduced the computation cost for accurate detection of object categories to be in the same range as image classification, enabling deployment of general object detection at scale. Subsequent extensions add segmentation and improve accuracy, but still require many training examples in real-world contexts for each object category. In certain applications, it may be desirable to detect new objects or categories for which many training examples are not readily available. Berg's presentation considers two approaches to address this challenge. The first takes a small number of examples of objects not in context and composes them into scenes in order to construct training examples. The other approach learns to detect objects that are similar to a small number of target images provided at detection time, and does not require retraining the network for new targets.
Introduction to LiDAR for Machine Perception
LiDAR sensors use pulsed laser light to construct 3D representations of objects and terrain. Recently, interest in LiDAR has grown, for example for generating high-definition maps required for autonomous vehicles and other robotic systems. In addition, many systems use LiDAR for 3D object detection, classification and tracking. In this presentation, Mohammad Musa, the Founder and CEO of Deepen AI, explains how LiDAR sensors operate, compares various LiDAR implementations available today, and explores the pros and cons of LiDAR in comparison to other vision sensors.
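The core ranging principle described above can be sketched in a few lines. This is a simplified illustration (not from the presentation): a pulsed LiDAR measures the round-trip time of a laser pulse, so range is c·t/2, and combining each return's range with the beam's azimuth and elevation angles yields a point in a 3D point cloud. Real sensors differ in many details (beam patterns, multiple returns, intensity).

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_seconds):
    """Range from pulse round-trip time: the pulse travels out and back,
    so the one-way distance is c * t / 2."""
    return C * round_trip_seconds / 2.0

def to_cartesian(r, azimuth_rad, elevation_rad):
    """Convert one return (range + beam angles) to a 3D point."""
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A ~667 ns round trip corresponds to a target roughly 100 m away
r = tof_range(667e-9)
point = to_cartesian(r, math.radians(30.0), math.radians(2.0))
```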
VISION ALGORITHM FUNDAMENTALS
The Perspective Transform in Embedded Vision
This presentation from Shrinivas Gadkari, Design Engineering Director, and Aditya Joshi, Lead Design Engineer, both of Cadence, focuses on the perspective transform and its role in many state-of-the-art visual AI applications like video stabilization, high dynamic range (HDR) imaging and super resolution imaging. The perspective transform accurately recomputes an image as seen from a different camera position (perspective). Following an introduction to this transform, Gadkari and Joshi explain its use in applications that involve joint processing of multiple frames taken from slightly different camera positions. They then discuss considerations for efficient implementation of this transform on an embedded processor. Gadkari and Joshi highlight the computational challenge posed by the division operation involved in a canonical implementation of the perspective transform. Finally, they describe a division-free modification of the perspective transform which achieves a 3X reduction in processing cycles for typical use cases.
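The canonical form of the transform can be sketched as follows. This is a minimal illustration, not the speakers' implementation: coordinates are mapped through a 3x3 homography matrix, and the per-pixel division by the homogeneous coordinate w is exactly the costly operation the presentation's division-free modification avoids.

```python
import numpy as np

def perspective_transform(points, H):
    """Map an Nx2 array of pixel coordinates through a 3x3 homography H.
    Each point (x, y) becomes (x', y') = ((H @ [x, y, 1])[:2]) / w,
    where w is the third homogeneous component."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # the per-pixel division

# Example: a pure translation by (5, 3) expressed as a homography
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
out = perspective_transform(np.array([[0.0, 0.0], [10.0, 10.0]]), H)
```

When the bottom row of H is not (0, 0, 1), w varies per pixel and the division cannot be hoisted out of the loop, which is what makes it expensive on embedded processors.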
Implementing Image Pyramids Efficiently in Software
An image pyramid is a series of images, derived from a single original image, wherein each successive image is at a lower resolution than its predecessors. Image pyramids are widely used in computer vision, for example to enable detection of features at different scales. After a brief introduction to image pyramids and their uses in vision applications, Michael Stewart, Proprietor of Polymorphic Technologies, explores techniques for efficiently implementing image pyramids on various processor architectures, including CPUs and GPUs. He illustrates the use of fixed- and floating-point arithmetic, vectorization and parallelization to speed up image pyramid implementations. He also examines how memory caching approaches impact the performance of image pyramid code and discusses considerations for applications requiring real-time response or minimum latency. Also see Arm's reference manual, the Arm Guide to OpenCL Optimizing Pyramid.
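The basic construction can be sketched as follows. This is a simplified example (not from the presentation): each level is produced by averaging 2x2 blocks of the level above, halving the resolution at each step. Production implementations typically smooth with a 5x5 Gaussian kernel before downsampling and apply the vectorization and caching techniques the talk covers.

```python
import numpy as np

def build_pyramid(image, levels):
    """Return a list [full-res, half-res, quarter-res, ...] with
    `levels` entries, downsampling by 2x with a 2x2 box filter."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w]  # crop to even dimensions
        # Average each 2x2 block to form the next, lower-resolution level
        img = (img[0::2, 0::2] + img[0::2, 1::2]
             + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
        pyramid.append(img)
    return pyramid

pyr = build_pyramid(np.ones((64, 64)), 4)
shapes = [p.shape for p in pyr]  # resolution halves at each level
```

The strided-slicing form shown here is one way to keep the inner loop vectorized in NumPy; on embedded targets the same idea maps onto SIMD instructions.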