Mohammad Rastegari, CTO of Xnor.ai, presents the “AI at the Edge: Ultra-efficient AI on Low-power Compute Platforms” tutorial at the May 2018 Embedded Vision Summit.
Improvements in deep learning models have increased the demand for AI across many domains. These models require massive amounts of computation and memory, so current AI applications typically resort to cloud-based solutions. However, cloud-based AI does not scale well, and sending data to the cloud is often undesirable for many reasons (e.g., privacy, bandwidth). There is therefore significant demand for running AI models directly on edge devices. These devices often have limited compute and memory capacity, which makes porting deep learning algorithms to them extremely challenging.
In this presentation, Rastegari introduces Xnor.ai’s optimized software platforms, which enable deploying AI models on a variety of low-power compute platforms with extreme resource constraints. The company’s solution is rooted in the efficient design of deep neural networks using binary operations and network compression, along with optimization algorithms for training.
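To illustrate the idea behind binary neural networks (as in Rastegari et al.'s XNOR-Net work), the dot product of two vectors whose entries are constrained to {+1, −1} can be computed with an XNOR and a popcount instead of floating-point multiply-accumulates. The sketch below is an illustrative toy example, not Xnor.ai's actual implementation; the function and variable names are my own.

```python
def binarize(values):
    """Pack a list of {+1, -1} values into an integer bitmask (1 bit per +1)."""
    mask = 0
    for i, v in enumerate(values):
        if v > 0:
            mask |= 1 << i
    return mask

def xnor_dot(a_mask, b_mask, n):
    """Dot product of two {+1, -1} vectors of length n, given as bitmasks.

    XNOR marks positions where the signs agree; each agreement contributes +1
    and each disagreement -1, so dot = 2 * popcount(xnor) - n.
    """
    xnor = ~(a_mask ^ b_mask) & ((1 << n) - 1)  # keep only the low n bits
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
# Reference result: 1*1 + (-1)*1 + 1*(-1) + 1*1 = 0
print(xnor_dot(binarize(a), binarize(b), len(a)))  # → 0
```

On real hardware, 32 or 64 of these ±1 products collapse into a single XNOR plus a popcount instruction, which is the source of the large speed and memory savings that make such networks practical on low-power edge devices.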