Samer Hijazi, Deep Learning Engineering Group Director at Cadence, presents the "Techniques to Reduce Power Consumption in Embedded DNN Implementations" tutorial at the May 2017 Embedded Vision Summit.
Deep learning is becoming the most widely used technique for computer vision and pattern recognition. This rapid adoption is driven primarily by the outstanding effectiveness deep learning has achieved on many fronts. However, the high computational requirements of deep learning algorithms typically drive power consumption to levels that are unreasonable for embedded applications. The biggest contributor to the high power consumption of deep neural networks is the huge number of multiplications per pixel. In this talk, Hijazi presents a three-legged approach to solving this problem: optimizing the network architecture, optimizing the problem definition, and minimizing word widths.
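To illustrate the word-width minimization leg, the sketch below shows symmetric linear quantization of floating-point weights to narrow signed integers (here 8-bit), which lets multiplications run on narrow integer hardware instead of full-width floating point. This is a minimal illustrative example; the function names and scaling scheme are assumptions, not Cadence's implementation.

```python
# Illustrative sketch (not Cadence's method): symmetric linear
# quantization of float weights to a narrow signed-integer word width.

def quantize(weights, bits=8):
    """Map float weights to signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]  # integers in [-qmax, qmax]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [v * scale for v in q]

weights = [0.50, -1.25, 0.03, 0.99]
q, scale = quantize(weights, bits=8)
approx = dequantize(q, scale)
# Each quantized value fits in 8 bits; the worst-case rounding
# error per weight is scale / 2.
```

Narrower multipliers and reduced memory traffic are the main sources of the power savings; the trade-off is the quantization error bounded by half the scale factor.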