David Patterson, UC Berkeley Professor of the Graduate School, Google Distinguished Engineer, and Vice-Chair of the RISC-V Foundation, talks with Jeff Bier, Founder of the Edge AI and Vision Alliance, in the “Perspective on the Past, Present, and Future of Processor Design” interview at the September 2020 Embedded Vision Summit. This interview serves as an extended Q&A session for Patterson’s keynote at the Summit (see here).
Paradoxically, processors today are both a key enabler of and a painful obstacle to the widespread use of AI applications. Despite big recent advances in machine learning (ML) processors, many people creating ML algorithms and applications still need much better processors to make their ideas practical, affordable and scalable. What will it take to bring processors to the next level, so that ML-based solutions can be deployed widely? Uniquely qualified to answer these questions is keynote speaker and Turing Award winner David Patterson.
Patterson shares his perspective on the past, present, and future of processor design, highlighting key challenges, lessons learned, and the emergence of machine learning as a key driver of processor innovation. Drawing on lessons from the RISC revolution, an earlier upheaval in processor architecture, Patterson explains why the most promising direction in processor design today is domain-specific architectures (DSAs): processors optimized for specific types of workloads. To illustrate the concepts and advantages of DSAs, Patterson examines Google’s Tensor Processing Unit (TPU), one of the earliest DSAs to be widely deployed for machine learning applications.