
Edge AI and Vision Insights: August 19, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Tomorrow, Thursday, August 20, 2020, Jeff Bier, founder of the Edge AI and Vision Alliance, will present two sessions of a free one-hour webinar, “Key Trends in the Deployment of Edge AI and Computer Vision”. The first session will take place at 9 am PT (noon ET), timed for attendees in Europe and the Americas. The second session, at 6 pm PT (9 am China Standard Time on August 21), is timed for attendees in Asia. With so much happening in edge AI and computer vision applications and technology, and happening so fast, it can be difficult to see the big picture. This webinar from the Alliance will examine the four most important trends that are fueling the proliferation of edge AI and vision applications and influencing the future of the industry:

  • Deep learning – including a focus on the key challenges of obtaining sufficient training data and managing workflows.
  • Streamlining edge development – thanks to cloud computing and higher levels of abstraction in both hardware and software, it is now far easier than ever for developers to implement AI and vision capabilities in edge devices.
  • Fast, cheap, energy-efficient processors – massive investment in specialized processors is paying off, delivering 1000x improvements in performance and efficiency, enabling AI and vision to be deployed even in very cost- and energy-constrained applications at the edge.
  • New sensors – the introduction of new 3D optical, thermal, neuromorphic and other advanced sensor technologies into high-volume applications like mobile phones and automobiles has catalyzed a dramatic acceleration in innovation, collapsing the cost and complexity of implementing visual perception.

Bier will explain what’s fueling each of these key trends, and will highlight key implications for technology suppliers, solution developers and end-users. He will also provide technology and application examples illustrating each of these trends, including spotlighting the winners of the Alliance’s 2020 Vision Product of the Year Awards. A question-and-answer session will follow the presentation. See here for more information, including online registration.


This year’s Embedded Vision Summit, taking place online September 15-25, features an exciting lineup jam-packed with great talks, exhibits, demos and plenty of opportunities to connect with some of the best in the business. Our keynote speaker David Patterson, for example, is a true visionary in computer science and engineering and a prolific innovator, from co-inventing the RISC architecture to his leadership on the Google TPU processor used for accelerating machine learning workloads. Patterson’s talk, “A New Golden Age for Computer Architecture: Processor Innovation to Enable Ubiquitous AI,” is a must-see for anyone creating machine-learning systems or processors, and you can also listen in and pick his brain in a follow-up Q&A session.

Bill Pearson, Vice President of Intel’s Internet of Things Group, will also give a General Session presentation at the Summit, “Streamline, Simplify, and Solve for the Edge of the Future,” on the most important challenges facing edge AI developers today, and Intel’s vision for how the industry must evolve to reach its true potential. Check out these and other already announced talks for yourself. (We’re adding new talks daily, and it won’t be long before we’ll have nearly 100 talks across four tracks.) And then be sure to register today!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEEP LEARNING INFERENCE BENCHMARKING

An Industry Standard Performance Benchmark Suite for Machine Learning (MLPerf)
In this presentation, Christine Cheng, co-chair of the inference benchmark working group at MLPerf and a senior machine learning optimization engineer at Intel, explains how MLPerf’s inference benchmark suite for evaluating processor performance works and how it is evolving.
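
To make the single-stream scenario concrete, here is a toy sketch in Python of the kind of measurement MLPerf’s inference benchmarks report for that scenario (queries issued one at a time, with 90th-percentile latency as the headline metric). It is not the official MLPerf LoadGen harness, and run_inference is a placeholder for whatever model and runtime are under test.

```python
import statistics
import time

def run_inference(sample):
    # Placeholder for the model under test; replace with a real runtime call
    # (e.g., an image-classification inference on the target processor).
    time.sleep(0.005)
    return sample

def single_stream_benchmark(samples, min_queries=1024):
    """Toy MLPerf-style single-stream run: issue one query at a time,
    record per-query latency, and report the 90th percentile."""
    latencies = []
    i = 0
    while len(latencies) < min_queries:
        start = time.perf_counter()
        run_inference(samples[i % len(samples)])
        latencies.append(time.perf_counter() - start)
        i += 1
    latencies.sort()
    p90 = latencies[int(0.9 * len(latencies)) - 1]
    return {"queries": len(latencies),
            "mean_latency_ms": statistics.mean(latencies) * 1e3,
            "p90_latency_ms": p90 * 1e3}

if __name__ == "__main__":
    print(single_stream_benchmark(samples=list(range(32))))
```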

Machine Learning On Edge Devices: A Benchmark Report (Tryolabs)
In this article, Tryolabs evaluates five new edge devices, using different frameworks and models, to see which combinations perform best. In particular, the company focuses on performance outcomes for machine learning at the edge. Why? Data has historically been processed first in data centers and more recently in the cloud, but these approaches are not suitable for highly demanding tasks that collect large volumes of data in the field, according to the company. Network capacity and speed are being pushed to the limit, and new solutions are required, ushering in the era of edge computing and edge devices.
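
As a rough illustration of the per-device measurements that this kind of comparison rests on, the sketch below times a TensorFlow Lite model on whatever hardware it happens to run on. The model filename and repetition counts are placeholders; other runtimes (TensorRT, OpenVINO, etc.) would be timed in the same warm-up-then-measure fashion.

```python
import time
import numpy as np
import tensorflow as tf  # on small devices, tflite_runtime.interpreter works too

MODEL_PATH = "mobilenet_v2.tflite"  # placeholder model file

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random input of the correct shape/dtype; a real benchmark feeds a validation set.
dummy = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])

# Warm up so one-time initialization costs don't skew the numbers.
for _ in range(10):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()

# Timed runs.
times = []
for _ in range(100):
    start = time.perf_counter()
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
    times.append(time.perf_counter() - start)

print(f"mean {np.mean(times) * 1e3:.2f} ms, p90 {np.percentile(times, 90) * 1e3:.2f} ms")
```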

DEEP LEARNING MODEL AND ARCHITECTURE OPTIMIZATION

Once-for-All DNNs: Simplifying Design of Efficient Models for Diverse Hardware (MIT)
In this presentation, Song Han, Associate Professor in the Department of Electrical Engineering and Computer Science at MIT, shares his group’s latest research on the automated generation of deep neural network architectures that execute efficiently across diverse hardware targets.
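
The once-for-all idea is to train a single super-network from which many sub-networks (different depths, widths and kernel sizes) can be extracted without retraining, and then to search, per device, for the sub-network with the best predicted accuracy under that device’s latency budget. The sketch below shows what that search loop might look like; measure_latency and estimate_accuracy are hypothetical stand-ins for the latency lookup tables and accuracy predictors used in practice.

```python
import random

# Hypothetical search space: per-stage depth, width multiplier and kernel size.
DEPTHS = [2, 3, 4]
WIDTHS = [0.75, 1.0, 1.25]
KERNELS = [3, 5, 7]

def sample_subnet(num_stages=5):
    """Randomly pick one sub-network configuration from the super-network."""
    return [{"depth": random.choice(DEPTHS),
             "width": random.choice(WIDTHS),
             "kernel": random.choice(KERNELS)} for _ in range(num_stages)]

def measure_latency(config):
    """Stand-in for a per-device latency lookup table or on-device timing."""
    return sum(s["depth"] * s["width"] * s["kernel"] for s in config)

def estimate_accuracy(config):
    """Stand-in for a trained accuracy predictor (bigger is assumed better here)."""
    return sum(s["depth"] + s["width"] for s in config)

def search(latency_budget, trials=1000):
    """Random search: best predicted accuracy that fits the latency budget."""
    best_cfg, best_acc = None, float("-inf")
    for _ in range(trials):
        cfg = sample_subnet()
        if measure_latency(cfg) <= latency_budget:
            acc = estimate_accuracy(cfg)
            if acc > best_acc:
                best_cfg, best_acc = cfg, acc
    return best_cfg

if __name__ == "__main__":
    print(search(latency_budget=40.0))
```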

The Next Phase of Deep Learning: Neural Architecture Learning (the Automatic Discovering of Neural Wirings) Leads to Optimized Computer Vision Models (Xnor.ai)
In this article, Mohammad Rastegari, former CTO at Xnor.ai (subsequently acquired by Apple, where he is now a Senior AI/ML Technical Leader) and a Research Scientist at the Allen Institute for Artificial Intelligence, presents the trends and stages of AI models in the field of computer vision. The primary driver of deep learning’s success to date, he notes, is that it doesn’t rely solely on human intuition to build model representations of data (visual, textual, audio, etc.). Instead, the neural network primarily learns the representations on its own via repetitive training. Taking this concept to the next level, Rastegari believes that we similarly should not rely on human intuition to build the underlying neural network architectures, either. Instead, he postulates, we should let the computer iteratively develop and optimize the network architecture by itself for each application.
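
One concrete flavor of this idea, loosely in the spirit of the “automatic discovering of neural wirings” named in the title, is to give every candidate connection a learnable weight and keep only the highest-magnitude edges active in the forward pass. The PyTorch snippet below is a heavily simplified sketch of that edge-selection step; it is illustrative only and not Xnor.ai’s actual implementation (which, among other things, also routes gradients to unselected edges so the wiring can change during training).

```python
import torch
import torch.nn as nn

class SparseWiring(nn.Module):
    """Toy layer: of all possible input-to-output connections, keep only the
    top-k edges by weight magnitude and compute the layer with those edges."""
    def __init__(self, in_nodes, out_nodes, k):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_nodes, in_nodes))
        self.k = k  # number of edges (connections) kept active

    def forward(self, x):
        scores = self.weight.abs().flatten()
        threshold = torch.topk(scores, self.k).values.min()
        mask = (self.weight.abs() >= threshold).float()  # 1 for kept edges
        # Only the selected edges contribute to the output.
        return x @ (self.weight * mask).t()

layer = SparseWiring(in_nodes=16, out_nodes=8, k=32)
print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 8])
```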

UPCOMING INDUSTRY EVENTS

Edge AI and Vision Alliance Webinar – Key Trends in the Deployment of Edge AI and Computer Vision: August 20, 2020, 9:00 am PT and 6:00 pm PT

Embedded Vision Summit: September 15-25, 2020

More Events

FEATURED NEWS

Intel Delivers Advances Across Multiple Process and Processor Technologies, Powering Its Product Roadmap

An Upcoming Webinar from PathPartner Explores the Challenges Faced in Developing Facial Recognition Technology

MediaTek Announces Dimensity 720, its Newest Chip For Premium 5G Experiences on Mid-Tier Smartphones

Vision Components Ships a MIPI Camera and Driver for the NVIDIA Jetson Nano Developer Kit

Qualcomm Announces the Snapdragon 865 Plus 5G Mobile Platform

More News

VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Intel DevCloud for the Edge (Best Developer Tool)
Intel’s DevCloud for the Edge is the 2020 Vision Product of the Year Award winner in the Developer Tools category. The Intel DevCloud for the Edge allows you to virtually prototype and experiment with AI workloads for computer vision on the latest Intel edge inferencing hardware, with no hardware setup required since the code executes directly within the web browser. You can test the performance of your models using the Intel Distribution of OpenVINO Toolkit and combinations of CPUs, GPUs, VPUs and FPGAs. The site also contains a series of tutorials and examples preloaded with everything needed to quickly get started, including trained models, sample data and executable code from the Intel Distribution of OpenVINO Toolkit as well as other deep learning tools.
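
For reference, running a model with the OpenVINO Inference Engine Python API (as it looked in 2020-era releases) is roughly the sketch below, whether in DevCloud or on a local machine. The model and image paths are placeholders, and switching device_name between "CPU", "GPU", "MYRIAD" (VPU) and so on is how the different targets are compared; exact API details vary by OpenVINO version.

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO 2020.x Python API

MODEL_XML = "model.xml"  # placeholder: IR files produced by the Model Optimizer
MODEL_BIN = "model.bin"
DEVICE = "CPU"           # also try "GPU" or "MYRIAD" (VPU), hardware permitting

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
input_name = next(iter(net.input_info))   # older releases use net.inputs instead
output_name = next(iter(net.outputs))
n, c, h, w = net.input_info[input_name].input_data.shape

exec_net = ie.load_network(network=net, device_name=DEVICE)

# Prepare one image in the NCHW layout the IR expects.
image = cv2.resize(cv2.imread("image.jpg"), (w, h)).transpose(2, 0, 1)[np.newaxis]

result = exec_net.infer(inputs={input_name: image})
print(result[output_name].shape)
```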

Please see here for more information on Intel and its DevCloud for the Edge. The Vision Product of the Year Awards are open to Member companies of the Edge AI and Vision Alliance and celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.

EMBEDDED VISION SUMMIT
MEDIA PARTNER SHOWCASE

Embedded Computing Design (OpenSystems Media)
Attend the IoT Security Webcast Series “Making Security Part of Your Embedded Development DNA,” running now through September 2. Click here for more information and to register.

Silicon Valley Robotics
Silicon Valley Robotics supports innovation in and the commercialization of robotics technologies.

Contact

1646 N. California Blvd., Suite 360
Walnut Creek, CA 94596 USA
Phone: +1 (925) 954-1411