
Edge AI and Vision Insights: January 13, 2021 Edition

LETTER FROM THE EDITOR
Dear Colleague,

The 100% virtual 2021 Embedded Vision Summit, the premier conference for innovators adding computer vision and visual AI to products, is coming May 25-27—and we’re excited about the program that’s taking shape! At the Summit, you’ll be able to:

  • See an amazing range of technology in action—we’re talking dozens upon dozens of leading-edge building-block technologies as well as applications enabled with computer vision, edge AI, and sensor data
  • Watch expert sessions on the most pressing topics in the industry from some of the brightest minds currently working with edge AI and vision
  • Connect with those VEPs—you know, Very Elusive People—like that potential building-block technology supplier, critical ecosystem partner, or technical expert you’ve been looking for
  • Keep building your skills with hands-on learning, tutorials and more!

Learn more and then register today with promo code SUPEREARLYBIRD21 to receive your 25%-off Super Early Bird Discount!

Also, consider presenting at the Summit. Speaking is a great opportunity to:

  • Share your expertise on practical computer vision and visual AI and be recognized as an authority on the subject by your peers
  • Increase your company’s visibility and reputation
  • Build your network and connect with new suppliers, customers and partners

Session proposals are due by February 3. Space is limited, so submit your proposal now before the agenda fills up. Visit the Summit website to learn more about the requirements, and to submit your proposal.


The Alliance is now accepting applications for the sixth annual Vision Tank start-up competition. Are you an early-stage start-up company developing a new product or service incorporating or enabling computer vision or visual AI? Do you want to raise awareness of your company and products with vision industry experts, investors and developers? The Vision Tank start-up competition offers early-stage companies a chance to present their new products to a panel of judges at the 2021 Embedded Vision Summit, in front of a live online audience.

Two awards are given out each year: the Judges’ Award and the Audience Choice Award. The winner of the Vision Tank Judges’ Award will receive a $5,000 cash prize, and both winners will receive a one-year membership in the Edge AI and Vision Alliance. All finalists will also get one-on-one advice from the judges, as well as valuable introductions to potential investors, customers, employees and suppliers. Applications are due by February 17; for more information, and to enter, please see the program page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

ASSESSING DEEP LEARNING ACCURACY

Accuracy: Beware of Red Herrings and Black Swans (Perceive)
Machine learning aims to construct models that are predictive: accurate even on data not used during training. But how should we assess accuracy? How can we avoid catastrophic errors due to black swans—rare, highly atypical events? Consider that, at 30 frames per second, video presents so many events that even “highly atypical” ones occur every day! How can we avoid overreacting to red herrings—coincidences in the training data that are irrelevant? After all, a model’s entire knowledge of the world is the data used in training. To build more trustworthy models, we must re-examine how to measure accuracy and how best to achieve it. This talk from Steve Teig, CEO of Perceive, challenges some widely held assumptions and offers some novel steps forward, occasionally livened by colorful, zoological metaphors.
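To make the scale of the "black swan" problem concrete, here is a back-of-the-envelope calculation (our illustration, not taken from the talk) of how often a one-in-a-million frame turns up in a continuous 30 fps video stream:

```python
# Rough estimate of how often a "one-in-a-million" frame appears in 30 fps video.
FPS = 30
SECONDS_PER_DAY = 24 * 60 * 60

frames_per_day = FPS * SECONDS_PER_DAY      # 2,592,000 frames per camera per day
rare_event_probability = 1e-6               # a "highly atypical" frame

expected_rare_frames_per_day = frames_per_day * rare_event_probability
print(f"Frames per day: {frames_per_day:,}")
print(f"Expected one-in-a-million frames per day: {expected_rare_frames_per_day:.1f}")
# Roughly 2.6 such frames every day, per camera
```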

Machine Learning for the Real World: What is Acceptable Accuracy, and How Can You Achieve It? (Arm)
The benefits of running machine learning at the edge are widely accepted, and today’s low-power edge devices are already showing great potential to run ML. But what constitutes acceptable accuracy when applied to real-world, real-time use cases? In this talk, Tim Hartley, Director of Product and Marketing at Arm, explores what constitutes acceptable detection accuracy for specific use cases, and how this can be measured. Looking at which ML models are meeting the challenges and which fall short, he focuses on how techniques like transfer learning can help fill the gaps when weaknesses in detection accuracy are found.
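For readers new to the technique, the sketch below shows what a typical transfer-learning workflow can look like in Keras, assuming an ImageNet-pretrained MobileNetV2 backbone and a hypothetical target application. It is a generic illustration, not Arm's specific recipe.

```python
import tensorflow as tf

# Minimal transfer-learning sketch: reuse an ImageNet-pretrained backbone and
# retrain only a small classification head on target-domain data to close
# accuracy gaps found in the real-world use case.
NUM_CLASSES = 4  # hypothetical number of classes in the target application

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(target_domain_dataset, epochs=5)  # train the head on domain-specific data
```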

PROCESSING AT THE EDGE

Trends in Neural Network Topologies for Vision at the Edge (Synopsys)
The widespread adoption of deep neural networks (DNNs) in embedded vision applications has increased the importance of creating DNN topologies that maximize accuracy while minimizing computation and memory requirements. This has led to accelerated innovation in DNN topologies. In this talk, Pierre Paulin, Director of R&D for Embedded Vision at Synopsys, summarizes the key trends in neural network topologies for embedded vision applications, highlighting techniques employed by widely used networks such as EfficientNet and MobileNet to boost both accuracy and efficiency. He also touches on other optimization methods—such as pruning, compression and layer fusion—that developers can use to further reduce the memory and computation demands of modern DNNs.
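Pruning is one of the optimization methods mentioned above. As a rough illustration of the idea (not tied to Synopsys tooling), the following NumPy sketch performs magnitude-based pruning, zeroing the smallest weights of a layer until a target sparsity is reached:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: prune 70% of a hypothetical 256x256 layer's weights
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.7)
print("Achieved sparsity:", np.mean(w_pruned == 0))  # ~0.70
```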

Making Edge AI Inference Programming Easier and Flexible (Texas Instruments)
Deploying an AI model at the edge shouldn't be challenging, but it often is. Each embedded processing vendor offers its own proprietary software tools for deploying models, and it takes time and investment to learn those tools and to optimize the edge implementation for your desired performance. Meanwhile, the open source community is working to standardize the model deployment process and make it hardware agnostic. Texas Instruments has adopted open source software frameworks to make model deployment easier and more flexible. In this talk, product marketing engineer Manisha Agrawal examines the struggles developers face when deploying models for inference on embedded processors and how TI addresses these critical software development challenges. You will also discover how TI enables faster time to market using a flexible open source development approach without compromising performance, accuracy or power requirements.
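To show what a hardware-agnostic, open source inference flow can look like in general, here is a minimal ONNX Runtime sketch. The model file, input name and shape are placeholders for your own network; this is a generic illustration, not TI's specific SDK workflow.

```python
import numpy as np
import onnxruntime as ort

# Generic, hardware-agnostic inference with an exported ONNX model.
# "model.onnx" and the 1x3x224x224 input shape are hypothetical placeholders.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
print("Model expects input:", input_name, session.get_inputs()[0].shape)

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})

print("Output tensor shapes:", [o.shape for o in outputs])
```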

UPCOMING INDUSTRY EVENTS

Horizon Robotics Webinar – Advancing the AI Processing Architecture for the Software-Defined Car: January 21, 2021, 9:00 am PT

Edge AI and Vision Alliance Webinar – Adding Visual AI or Computer Vision to Your Embedded System: Key Things Every Engineering Manager Should Know: February 9, 2021, 9:00 am PT

Vision Components Webinar – Adding Embedded Cameras to Your Next Industrial Product Design: February 16, 2021, 9:00 am PT

Edge Impulse Webinar – How to Rapidly Build Robust Data-driven Embedded Machine Learning Applications: February 25, 2021, 9:00 am PT

More Events

FEATURED NEWS

Synaptics Expands into Low Power Edge AI Applications with Its New Katana Platform

Intel Introduces RealSense ID Facial Authentication

IDS Makes Artificial Intelligence Available to Factory Automation via Its Latest Software Updates

Basler Presents Its New Vision-optimized 1GigE and 10GigE Interface Cards

Codeplay Software and Partners Bring Open Standards Programming to RISC-V for HPC and AI Systems

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Horizon Robotics Journey 2 (Best Automotive Solution)
Horizon Robotics’ Journey 2 is the 2020 Edge AI and Vision Product of the Year Award Winner in the Automotive Solutions category. Journey 2 is Horizon’s open AI compute solution, focused on ADAS, intelligent cockpit and autonomous driving edge processing. The Journey 2 solution includes a domain-specific deep learning automotive processor, the Horizon AI toolchain and Horizon’s perception algorithms. It enables OEMs and Tier 1 suppliers to create advanced designs with high energy efficiency and cost effectiveness while delivering high-performance inference results.

Please see here for more information on Horizon Robotics and its Journey 2. The Edge AI and Vision Product of the Year Awards (an expansion of previous years’ Vision Product of the Year Awards) celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes your leadership in edge AI and computer vision as evaluated by independent industry experts. The Edge AI and Vision Alliance is now accepting applications for the 2021 Awards competition; for more information and to enter, please see the program page.

 
