
Edge AI and Vision Insights: October 21, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Many of the most important suppliers of building-block technologies that enable computer vision and visual AI applications use the results of the Edge AI and Vision Alliance’s annual Computer Vision Developer Survey to guide their priorities. This is our seventh year conducting the survey, and we want to make sure we have your input in this year’s results so that your perspective can help guide the focus of technology suppliers in the coming years.

We share the results from the Computer Vision Developer Survey at Edge AI and Vision Alliance events and in white papers and presentations made available throughout the year on the Edge AI and Vision Alliance webpage. Please help us collect and share the best possible data by completing this short survey. (It typically takes less than 15 minutes to complete.) If you would like to see some of the results from previous years, check out this white paper (with highlights from the previous iteration of the survey).

We realize that we are asking you to take time out of your schedule to share your perspective. To show our thanks, we will send you a coupon for $50 off the price of a two-day Embedded Vision Summit ticket (to be sent when registration opens). In addition, the first 100 people to complete the survey will receive an Amazon gift card for $25. Thank you in advance for your perspective! Fill out the survey.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

MEDICAL VISION AND IMAGING APPLICATIONS

From Mobility to Medicine: Vision Enables the Next Generation of Innovation
Dean Kamen
In this Embedded Vision Summit keynote presentation, legendary inventor Dean Kamen explains why he believes the time is right for computer vision to be used everywhere. Feedback control systems are central to Kamen’s work. In his view, computer vision has now advanced to the point where it can serve as a ubiquitous, versatile sensor enabling feedback control in countless applications. Eventually, says Kamen, embedded vision sensors will be as common as simple microcontrollers or mechanical sensors are today.
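
To make the vision-as-feedback-sensor idea concrete, here is a minimal, hypothetical sketch (not drawn from Kamen’s talk) of a proportional control loop in which a camera replaces a conventional sensor. OpenCV is assumed for capture and thresholding, the measurement is the horizontal position of a bright marker, and the actuator command is left as a stub.

```python
import cv2

KP = 0.005        # proportional gain (hypothetical tuning value)
TARGET_X = 320    # desired marker position: image center of a 640-pixel-wide frame

def measure_marker_x(frame):
    """Return the x-coordinate of a bright marker's centroid, or None if no marker is seen."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    return m["m10"] / m["m00"] if m["m00"] > 0 else None

def send_actuator_command(velocity):
    """Stub: a real system would drive a motor, gimbal or valve here."""
    print(f"actuator velocity command: {velocity:+.3f}")

cap = cv2.VideoCapture(0)          # the camera acts as the feedback sensor
for _ in range(300):               # run the loop for a few hundred frames
    ok, frame = cap.read()
    if not ok:
        break
    x = measure_marker_x(frame)
    if x is not None:
        error = TARGET_X - x                 # deviation from the setpoint
        send_actuator_command(KP * error)    # proportional correction closes the loop
cap.release()
```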

In perhaps his most ambitious initiative ever, Kamen explains how his Advanced Regenerative Manufacturing Institute plans to enable the large-scale manufacturing of engineered tissues and tissue-related technologies, with the eventual goal of mass-producing replacement organs for humans. He expects computer vision to also play a key role here, enabling monitoring and feedback control of tissue-growing processes without requiring physical contact with tissue.

Also see Kamen’s overview of DEKA Research’s current and upcoming computer vision programs, along with a demonstration of student designs from FIRST Robotics, the organization Kamen founded.

Semiconductor Technology is a Key Enabler for Medical Imaging
Yole Développement
The medical imaging equipment market was estimated to be worth $38B in 2019, according to Yole Développement’s new market research report, and is forecast to grow to $52B in 2025 at a 5.3% compound annual growth rate (CAGR). The medical imaging industry is benefiting from global trends in semiconductor technology development. For example, the detector market for medical imaging equipment, estimated at $4.3B in 2019, is forecast to grow to $6.6B in 2025 at a CAGR of 7.3%.
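
As a quick sanity check on those projections, the growth rates can be recomputed from the endpoint figures using the standard CAGR formula; the short snippet below is purely illustrative and is not taken from the Yole report.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Medical imaging equipment market: $38B (2019) to $52B (2025), i.e. six years of growth
print(f"equipment CAGR: {cagr(38.0, 52.0, 6):.1%}")   # ~5.4%, consistent with the quoted 5.3%
# Detector market: $4.3B (2019) to $6.6B (2025)
print(f"detector CAGR:  {cagr(4.3, 6.6, 6):.1%}")     # ~7.4%, consistent with the quoted 7.3%
```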

AI AND VISION FOR AUTONOMOUS ROBOTICS

Bringing Modern AI to Robots and Other Autonomous Machines
NVIDIA
A series of technical articles published on NVIDIA’s developer blog provides detailed implementation examples (complete with numerous code samples) for leveraging the company’s Isaac SDK to create autonomous robots and other devices. See, for example, the recently published writeups “Developing Robotics Applications in Python” and “Enhancing Robotic Applications with the 3D Object Pose Estimation Pipeline.” Also see the company’s just-announced $59 Jetson Nano 2GB AI and robotics starter kit for students, educators, and hobbyists.

Applied Depth Sensing for Autonomous Devices
Intel
As robust depth cameras become more affordable, many new products will benefit from true 3D vision. This presentation from Sergey Dorodnicov, Software Architect at Intel, highlights the benefits of depth sensing for tasks such as autonomous navigation, collision avoidance and object detection in robots and drones. Dorodnicov explores a fully functional SLAM pipeline built using free and open source software components and an off-the-shelf Intel RealSense D435i depth camera, and shows how it performs for real-time environment mapping and tracking. Also see Intel’s demonstration of RealSense cameras used for SLAM and obstacle avoidance, enabling visual navigation of wheeled autonomous robots.
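
For readers who want to experiment before tackling a full SLAM pipeline, the snippet below is a minimal sketch (not the pipeline from the talk) that uses Intel’s pyrealsense2 Python bindings to stream depth from a RealSense camera such as the D435i and report the distance at the image center.

```python
import pyrealsense2 as rs

# Open a 640x480 depth stream at 30 FPS from an attached RealSense camera (e.g. D435i).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    for _ in range(100):                      # grab a short burst of frames
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Distance in meters to whatever is at the center pixel of the image.
        print(f"center distance: {depth.get_distance(320, 240):.3f} m")
finally:
    pipeline.stop()
```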

FEATURED NEWS

Hailo Announces M.2 and Mini PCIe AI Acceleration Modules to Enhance the Performance of Edge Devices

Imagination Technologies Launches Its IMG B-Series GPU IP: Doing More with Multi-core

OmniVision Technologies Unveils a 1.0 Micron 64MP Image Sensor With a Large Optical Format for Low Light Performance in High End Mobile Phones

Allegro DVT Releases a Family of 8K Multi-format Video Decoding IP

Intel’s RealSense Dimensional Weight Software Measures Packages and Pallets at the Speed of Light

More News

VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Horizon Robotics Journey 2 (Best Automotive Solution)
Horizon Robotics
Horizon Robotics’ Journey 2 is the 2020 Vision Product of the Year Award Winner in the Automotive Solutions category. Journey 2 is Horizon’s open AI compute solution, focused on ADAS, intelligent cockpit and autonomous driving edge processing. The Journey 2 solution includes a domain-specific deep learning automotive processor, the Horizon AI toolchain and Horizon’s perception algorithms. It enables OEMs and Tier 1 suppliers to create advanced designs with high energy efficiency and cost effectiveness while delivering high-performance inference results.

Please see here for more information on Horizon Robotics and its Journey 2. The Vision Product of the Year Awards are open to Member companies of the Edge AI and Vision Alliance and celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.

EMBEDDED VISION SUMMIT MEDIA PARTNER SHOWCASE

Aspencore
Everyone wants safety on the road. Can advancements in sensing and decision-making technologies help drivers, passengers and vulnerable road users? Advanced driver-assistance systems (ADAS) and autonomous vehicles (AV) are still works in progress that rely on constantly evolving technologies. The newly published 152-page book “Sensors in Automotive,” with contributions from leading thinkers in the automotive industry, charts the industry’s progress, identifies the remaining challenges, and examines with an unbiased eye what it will take to overcome them.

Vision Systems Design
Vision Systems Design is the machine vision and imaging resource for engineers and integrators worldwide. Receive unique, unbiased and in-depth technical information about the design of machine vision and imaging systems for demanding applications in your inbox today.
