
Embedded Vision Insights: January 14, 2020 Edition


LETTER FROM THE EDITOR
Dear Colleague,

On Wednesday, February 19 at 8 am PT, leading market analyst firm Yole Développement will deliver the free webinar “3D Imaging and Sensing: From Enhanced Photography to an Enabling Technology for AR and VR” in partnership with the Embedded Vision Alliance. In 2017, Apple brought an innovative use case to mobile devices: a structured light-based 3D sensing camera module that enables users to rapidly and reliably unlock their devices using only their faces. Android-based mobile device manufacturers have added front depth sensors to their products in response, and are now also adding rear-mounted depth sensors. So far, at least, the applications of these rear-mounted depth sensors are predominantly photography-related. In the near future, however, they’re expected to further expand into augmented reality, virtual reality and other applications. In this webinar, Yole Développement will describe the application roadmap, market value and cost of those highly anticipated mobile 3D sensing modules, including topics such as CMOS image sensors, optical elements and VCSEL illumination. For more information and to register, please see the event page.

Registration for the 2020 Embedded Vision Summit, the preeminent conference on practical visual AI and computer vision, is now open. Be sure to register today with promo code SUPEREARLYBIRD20 to receive your Super Early Bird Discount! Also, sponsoring or exhibiting at the Summit is an amazing opportunity to engage with a uniquely qualified audience. Your company can be an integral part of the only global event dedicated to enabling product creators to harness computer vision and visual AI for practical applications! For more information, see the Summit Sponsors and Exhibitors page or email [email protected].

The Embedded Vision Alliance is also now accepting applications for the fifth annual Vision Tank start-up competition. Are you an early-stage start-up company developing a new product or service incorporating or enabling computer vision or visual AI? Do you want to raise awareness of your company and products with vision industry experts, investors and customers? The Vision Tank start-up competition offers early-stage companies the opportunity to present their new products or product ideas to more than 1,400 influencers and product creators at the 2020 Embedded Vision Summit. For more information and to enter, please see the program page.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

DEPTH SENSING

The Embedded Vision Alliance’s most recently completed Computer Vision Developer Survey reveals rapid growth in the use of depth sensing in commercial systems and applications. Check out the following presentations, along with the above-mentioned upcoming webinar, for expert insights and recent innovations in depth sensing.

How to Choose a 3D Vision Sensor (Capable Robot Components)
Designers of autonomous vehicles, robots and many other systems are faced with a critical challenge: Which 3D vision sensor technology to use? There are a wide variety of sensors on the market, employing modalities including passive stereo, active stereo, time of flight, 2D and 3D lasers and monocular approaches. This talk from Chris Osterwood, Founder and CEO of Capable Robot Components, provides an overview of 3D vision sensor technologies and their capabilities and limitations, based on Osterwood’s experience selecting the right 3D technology and sensor for a diverse range of autonomous robot designs. There is no perfect sensor technology and no perfect sensor, but there is always a sensor which best aligns with the requirements of your application—you just need to find it. Osterwood describes a quantitative and qualitative evaluation process for 3D vision sensors, including testing processes using both controlled environments and field testing, and some surprising characteristics and limitations he’s uncovered through that testing.
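To make the quantitative side of that evaluation concrete, here is a minimal, self-contained sketch (our illustration, not material from Osterwood’s talk) of two metrics frequently used to compare depth sensors: fill rate and root-mean-square error against a known ground-truth target. It assumes NumPy and substitutes synthetic data for real sensor output.

# Illustrative depth-sensor metrics: fill rate (fraction of pixels with
# valid depth) and RMSE against ground truth. Synthetic data stands in
# for real sensor output.
import numpy as np

def fill_rate(depth: np.ndarray) -> float:
    """Fraction of pixels where the sensor returned a valid depth value."""
    valid = np.isfinite(depth) & (depth > 0)
    return float(valid.mean())

def depth_rmse(depth: np.ndarray, truth: np.ndarray) -> float:
    """Root-mean-square error over pixels where both maps are valid."""
    valid = np.isfinite(depth) & (depth > 0) & np.isfinite(truth)
    err = depth[valid] - truth[valid]
    return float(np.sqrt(np.mean(err ** 2)))

# Hypothetical example: a sensor imaging a flat target at 1.0 m,
# with 5 mm Gaussian noise and 5% dropout (invalid) pixels.
rng = np.random.default_rng(0)
truth = np.full((480, 640), 1.0)                      # depth in meters
measured = truth + rng.normal(0, 0.005, truth.shape)  # add sensor noise
measured[rng.random(truth.shape) < 0.05] = 0.0        # mark dropouts invalid

print(f"fill rate: {fill_rate(measured):.1%}")
print(f"RMSE:      {depth_rmse(measured, truth) * 1000:.1f} mm")

Running the same metrics on several candidate sensors imaging the same target is one simple way to turn a qualitative comparison into numbers.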

Depth Sensing Technique Enables Simpler, More Flexible 3D Solutions (Magik Eye)
Magik Eye is a global team of computer vision veterans who have developed a new method to determine depth directly from light, without the need to measure time, find a complex pattern or perform complex computation. The Magik Eye technique dramatically reduces the power, compute and size requirements of a depth-sensing module while enabling a wide range of fields of view, point resolutions, material sensitivities and distances. This presentation from Takeo Miyazawa, Founder and CEO of Magik Eye, discusses the new technique and shows working samples in action.

MANUFACTURING APPLICATIONS

Deep Learning for Manufacturing Inspection Applications (FLIR Systems)
In recent years, deep learning has revolutionized artificial intelligence, delivering state-of-the-art results on many problems in computer vision, image classification, speech recognition and natural language processing. Deep learning has gained significant attention in the machine vision industry because it does not require the complex algorithm development used by traditional rule-based image processing techniques. In this presentation, Stephen Se, Research Manager at FLIR Systems, covers the deep learning workflow from data collection to training and deployment, as well as the process of transfer learning. Se presents his company’s deep learning activities for machine vision applications such as manufacturing inspection, defect detection and classification. He also presents two case studies where his company applies transfer learning to manufacturing inspection applications.
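As a concrete illustration of that workflow, the sketch below fine-tunes an ImageNet-pretrained backbone for a two-class inspection task. This is a generic transfer-learning recipe using PyTorch and torchvision, not code from FLIR’s presentation; the dataset path, class layout and hyperparameters are placeholders.

# Hedged transfer-learning sketch for defect classification (PyTorch).
# "inspection_images/train" is a hypothetical folder with one subfolder
# per class, e.g. good/ and defect/.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing so the pretrained backbone sees
# inputs in the distribution it was trained on.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("inspection_images/train", preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: freeze the pretrained backbone and retrain only a
# new classification head on the (much smaller) inspection dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

Because only the small head is retrained, this approach can work with far fewer labeled images than training a network from scratch, which is one reason transfer learning is attractive for inspection tasks where defect examples are scarce.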

Machine Learning at the Edge in Smart Factories (Texas Instruments)
Whether it’s called “Industry 4.0,” “industrial internet of things” (IIoT) or “smart factories,” a fundamental shift is underway in manufacturing: factories are becoming smarter, enabled by networks of connected devices forming systems that collect, monitor, exchange and analyze data. Machine learning, including deep neural network algorithms such as convolutional neural networks (CNNs), is enabling smart robots and machines to autonomously complete tasks with precision, accuracy and speed. The Texas Instruments (TI) Sitara line of processors is helping to enable vision-based deep learning inference at the edge in factory automation products. TI’s AM57x-class processors, with specialized neural network accelerators and integrated industrial peripherals, provide the processing and connectivity needed for smart factory vision applications that reduce production costs, improve quality and create safer work environments. This presentation from Manisha Agrawal, Software Applications Engineer at Texas Instruments, covers TI’s deep learning solution on AM57x processors for smart factories.
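TI’s AM57x devices are programmed through TI’s own deep learning software, which is not shown here, but the general edge-inference pattern the talk describes (load a compact trained model on-device, feed it camera frames, act on the scores locally) can be sketched with a generic runtime. The example below uses TensorFlow Lite purely as a stand-in, and the model file name is hypothetical.

# Generic edge-inference sketch using TensorFlow Lite as a stand-in
# runtime (not TI's stack). "defect_classifier.tflite" is hypothetical.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="defect_classifier.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# In a real system this would be a preprocessed camera frame; a zero
# array shaped and typed to match the model's input stands in here.
frame = np.zeros(input_info["shape"], dtype=input_info["dtype"])

interpreter.set_tensor(input_info["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_info["index"])
print("class scores:", scores)

The same pattern (preprocess, set input, invoke, read output) applies whatever accelerator sits underneath the runtime, which is what makes on-device inference practical on processors like the AM57x.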

UPCOMING INDUSTRY EVENTS

Yole Développement Webinar – 3D Imaging and Sensing: From Enhanced Photography to an Enabling Technology for AR and VR: February 19, 2020, 8:00 am PT

Embedded Vision Summit: May 18-21, 2020, Santa Clara, California

More Events

FEATURED NEWS

MediaTek Announces Dimensity 800 5G Series Chipsets for New Premium 5G Smartphones

AnyConnect and ASUS Bring Smarter Camera AI to the Edge

New Low-power, High-performance TI Jacinto 7 Processors Enable Mass-market Adoption of Automotive ADAS and Gateway Technology

SiFive and CEVA Partner to Bring Machine Learning Processors to Mainstream Markets

Qualcomm Accelerates Autonomous Driving with New Platform – Qualcomm Snapdragon Ride

More News



Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411