APPLICATIONS

Liquid Lens Autofocus vs Voice Coil Motor (VCM) Autofocus

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Autofocus is one of the key features required for many embedded vision applications. An autofocus camera module is typically used when the distance to the target object keeps varying during image capture. This requires the […]


LAON PEOPLE Demonstration of Surface Inspection Using a Deep Learning Solution

Henry Sang, Director of Business Development at LAON PEOPLE, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Sang demonstrates fully automatic inspection of complex surfaces (e.g., automobiles) using LAON PEOPLE’s advanced machine vision camera and deep learning algorithms. The solution is able to spot defects […]


LAON PEOPLE Demonstration of Traffic Analysis Using a Deep Learning Solution

Luke Faubion, Traffic Solution Director at LAON PEOPLE, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Faubion demonstrates traffic analysis using the company’s deep learning solution. The traffic analysis program Faubion demonstrates doesn’t require installing a new IP camera. LAON PEOPLE’s AI solution provides vehicle […]


aiMotive Announces aiDrive 3.0

Revolutionary virtual sensor technology brings unparalleled data re-use for all levels of autonomy, offering faster go-to-market times for OEMs and Tier 1s striving to deploy the most competitive L2-L4 automated driving solutions.

Budapest, Munich, Mountain View, Yokohama, 2 December 2021 – aiMotive, one of the world’s leading modular automated driving technology suppliers, announced […]


videantis Achieves Automotive SPICE® Level 2 Certification

videantis software development is compliant with Automotive SPICE® Level 2. The assessment was conducted by Continental SQM. Automotive SPICE® Level 2 certification of videantis software offerings reduces development cost and accelerates time-to-production for Tier 1s and OEMs.

Hannover, Germany, December 1, 2021 – Today, videantis, a leading supplier of deep learning, computer vision, image processing, and […]


ADLINK Releases its First SMARC Module Based on Qualcomm QRB5165 Enabling High Performance Robots and Drones at Low Power

Integrated IoT technologies provide on-device AI capabilities at the edge.

Summary: The LEC-RB5 SMARC is a high-performance module, built with the Qualcomm® QRB5165 processor, allowing on-device AI and 5G connectivity capabilities for consumer, enterprise and industrial robots. It features a high-performance NPU, an octa-core (8x Arm Cortex-A77 cores) CPU, low power consumption and support for […]


FRAMOS Implements D400e User Space Drivers for Easy Software Installations

Munich, December 1, 2021 – FRAMOS, a global leader in machine vision systems, is now introducing a GigE user space driver to simplify the setup of container virtualisations – such as Docker – for the FRAMOS D400e depth camera series. Customers can use this solution to roll out all the required software, including their application […]


Coherent Logix Demonstration of Virtual Surround View Fire Detection Using a Deep Neural Network and the HyperX Processor

Martin Hunt, Director of Applications Engineering at Coherent Logix, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Hunt demonstrates virtual surround view fire detection using a deep neural network (DNN) and the company’s HyperX processor. In this demo, Hunt shows the detection of fire using […]


Coherent Logix Demonstration of Ultra-low Latency Industrial Inspection at the Edge Using the HyperX Processor

Martin Hunt, Director of Applications Engineering at Coherent Logix, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Hunt demonstrates ultra-low latency industrial inspection at the edge using the company’s HyperX processor. In this demo, Hunt shows how to use the HyperX Memory Network parallel processor […]


BrainChip Demonstration of How the Akida Neural Processor Solves Problems At the Edge

Todd Vierra, Director of Customer Engagements at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2021 Embedded Vision Summit. Specifically, Vierra demonstrates how the company’s Akida event-based neural processor (NPU) solves problems at the edge. Utilizing BrainChip’s Akida NPU, you can leverage advanced neuromorphic computing as the engine for […]



Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411