APPLICATIONS

“Tackling Extreme Visual Conditions for Autonomous UAVs In the Wild,” a Presentation from Skydio

Hayk Martiros, Head of Autonomy at Skydio, presents the “Tackling Extreme Visual Conditions for Autonomous UAVs In the Wild” tutorial at the September 2020 Embedded Vision Summit. Skydio ships autonomous robots that are flown at scale in complex, unknown environments every day to capture incredible video, automate dangerous inspections, and save the lives of first responders. […]

Vision Components Presents the Smallest Embedded Vision System at Embedded World

February 10, 2021 – Probably the world’s smallest embedded vision system – fully integrated on a single board and hardly bigger than an image sensor module: VC picoSmart from Vision Components will premiere at embedded world 2021 DIGITAL (March 1–5). Although only 22 mm x 23.5 mm in size, this board camera contains all of the components necessary […]

“Introduction to Simultaneous Localization and Mapping (SLAM),” a Presentation from Skydio

Gareth Cross, Technical Lead for State Estimation at Skydio, presents the “Introduction to Simultaneous Localization and Mapping (SLAM)” tutorial at the September 2020 Embedded Vision Summit. This talk provides an introduction to the fundamentals of simultaneous localization and mapping (SLAM). Cross provides foundational knowledge; viewers are not expected to have any prerequisite experience in the […]

Accelerating AI-Defined Cars

Convergence of Edge Computing, Machine Vision and 5G-Connected Vehicles: Today’s societies are becoming ever more multimedia-centric, data-dependent, and automated. Autonomous systems are hitting our roads, oceans, and airspace. Automation, analysis, and intelligence are moving beyond humans to “machine-specific” applications. Computer vision and video for machines will play a significant role in our future digital […]

“Multi-modal Re-identification: IOT + Computer Vision for Residential Community Tracking,” a Presentation from Seedland

Kit Thambiratnam, General Manager of the Seedland AI Center, presents the “Multi-modal Re-identification: IOT + Computer Vision for Residential Community Tracking” tutorial at the September 2020 Embedded Vision Summit. The recent COVID-19 outbreak necessitated monitoring in communities, such as tracking of quarantined residents and tracking of close-contact interactions with sick individuals. High-density communities also have […]

The Rise and Fall of the ADAS Promise Now Disrupted by AVs

This market research report was originally published at Yole Développement’s website. It is reprinted here with the permission of Yole Développement. Advanced driver assistance systems (ADAS) have been developed for more than ten years now, in pursuit of increased safety in the world of automobiles. Combining a set of sensors, mostly radars and cameras, […]

Free Webinar Explores Camera ISP Optimization For Improved Computer Vision Accuracy

On March 30, 2021 at 9 am PT (noon ET), Marc Courtemanche, Product Architect at Algolux, will present the free half-hour webinar “Optimizing a Camera ISP to Automatically Improve Computer Vision Accuracy,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page: Cameras are the most ubiquitous sensor used […]

Lattice FPGAs Power Real-Time Radar Adapter Cards

This blog post was originally published at Lattice Semiconductor’s website. It is reprinted here with the permission of Lattice Semiconductor. If you were to ask them (and I have), you would discover that many people think of radar in the context of things like airplanes and ships and the evening weather forecast on TV. As […]

“Image-Based Deep Learning for Manufacturing Fault Condition Detection,” a Presentation from Samsung

Jake Lee, Principal Engineer and Head of the Machine Learning Group at Samsung, presents the “Image-Based Deep Learning for Manufacturing Fault Condition Detection” tutorial at the September 2020 Embedded Vision Summit. In this presentation, Lee explores applying deep learning to the analysis of manufacturing parameter data to detect fault conditions. The manufacturing parameter data contains multivariate time […]

Building and Deploying a Face Mask Detection Application Using NGC Collections

This technical article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. AI workflows are complex. Building an AI application is no trivial task, as it takes various stakeholders with domain expertise to develop and deploy the application at scale. Data scientists and developers need easy access to software […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411