Resources

In-depth information about edge AI and vision applications, technologies, products, markets, and trends.

The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.

All Resources

Technologies

D3 Embedded

Texas Instruments, D3 Embedded, Lattice and NVIDIA Show a Practical Radar-Camera Fusion Stack for Robotics

TI’s new application brief and companion demo outline how mmWave radar, camera input, FPGA-based sensor bridging and NVIDIA Holoscan can be combined into a low-latency perception pipeline for humanoids and other autonomous machines. Texas Instruments, D3 Embedded, Lattice Semiconductor and NVIDIA are outlining a concrete radar-camera fusion stack for robotics rather than just talking…

Algorithms & Models

Upcoming Webinar on Akida Radar Reference Platform

On April 20, 2026, at 8:00 am PDT (11:00 am EDT), BrainChip will deliver a webinar, “Akida Radar Reference Platform: See the Evolution of Radar Intelligence with AI-Powered Object Classification.” From the event page: Join us on 20 April at 8:00 AM PT for an exclusive deep dive into BrainChip’s Radar Reference Platform — bringing…

Blog Posts

From Connected to Aware: How PSOC™ Edge enables the next wave of smart devices

This blog post was originally published at Infineon’s website. It is reprinted here with the permission of Infineon. Across home, retail, and industry, devices that once followed simple rules are now expected to understand people and context. A thermostat shouldn’t just follow a schedule; it should know if anyone is in the room and choose the preferred…


Applications

Algorithms & Models

Building Robotics Applications with Ryzen AI and ROS 2

This blog post was originally published at AMD’s website. It is reprinted here with the permission of AMD. This blog showcases how to deploy power-efficient Ryzen AI perception models with ROS 2 – the Robot Operating System. We utilize the Ryzen AI Max+ 395 (Strix Halo) platform, which is equipped with an efficient Ryzen AI NPU and…


Functions

Algorithms & Models

AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the abilities to explain the…

Algorithms & Models

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI

Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual…

Biometrics

TLens vs VCM Autofocus Technology

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we’ll walk you through how TLens technology differs from traditional VCM autofocus, how TLens combined with e-con Systems’ Tinte ISP enhances camera performance, key advantages of TLens over mechanical autofocus systems, and applications…


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411