How FPGA-Based Frame Grabbers Are Powering Next-Gen Multi-Camera Systems

This article was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. FPGA-based frame grabbers are redefining multi-camera vision by enabling synchronized aggregation of up to eight GMSL streams for autonomous driving, robotics, and industrial automation. They overcome bandwidth and latency limits of USB and Ethernet by using PCIe […]
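The bandwidth argument can be made concrete with some back-of-the-envelope arithmetic. The resolution, bit depth, and frame rate below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope bandwidth for eight synchronized camera streams.
# 1080p, 16-bit YUV422, 30 fps are illustrative assumptions.

def stream_bandwidth_mbps(width, height, bytes_per_pixel, fps):
    """Uncompressed video bandwidth for one stream, in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

per_camera = stream_bandwidth_mbps(1920, 1080, 2, 30)
total = 8 * per_camera

print(f"per camera: {per_camera:.0f} Mbps, 8 cameras: {total:.0f} Mbps")
# A single USB 3.0 link (~5 Gbps raw, less in practice) cannot carry this
# aggregate, while a PCIe Gen3 x4 slot (~32 Gbps) has ample headroom.
```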

97% Smaller, Just as Smart: Scaling Down Networks with Structured Pruning

This article was originally published at Analog Devices’ website. It is reprinted here with the permission of Analog Devices. Why Smaller Models Matter Shrinking AI models isn’t just a nice-to-have—it’s a necessity for bringing powerful, real-time intelligence to edge devices. Whether it’s smartphones, wearables, or embedded systems, these platforms operate with strict memory, compute, and […]
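The core idea behind structured pruning can be sketched in a few lines: rank whole output channels of a convolution weight tensor by importance and keep only the strongest, yielding a physically smaller layer rather than a sparse one. The tensor shape, L1-norm scoring, and keep ratio below are illustrative assumptions, not the article's method:

```python
import numpy as np

# Structured (channel-level) pruning sketch: rank output channels of a
# conv weight tensor by L1 norm and keep only the top ones.
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 32, 3, 3))  # (out_ch, in_ch, kH, kW)

# Importance score per output channel: sum of absolute weights.
scores = np.abs(weights).reshape(64, -1).sum(axis=1)

keep = 16  # retain the 16 strongest of 64 channels (75% pruned)
kept_idx = np.sort(np.argsort(scores)[-keep:])
pruned = weights[kept_idx]

print(pruned.shape)  # (16, 32, 3, 3): a genuinely smaller layer,
# unlike unstructured pruning, which only zeroes individual weights.
```

Because entire channels are removed, downstream layers shrink too, which is what turns a pruned network into actual memory and latency savings on edge hardware.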

NVIDIA Debuts Nemotron 3 Family of Open Models

News Summary: The Nemotron 3 family of open models — in Nano, Super and Ultra sizes — is the most efficient family of open models with leading accuracy for building agentic AI applications. Nemotron 3 Nano delivers 4x higher throughput than Nemotron 2 Nano and the most tokens per second for multi-agent systems at scale through a […]

Better Than Real? What an Apple-Orchard Benchmark Really Says About Synthetic Data for Vision AI

If you work on edge AI or computer vision, you’ve probably run into the same wall over and over: The model architecture is fine. The deployment hardware is (barely) ok. But the data is killing you—too narrow, too noisy, too expensive to expand. That’s true whether you’re counting apples, spotting defects on a production line, […]

Arm at NeurIPS 2025: How AI Research is Shaping the Future of Intelligent Computing

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. NeurIPS 2025 provided Arm with a unique opportunity to share the latest technical trends and insights with the global AI research community. NeurIPS is one of the world’s leading AI research conferences, acting as a thriving global hub for […]

The Architecture Shift Powering Next-Gen Industrial AI

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. How Arm is powering the shift to flexible, AI-ready, energy-efficient compute at the “Industrial Edge.” Industrial automation is undergoing a foundational shift. From industrial PCs to edge gateways and smart sensors, compute needs at the edge are changing fast. AI is moving […]

What is a Dust Denoising Filter in TOF Camera, and How Does it Remove Noise Artifacts in Vision Systems?

This article was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Time-of-Flight (ToF) cameras with IR sensors are susceptible to performance variations caused by environmental dust. This dust can create ‘dust noise’ in the output depth map, directly impacting camera accuracy and, consequently, the reliability of critical […]
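e-con Systems' actual dust denoising filter is not described in this excerpt, but the general idea of suppressing sparse dust spikes in a depth map can be sketched with a generic median-based outlier filter. The 3x3 window, threshold, and test scene below are illustrative assumptions:

```python
import numpy as np

def remove_dust(depth, threshold=0.5):
    """Replace pixels deviating from their 3x3 median by more than threshold (meters)."""
    h, w = depth.shape
    padded = np.pad(depth, 1, mode="edge")
    # Gather the 3x3 neighborhood of every pixel into a (9, h, w) stack.
    stacked = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    med = np.median(stacked, axis=0)
    out = depth.copy()
    spikes = np.abs(depth - med) > threshold
    out[spikes] = med[spikes]  # isolated spikes snap back to the local median
    return out

depth = np.full((8, 8), 2.0)   # flat wall 2 m away
depth[3, 4] = 0.2              # near-range spike caused by a dust particle
cleaned = remove_dust(depth)
print(cleaned[3, 4])  # 2.0: the dust artifact is gone
```

A median is well suited here because dust artifacts are sparse and isolated, so they never dominate a small neighborhood, while genuine depth edges survive the filter.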

Edge AI and Vision Insights: December 10, 2025

LETTER FROM THE EDITOR Dear Colleague, Welcome to our annual CES Special Edition. We’ll give you a rundown on some of the most interesting companies to see at CES and we’ll highlight some consumer-facing applications of edge AI. Then we’ll continue our focus on technical content with two presentations addressing challenges in developing consumer edge […]

Low-Light Image Enhancement: YUV vs RAW – What’s the Difference?

This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. In the world of embedded vision—whether for mobile phones, surveillance systems, or smart edge devices—image quality in low-light conditions can make or break user experience. That’s where advanced AI-based denoising algorithms come into play. At our company, we […]
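What distinguishes the YUV path can be illustrated with a minimal sketch: convert RGB to BT.601 luma, denoise only the Y plane, and leave chroma untouched. The naive box filter below stands in for Visidon's learned denoiser, which is far more sophisticated; the image sizes and noise level are illustrative assumptions:

```python
import numpy as np

def rgb_to_y(rgb):
    """BT.601 luma from an RGB image with values in [0, 1]."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def box_denoise(y, k=3):
    """Naive k x k box filter standing in for a learned luma denoiser."""
    pad = k // 2
    padded = np.pad(y, pad, mode="edge")
    out = np.zeros_like(y)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + y.shape[0], dx:dx + y.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(1)
clean = np.full((32, 32, 3), 0.5)
noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)

y_noisy = rgb_to_y(noisy)
y_denoised = box_denoise(y_noisy)
print(y_noisy.std(), y_denoised.std())  # noise variance drops after filtering
```

Working on YUV is cheap because only one plane is filtered, whereas a RAW-domain denoiser sees the sensor data before demosaicing and can exploit noise statistics that the YUV conversion has already mixed away.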

AMD Expands Space-Grade Portfolio, Enhances In-Orbit Processing Capability and Extends Mission Timelines

This blog post was originally published at AMD’s website. It is reprinted here with the permission of AMD. News Snapshot: Expanding space-grade adaptive system-on-chip (SoC) portfolio and qualifying advanced new space-grade, organic lidless packaging technology designed to endure the most demanding conditions of space. AMD is qualifying this enhanced space-grade packaging for the Versal™ AI Core […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411