Blog Posts

How Embedded Vision Is Helping Modernize and Future-Proof Retail Operations

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Physical stores are becoming intelligent environments. Embedded vision turns every critical touchpoint into a source of real-time insight, from shelves and kiosks to checkout zones and digital signage. With cameras analyzing activity as it happens, retailers […]


The Coming Robotics Revolution: How AI and Macnica’s Capture, Process, Communicate Philosophy Will Define the Next Industrial Era

This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica. Just as networking and fiber-optic infrastructure quietly laid the groundwork for the internet economy, fueling the rise of Amazon, Facebook, and the digital platforms that redefined commerce and communication, today’s breakthroughs in artificial intelligence are setting the stage […]


Red Light Cameras vs. Traffic Sensors: The Ultimate Guide for Traffic Enforcement

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Intersections create the toughest mix of crashes, congestion, and violations, so cities rely on imaging to bring order and proof. Red light cameras and traffic sensors operate in the same geography yet serve different goals […]


Grounded AI Starts Here: Rapid Customization for RAG and Context Engineering

This blog post was originally published in expanded form at RapidFire AI’s website. It is reprinted here with the permission of RapidFire AI. Building a reliable Retrieval Augmented Generation (RAG) pipeline should not feel like guesswork. Yet for most AI developers, it still does. According to a recent MIT study on enterprise AI adoption, around 95% […]


How FPGA-Based Frame Grabbers Are Powering Next-Gen Multi-Camera Systems

This article was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. FPGA-based frame grabbers are redefining multi-camera vision by enabling synchronized aggregation of up to eight GMSL streams for autonomous driving, robotics, and industrial automation. They overcome the bandwidth and latency limits of USB and Ethernet by using PCIe […]


97% Smaller, Just as Smart: Scaling Down Networks with Structured Pruning

This article was originally published at Analog Devices’ website. It is reprinted here with the permission of Analog Devices. Why Smaller Models Matter: Shrinking AI models isn’t just a nice-to-have; it’s a necessity for bringing powerful, real-time intelligence to edge devices. Whether it’s smartphones, wearables, or embedded systems, these platforms operate with strict memory, compute, and […]


AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the ability to explain the […]


The architecture shift powering next-gen industrial AI

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. How Arm is powering the shift to flexible, AI-ready, energy-efficient compute at the “Industrial Edge.” Industrial automation is undergoing a foundational shift. From industrial PCs to edge gateways and smart sensors, compute needs at the edge are changing fast. AI is moving […]


Low-Light Image Enhancement: YUV vs RAW – What’s the Difference?

This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. In the world of embedded vision—whether for mobile phones, surveillance systems, or smart edge devices—image quality in low-light conditions can make or break user experience. That’s where advanced AI-based denoising algorithms come into play. At our company, we […]


Ambarella’s CV3-AD655 Surround View with IMG BXM GPU: A Case Study

[Figure: The CV3-AD family block diagram.] This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies. Ambarella’s CV3-AD655 autonomous driving AI domain controller pairs energy-efficient compute with Imagination’s IMG BXM GPU to enable real-time surround-view visualisation for L2++/L3 vehicles. This case study outlines the industry shift […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411