Blog Posts

The architecture shift powering next-gen industrial AI

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. How Arm is powering the shift to flexible, AI-ready, energy-efficient compute at the “Industrial Edge.” Industrial automation is undergoing a foundational shift. From industrial PCs to edge gateways and smart sensors, compute needs at the edge are changing fast. AI is moving […]

Low-Light Image Enhancement: YUV vs RAW – What’s the Difference?

This blog post was originally published at Visidon’s website. It is reprinted here with the permission of Visidon. In the world of embedded vision—whether for mobile phones, surveillance systems, or smart edge devices—image quality in low-light conditions can make or break the user experience. That’s where advanced AI-based denoising algorithms come into play. At our company, we […]

Ambarella’s CV3-AD655 Surround View with IMG BXM GPU: A Case Study

[Figure: The CV3-AD family block diagram.] This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies. Ambarella’s CV3-AD655 autonomous driving AI domain controller pairs energy-efficient compute with Imagination’s IMG BXM GPU to enable real-time surround-view visualisation for L2++/L3 vehicles. This case study outlines the industry shift […]

Overcoming the Skies: Navigating the Challenges of Drone Autonomy

This blog post was originally published at Inuitive’s website. It is reprinted here with the permission of Inuitive. From early military prototypes to today’s complex commercial operations, drones have evolved from experimental aircraft into essential tools across industries. Since the FAA issued its first commercial permit in 2006, applications have rapidly expanded—from disaster relief and […]

NVIDIA Advances Open Model Development for Digital and Physical AI

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA releases new AI tools for speech, safety and autonomous driving — including NVIDIA DRIVE Alpamayo-R1, the world’s first open industry-scale reasoning vision language action model for mobility — and a new independent benchmark recognizes the openness and […]

Breaking the Human Accuracy Barrier in Computer Vision Labeling

This article was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. There’s been a lot of excitement lately around how foundation models (such as CLIP, SAM, Grounding DINO, etc.) can come close to human-level performance when labeling common objects, saving a ton of labeling effort and cost. It’s impressive progress. However, […]
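
To make the excerpt’s claim about foundation-model labeling concrete, here is a minimal zero-shot labeling sketch using CLIP via Hugging Face Transformers. It is purely illustrative and is not 3LC’s workflow or tooling; the checkpoint name, candidate prompts, and image path are assumptions for the example.

```python
# Minimal zero-shot labeling sketch with CLIP (illustrative only; not 3LC's tooling).
# Assumes: pip install torch transformers pillow, and a local image "example.jpg".
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidate_labels = ["a photo of a cat", "a photo of a dog", "a photo of a bicycle"]
image = Image.open("example.jpg")  # hypothetical input image

inputs = processor(text=candidate_labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity scores

probs = logits.softmax(dim=-1)[0]
for label, p in zip(candidate_labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```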

Why Edge AI Struggles Towards Production: The Deployment Problem

There is no shortage of articles about how to develop and train Edge AI models. The community has also written extensively about why it makes sense to run those models at the edge: to reduce latency, preserve privacy, and lower data-transfer costs. On top of that, the MLOps ecosystem has matured quickly, providing the pipelines […]

Let’s Visit the Zoo

This blog post was originally published at Quadric’s website. It is reprinted here with the permission of Quadric. The term “model zoo” first gained prominence in the world of AI/machine learning beginning in the 2016-2017 timeframe. Originally used to describe open-source public repositories of working AI models – the most prominent of which today […]

Small Models, Big Heat — Conquering Korean ASR with Low-bit Whisper

This blog post was originally published at ENERZAi’s website. It is reprinted here with the permission of ENERZAi. Today, we’ll share results where we re-trained the original Whisper for optimal Korean ASR (Automatic Speech Recognition), applied Post-Training Quantization (PTQ), and provided a richer Pareto analysis so customers with different constraints and requirements can pick exactly what […]
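
The excerpt mentions Post-Training Quantization of Whisper but, as a teaser, doesn’t show what that looks like in practice. Purely as a hedged illustration of generic PTQ applied to a public Whisper checkpoint, and not ENERZAi’s low-bit pipeline or retrained Korean model, here is a minimal PyTorch dynamic-quantization sketch; the checkpoint name and int8 setting are assumptions for the example.

```python
# Minimal post-training quantization sketch for a Whisper checkpoint
# (illustrative only; not ENERZAi's low-bit pipeline or retrained Korean model).
# Assumes: pip install torch transformers; CPU inference.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "openai/whisper-tiny"  # stand-in public checkpoint
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id).eval()

# Dynamic PTQ: Linear-layer weights are converted to int8 after training,
# with activations quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# `quantized` is a drop-in replacement for CPU transcription, e.g.
# quantized.generate(input_features) with features produced by `processor`.
print(quantized.model.decoder.layers[0].fc1)  # shows a dynamically quantized Linear module
```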

Introducing Gimlet Labs: AI Infrastructure for the Agentic Era

This blog post was originally published at Gimlet Labs’ website. It is reprinted here with the permission of Gimlet Labs. We’re excited to finally share what we’ve been building at Gimlet Labs. Our mission is to make AI workloads 10X more efficient by expanding the pool of usable compute and improving how it’s orchestrated. Over the […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
