Tools

Google Announces LiteRT Qualcomm AI Engine Direct Accelerator

Google has announced a new LiteRT Qualcomm AI Engine Direct Accelerator, giving Android and embedded developers a much more direct path to Qualcomm NPUs for on-device AI and vision workloads. Built on top of Qualcomm’s AI Engine Direct (“QNN”) SDK, the new accelerator replaces the older TensorFlow Lite QNN delegate and plugs directly into LiteRT, […]

Small Models, Big Heat — Conquering Korean ASR with Low-bit Whisper

This blog post was originally published at ENERZAi’s website. It is reprinted here with the permission of ENERZAi. Today, we’ll share results where we re-trained the original Whisper for optimal Korean ASR (Automatic Speech Recognition), applied Post-Training Quantization (PTQ), and provided a richer Pareto analysis so customers with different constraints and requirements can pick exactly what…
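The core idea behind PTQ can be illustrated without any framework: map trained float weights onto a small integer range with a learned-free scale factor, then dequantize at inference. The sketch below is a generic, minimal illustration of symmetric per-tensor int8 quantization; it is not ENERZAi’s pipeline, and the values are made up for demonstration.

```python
# Generic sketch of symmetric int8 post-training quantization (PTQ).
# Not ENERZAi's method: it only shows the round-trip quantize/dequantize
# step that all low-bit schemes build on.

def quantize_int8(weights):
    """Map floats onto int8 [-127, 127] with a per-tensor symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [x * scale for x in q]

weights = [0.02, -1.27, 0.635, 0.9, -0.31]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
# Worst-case per-weight round-trip error is about half the scale step.
assert max_err <= scale / 2 + 1e-9
```

Real deployments refine this with per-channel scales, asymmetric zero-points, and calibration data, which is where the accuracy-vs-size Pareto trade-offs the post analyzes come from.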

Cadence Adds 10 New VIP to Strengthen Verification IP Portfolio for AI Designs

This article was originally published at Cadence’s website. It is reprinted here with the permission of Cadence. Cadence has unveiled 10 Verification IP (VIP) for key emerging interfaces tuned for AI-based designs, including Ultra Accelerator Link (UALink), Ultra Ethernet (UEC), LPDDR6, UCIe 3.0, AMBA CHI-H, Embedded USB v2 (eUSB2), and UniPro 3.0. These new VIP will…

SAM3: A New Era for Open‑Vocabulary Segmentation and Edge AI

Quality training data – especially segmented visual data – is a cornerstone of building robust vision models. Meta’s recently announced Segment Anything Model 3 (SAM3) arrives as a potential game-changer in this domain. SAM3 is a unified model that can detect, segment, and even track objects in images and videos using both text and visual…
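Whatever model produces the masks, segmentation output is conventionally scored against ground truth with mask intersection-over-union (IoU). The snippet below is a generic illustration of that metric on toy binary masks; it is not SAM3’s API, and the masks are invented for the example.

```python
# Generic mask IoU computation, the standard way to score a predicted
# segmentation mask against ground truth. Masks here are plain nested
# lists of 0/1 pixels; this illustrates the metric, not SAM3 itself.

def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks of equal shape."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a & b
            union += a | b
    return inter / union if union else 1.0  # two empty masks match exactly

pred  = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
truth = [[0, 0, 1],
         [0, 1, 1],
         [0, 1, 0]]
print(mask_iou(pred, truth))  # intersection 3, union 5 -> 0.6
```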

Introducing Gimlet Labs: AI Infrastructure for the Agentic Era

This blog post was originally published at Gimlet Labs’ website. It is reprinted here with the permission of Gimlet Labs. We’re excited to finally share what we’ve been building at Gimlet Labs. Our mission is to make AI workloads 10X more efficient by expanding the pool of usable compute and improving how it’s orchestrated. Over the…

Au-Zone Technologies Expands EdgeFirst Studio Access

Proven MLOps Platform for Spatial Perception at the Edge Now Available

CALGARY, AB – November 19, 2025 – Au-Zone Technologies today expands general access to EdgeFirst Studio™, the enterprise MLOps platform purpose-built for Spatial Perception at the Edge for machines and robotic systems operating in dynamic and uncertain environments. After six months of successful…

Reimagining Embedded Audio: MIPI SWI3S Is a Game Changer

This blog post was originally published at MIPI Alliance’s website. It is reprinted here with the permission of MIPI Alliance. As embedded audio systems continue to evolve across consumer electronics, automotive and industrial applications, so does the demand to deliver advanced features—such as far-field voice recognition, spatial audio and “always-on” AI-driven audio processing—within increasingly compact, power-sensitive devices…

Enabling Autonomous Machines: Advancing 3D Sensor Fusion With Au-Zone

This blog post was originally published at NXP Semiconductors’ website. It is reprinted here with the permission of NXP Semiconductors. Smarter perception at the edge: Dusty construction sites. Fog-covered fields. Crowded warehouses. Heavy rain. Uneven terrain. What does it take for an autonomous machine to perceive and navigate challenging real-world environments like these – reliably, in…
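One classic principle behind multi-sensor fusion is to combine noisy estimates of the same quantity weighted inversely to each sensor’s variance, so a fog-degraded camera reading contributes less than a confident radar return. The sketch below illustrates that general principle only; it is not NXP’s or Au-Zone’s 3D fusion pipeline, and the sensor names and numbers are hypothetical.

```python
# Generic sketch of inverse-variance (minimum-variance) fusion of two
# noisy range measurements. Illustrative only -- not the Au-Zone/NXP
# 3D sensor-fusion stack; sensor roles and values are made up.

def fuse(z1, var1, z2, var2):
    """Fuse two measurements of one quantity, weighting by 1/variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always smaller than either input variance
    return fused, fused_var

# Hypothetical readings: camera depth 10.0 m (noisy in fog, var 4.0),
# radar range 10.6 m (var 1.0).
z, var = fuse(10.0, 4.0, 10.6, 1.0)
# The fused estimate leans toward the lower-variance radar reading,
# and the fused variance beats both inputs.
```

Production systems extend this idea with Kalman filters and full 3D state, but the weighting intuition is the same: trust each sensor in proportion to how certain it is right now.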

Why Openness Matters for AI at the Edge

This blog post was originally published at Synaptics’ website. It is reprinted here with the permission of Synaptics. Openness across software, standards, and silicon is critical for ensuring interoperability, flexibility, and the growth of AI at the edge. AI continues to migrate towards the edge and is no longer confined to the datacenter. Edge AI brings…

Bringing Edge AI Performance to PyTorch Developers with ExecuTorch 1.0

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. ExecuTorch 1.0, an open source solution for training and inference on the edge, becomes available to all developers. Qualcomm Technologies contributed to the ExecuTorch repository so developers can access the Qualcomm® Hexagon™ NPU directly. This streamlines the developer workflow…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone

+1 (925) 954-1411