Tools

How to Think About Large Language Models on the Edge

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. ChatGPT was released to the public on November 30th, 2022, and the world – at least, the connected world – has not been the same since. Surprisingly, almost three years later, despite massive adoption, we do not […]


3LC Demonstration of Catching Synthetic Slip-ups with 3LC

Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates the investigation of a curious embryo classification study from Norway, where synthetic data was supposed to help train a model – but something didn’t quite hatch right. Using 3LC to


Software-defined Vehicles Drive Next-generation Auto Architectures

SDV Level Chart: major OEMs compared.

The automotive industry is undergoing a foundational shift toward Software-Defined Vehicles (SDVs), where vehicle functionality, user experience, and monetization opportunities are governed increasingly by software rather than hardware. This evolution, captured comprehensively in the latest IDTechEx report, “Software-Defined Vehicles, Connected Cars, and AI in Cars 2026-2036: Markets, Trends, and


One Year of Qualcomm AI Hub: Enabling Developers and Driving the Future of AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The past year has been an incredible journey for Qualcomm AI Hub. We’ve seen remarkable growth, innovation and momentum — and we’re only getting started. Qualcomm AI Hub has become a key resource for developers looking to


3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector

Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset – like unlabeled forks and phantom objects – using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on a


VeriSilicon Demonstration of the Open Se Cura Project

Chris Wang, VP of Multimedia Technologies and a member of the CTO office at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Wang demonstrates examples from the Open Se Cura Project, a joint effort between VeriSilicon and Google. The project showcases a scalable, power-efficient, and


R²D²: Training Generalist Robots with NVIDIA Research Workflows and World Foundation Models

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. A major challenge in robotics is training robots to perform new tasks without the massive effort of collecting and labeling datasets for every new task and environment. Recent research efforts from NVIDIA aim to solve this challenge


Synopsys Demonstration of SiEngine’s AD1000 ADAS Chip, Powered by Synopsys NPX6 NPU IP

Gordon Cooper, Principal Product Manager at Synopsys, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Cooper demonstrates the powerful SiEngine AD1000, which features Synopsys NPX6 NPU IP, along with its robust toolchain, including a debugger, profiler, and simulator. Learn how the platform supports TensorFlow, ONNX, and


Synopsys Demonstration of Smart Architectural Exploration for AI SoCs

Guy Ben Haim, Senior Product Manager, and Gururaj Rao, Field Applications Engineer, both of Synopsys, demonstrate the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Ben Haim and Rao demonstrate how to optimize neural network performance with the Synopsys ARC MetaWare MX Development Toolkit. Ben Haim and


SqueezeBits Demonstration of On-device LLM Inference, Running a 2.4B Parameter Model on the iPhone 14 Pro

Taesu Kim, CTO of SqueezeBits, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kim demonstrates a 2.4-billion-parameter large language model (LLM) running entirely on an iPhone 14 Pro without server connectivity. The device operates in airplane mode, highlighting on-device inference using a hybrid approach that


