Blog Posts

Improving Synthetic Data Augmentation and Human Action Recognition with SynthDa

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Human action recognition is a key capability of AI systems designed for safety-critical applications, such as surveillance, eldercare, and industrial monitoring. However, many real-world datasets are limited by data imbalance, privacy constraints, or insufficient coverage of rare but […]

Video Self-distillation for Single-image Encoders: Learning Temporal Priors from Unlabeled Video

This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI. The post proposes a simple next-frame prediction task that uses unlabeled video to enhance single-image encoders, injecting 3D geometric and temporal priors into image-based models without requiring optical flow or object tracking, and outperforming state-of-the-art self-supervised methods like DoRA […]

Comparing Synthetic Data Platforms: Synetic AI and NVIDIA Omniverse

This blog post was originally published at Synetic AI’s website. It is reprinted here with the permission of Synetic AI. The post compares Synetic AI and NVIDIA Omniverse for synthetic data generation, focusing on deployment-ready computer vision models. Whether you’re exploring simulation tools or evaluating dataset creation platforms, this guide outlines key differences and […]

Optimizing Your AI Model for the Edge

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Key takeaways: We talk about five techniques (compiling to machine code, quantization, weight pruning, domain-specific fine-tuning, and training small models with larger models) that can be used to improve on-device AI model performance. Whether you think edge AI is […]
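Of the five techniques the excerpt names, quantization is often the quickest to try. The sketch below is our own illustration, not code from the post: it applies PyTorch’s post-training dynamic quantization to a stand-in model, converting the weights of its linear layers to int8.

```python
# Illustrative only: post-training dynamic quantization in PyTorch.
# The model below is a stand-in; in practice you would quantize your
# own trained network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Convert Linear weights to int8; activations are quantized dynamically
# at inference time, trading a little accuracy for a smaller, faster
# model on CPU.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```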

Why HDR and LED Flicker Mitigation Are Game-changers for Forward-facing Cameras in ADAS

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In ADAS, forward-facing cameras capture traffic signs, signals, and pedestrians at longer distances using a narrow field of view (FOV). This narrower angle enables the camera to focus on distant objects with greater accuracy, making […]

Best-in-class Multimodal RAG: How the Llama 3.2 NeMo Retriever Embedding Model Boosts Pipeline Accuracy

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Data goes far beyond text: it is inherently multimodal, encompassing images, video, audio, and more, often in complex and unstructured formats. While the common method is to convert PDFs, scanned images, slides, and other documents into text, it […]

Achieving High-speed Automatic Emergency Braking with AI-driven 4D Imaging Radar

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Across the globe, regulators are accelerating efforts to make roads safer through the widespread adoption of Automatic Emergency Braking (AEB). In the United States, the National Highway Traffic Safety Administration (NHTSA) implemented a sweeping regulation that requires […]

Qualcomm Trends and Technologies to Watch in IoT and Edge AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. “It’s amazing how Qualcomm was able to turn the ship on a dime since the last [Embedded World] show. The launch of Qualcomm Dragonwing and the Partner Day event were on point and helpful, showing Qualcomm’s commitment […]

How Does a Forward-facing Camera Work, and What Are Its Use Cases in ADAS?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Forward-facing cameras are the proverbial eyes of Advanced Driver Assistance Systems (ADAS). They collect real-time visual data from the vehicle’s surroundings and monitor the road, contributing to the system’s overall situational awareness. They capture key […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
