Brian Dipert

US Export Controls on AI Chips Boost Domestic Innovation in China

AI chips for data centers rely on international collaboration in design, manufacturing, and distribution; however, the US has cornered China by restricting this collaboration. These AI processors see increasing demand in data centers, but this demand comes with high energy consumption and capital costs. The discussion around advanced chips for artificial intelligence, driven by billions in […]

Software-defined Vehicles, Connected Cars, and AI in Cars 2026-2036: Markets, Trends, and Forecasts

For more information, visit https://www.idtechex.com/en/research-report/software-defined-vehicles-connected-cars-and-ai-in-cars/1108. SDV central compute will hit US$755B by 2029; SDV feature revenue will grow at a 34% CAGR through 2035. The automotive industry is undergoing a foundational shift toward software-defined vehicles (SDVs), where functionality, value, and user experience are increasingly governed by software rather than hardware. IDTechEx’s report, “Software-Defined Vehicles, Connected Cars, and
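The headline growth rate can be made concrete with compound-growth arithmetic. The sketch below is illustrative only (the baseline value is a placeholder, not a figure from the report); it shows what a sustained 34% CAGR implies over a decade:

```python
def cagr_growth(start_value, cagr, years):
    """Project a value forward at a constant compound annual growth rate."""
    return start_value * (1 + cagr) ** years

# Illustrative: a revenue stream growing at 34% CAGR multiplies
# roughly 18.7x over ten years, regardless of the starting value.
multiple = cagr_growth(1.0, 0.34, 10)
```

This is why even modest-looking annual percentages compound into order-of-magnitude market shifts over a forecast horizon like 2026-2036.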

e-con Systems Launches ONVIF-compliant 4K HDR GigE Camera for Smart Vision Applications

Reliable Vision Meets Low-Light Excellence California & Chennai (July 9, 2025): e-con Systems®, a global leader in embedded vision solutions, is excited to introduce the ONVIF-compliant 4K HDR GigE Camera – RouteCAM_CU86 – built with the Sony STARVIS 2 IMX678 sensor and engineered to provide exceptional image clarity and reliability even in the most challenging

Edge AI and Vision Insights: July 9, 2025

NEW APPROACHES TO VISION AND MULTIMEDIA AT THE EDGE How Qualcomm Is Powering AI-Driven Multimedia at the Edge In this 2025 Embedded Vision Summit talk, Ning Bi, Vice President of Engineering at Qualcomm Technologies, explores the evolution of multimedia processing at the edge, from simple early use cases such as audio and video processing powered

Why HDR and LED Flicker Mitigation Are Game-changers for Forward-facing Cameras in ADAS

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In ADAS, forward-facing cameras capture traffic signs, signals, and pedestrians at farther distances using a narrow field of view (FOV). This narrower angle enables the camera to focus on distant objects with greater accuracy, making
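As a rough illustration of why a narrow FOV helps with distant objects: for a fixed sensor width, angular resolution (pixels per degree) scales inversely with the field of view. The sensor width and FOV values below are assumptions chosen for illustration, not specifications from the post:

```python
def pixels_per_degree(h_pixels, h_fov_deg):
    """Horizontal angular resolution of a camera: sensor pixels spread over the FOV."""
    return h_pixels / h_fov_deg

# Assumed 3840-pixel-wide (4K) sensor with illustrative narrow vs. wide FOVs.
narrow = pixels_per_degree(3840, 30)    # narrow FOV: more pixels on a distant sign
wide = pixels_per_degree(3840, 120)     # wide FOV: same pixels spread 4x thinner
```

Under these assumptions the narrow lens puts four times as many pixels on each degree of the scene, which is what lets a forward-facing ADAS camera resolve signs and pedestrians at range.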

Aizip Demonstration of Its Personal Offline AI Assistant for Biking on a Cadence Tensilica HiFi DSP Platform

Nathan Francis, Head of Business Development at Aizip, demonstrates the company’s latest edge AI and vision technologies and products in Cadence’s booth at the 2025 Embedded Vision Summit. Specifically, Francis demonstrates the capabilities of his company’s small language model capable of running on a bike computer. You can’t assume internet connectivity when biking in the

DeGirum Demonstration of Its PySDK Running on BrainChip Hardware for Real-time Edge AI

Stephan Sokolov, Software Engineer at DeGirum, demonstrates the company’s latest edge AI and vision technologies and products in BrainChip’s booth at the 2025 Embedded Vision Summit. Specifically, Sokolov demonstrates the power of real-time AI inference at the edge, running DeGirum’s PySDK application directly on BrainChip hardware. This demo showcases low-latency, high-efficiency performance as a script

Best-in-class Multimodal RAG: How the Llama 3.2 NeMo Retriever Embedding Model Boosts Pipeline Accuracy

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Data goes far beyond text—it is inherently multimodal, encompassing images, video, audio, and more, often in complex and unstructured formats. While the common method is to convert PDFs, scanned images, slides, and other documents into text, it
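The retrieval step described above, embedding heterogeneous content (text, charts, tables) into a shared vector space and ranking by similarity, can be sketched with toy vectors. The embeddings and item names below are made up for illustration; a real pipeline would obtain them from an embedding model such as the one the post discusses:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy corpus: each item (text chunk, chart, table) is represented by a
# hypothetical embedding vector in the same space as the query.
corpus = {
    "chart: quarterly revenue": [0.9, 0.1, 0.2],
    "paragraph about pricing": [0.2, 0.8, 0.1],
    "table of SKUs": [0.1, 0.2, 0.9],
}
query = [0.85, 0.15, 0.25]  # hypothetical embedding of "show revenue trends"

# Retrieval = rank corpus items by similarity to the query embedding.
best = max(corpus, key=lambda k: cosine(query, corpus[k]))
```

The point of a multimodal embedder is that the chart lands near the revenue query without ever being flattened to lossy extracted text.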

BrainChip Demonstration of LLM Inference On an FPGA at the Edge using the TENNs Framework

Kurt Manninen, Senior Solutions Architect at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Manninen demonstrates his company’s large language models (LLMs) running on an FPGA edge device, powered by BrainChip’s proprietary TENNs (Temporal Event-Based Neural Networks) framework. BrainChip enables real-time generative AI

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411