TECHNOLOGIES

Production-Ready, Full-Stack Edge AI Solutions Turn Microchip’s MCUs and MPUs Into Catalysts for Intelligent Real-Time Decision-Making

Chandler, Ariz., February 10, 2026 — A major next step for artificial intelligence (AI) and machine learning (ML) innovation is moving ML models from the cloud to the edge for real-time inferencing and decision-making applications in today’s industrial, automotive, data center and consumer Internet of Things (IoT) networks. Microchip Technology (Nasdaq: MCHP) has extended its edge AI offering […]

Accelerating next-generation automotive designs with the TDA5 Virtualizer™ Development Kit

This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments. Introduction Continuous innovation in high-performance, power-efficient systems-on-a-chip (SoCs) is enabling safer, smarter and more autonomous driving experiences in even more vehicles. As another big step forward, Texas Instruments and Synopsys developed a Virtualizer Development Kit™ (VDK) for the […]

Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse. New NVIDIA safety […]

What Sensor Fusion Architecture Offers for NVIDIA Orin NX-Based Autonomous Vision Systems

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key takeaways:

- Why multi-sensor timing drift weakens edge AI perception
- How GNSS-disciplined clocks align cameras, LiDAR, radar, and IMUs
- The role of the Orin NX as a central timing authority for sensor fusion
- Operational gains from unified time-stamping
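The alignment idea in the takeaways above, pairing samples from different sensors against one shared clock, can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and tolerance are assumptions, not from e-con Systems’ post): given camera and LiDAR timestamps already disciplined to a common reference, it matches each camera frame to the nearest LiDAR sweep within a tolerance.

```python
# Illustrative sketch: pairing camera and LiDAR samples that share a
# common (e.g. GNSS-disciplined) clock by nearest timestamp.
def pair_by_timestamp(cam_ts, lidar_ts, tol_s=0.005):
    """Return (cam_index, lidar_index) pairs whose timestamps differ by <= tol_s.

    Both input lists are assumed sorted ascending, in seconds.
    """
    pairs = []
    j = 0
    for i, t in enumerate(cam_ts):
        # Advance the LiDAR pointer while the next timestamp is at least
        # as close to this camera frame as the current one.
        while j + 1 < len(lidar_ts) and abs(lidar_ts[j + 1] - t) <= abs(lidar_ts[j] - t):
            j += 1
        if lidar_ts and abs(lidar_ts[j] - t) <= tol_s:
            pairs.append((i, j))
    return pairs

cam = [0.000, 0.033, 0.066, 0.100]   # ~30 fps camera timestamps (seconds)
lidar = [0.001, 0.051, 0.101]        # ~20 Hz LiDAR timestamps (seconds)
print(pair_by_timestamp(cam, lidar))  # only frames 0 and 3 have a LiDAR match
```

Without a shared clock, each sensor's timestamps drift independently and a fixed tolerance like this stops being meaningful, which is the failure mode the post's takeaways describe.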

Enhancing Images: Adaptive Shadow Correction Using OpenCV

This blog post was originally published at OpenCV’s website. It is reprinted here with the permission of OpenCV. Imagine capturing the perfect landscape photo on a sunny day, only to find harsh shadows obscuring key details and distorting colors. Similarly, in computer vision projects, shadows can interfere with object detection algorithms, leading to inaccurate results.
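As a toy illustration of the shadow problem described above: dark regions can be selectively brightened while leaving well-lit pixels alone. This simplified sketch works on a grayscale image as nested lists; the real OpenCV pipeline in the post will differ (such approaches typically operate in LAB color space with CLAHE), and the function name and thresholds here are assumptions.

```python
# Hypothetical, simplified adaptive shadow lift on a grayscale image
# (nested lists of 0-255 ints). Pixels below `threshold` get a gamma
# lift (gamma < 1 brightens); brighter pixels pass through unchanged.
def lift_shadows(img, threshold=80, gamma=0.6):
    out = []
    for row in img:
        new_row = []
        for v in row:
            if v < threshold:
                # Normalize to [0, 1], raise to gamma, rescale to [0, 255].
                v = round(255 * (v / 255) ** gamma)
            new_row.append(min(255, v))
        out.append(new_row)
    return out

img = [[20, 200],
       [60, 240]]
print(lift_shadows(img))  # dark pixels 20 and 60 are lifted; 200 and 240 untouched
```

The "adaptive" part of a production approach lies in choosing the threshold and correction strength from local image statistics rather than fixed constants, which is where histogram-based methods like CLAHE come in.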

Driving the Future of Automotive AI: Meet RoX AI Studio

This blog post was originally published at Renesas’ website. It is reprinted here with the permission of Renesas. In today’s automotive industry, onboard AI inference engines drive numerous safety-critical Advanced Driver Assistance Systems (ADAS) features, all of which require consistent, high-performance processing. Given that AI model engineering is inherently iterative (numerous cycles of ‘train, validate, and […]

Upcoming Webinar on Industrial 3D Vision with iToF Technology

On February 18, 2026, at 9:00 am PST (12:00 pm EST), and on February 19, 2026, at 11:00 am CET, Alliance Member company e-con Systems, in partnership with onsemi, will deliver the webinar “Enabling Reliable Industrial 3D Vision with iToF Technology.” From the event page: “Join e-con Systems and onsemi for an exclusive joint webinar […]”

Production Software Meets Production Hardware: Jetson Provisioning Now Available with Avocado OS

This blog post was originally published at Peridio’s website. It is reprinted here with the permission of Peridio. The gap between robotics prototypes and production deployments has always been an infrastructure problem disguised as a hardware problem. Teams build incredible computer vision models and robotic control systems on NVIDIA Jetson developer kits, only to hit […]

Google Adds “Agentic Vision” to Gemini 3 Flash

Jan. 30, 2026 — Google has announced Agentic Vision, a new capability in Gemini 3 Flash that turns image understanding into an active, tool-using workflow rather than a single “static glance.” Agentic Vision pairs visual reasoning with code execution (Python) so the model can iteratively zoom in, crop, annotate, and otherwise manipulate an image to […]
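The crop-and-zoom operations described above are ordinary image manipulations a code-execution loop can perform. The sketch below is purely illustrative, with an image as nested lists; the function names are hypothetical and are not Gemini API calls.

```python
# Illustrative crop + zoom, the kind of manipulation an agentic vision
# loop might run via code execution to inspect a region more closely.
def crop(img, top, left, height, width):
    """Extract a height x width region starting at (top, left)."""
    return [row[left:left + width] for row in img[top:top + height]]

def zoom(img, factor):
    """Nearest-neighbor upsampling: repeat each pixel `factor` times in x and y."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
roi = crop(img, 0, 1, 2, 2)  # rows 0-1, cols 1-2
print(roi)
print(zoom(roi, 2))          # 2x2 region blown up to 4x4
```

In an agentic loop the model would decide the crop coordinates itself from an initial pass, re-examine the zoomed region, and repeat until it can answer confidently.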

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411