TECHNOLOGIES

TI Accelerates the Next Generation of Physical AI with NVIDIA

News highlights: TI and NVIDIA are collaborating to accelerate the path from simulation to the safe deployment of humanoid robots in the real world. As part of this collaboration, TI integrated its mmWave radar technology with NVIDIA Jetson Thor and NVIDIA Holoscan to enable low-latency 3D perception and safety awareness for physical AI applications. TI […]
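The teaser carries no implementation detail, but the safety-awareness idea it describes reduces to a simple check: do any radar returns fall inside a protective zone around the robot? Below is a minimal NumPy sketch of that check, for illustration only; the point-cloud layout, zone radius, and noise threshold are assumptions, not TI or NVIDIA APIs.

# Hypothetical sketch: flag radar returns inside a robot's protective zone.
# Point-cloud layout and thresholds are assumptions, not TI/NVIDIA APIs.
import numpy as np

def safety_stop(points, zone_radius_m=1.0, min_points=5):
    """points: (N, 4) array of (x, y, z, doppler) in robot-centric meters.
    Returns True if enough returns sit inside the protective zone,
    i.e., the robot should slow or stop."""
    ranges = np.linalg.norm(points[:, :3], axis=1)
    # Require several hits inside the zone so a single noisy return
    # does not trigger a stop.
    return np.count_nonzero(ranges < zone_radius_m) >= min_points

# Example: one synthetic frame with a cluster about 0.5 m from the robot.
frame = np.vstack([
    np.random.uniform(2.0, 5.0, size=(50, 4)),           # background returns
    np.hstack([np.full((6, 3), 0.3), np.zeros((6, 1))]), # close cluster
])
print(safety_stop(frame))  # True: six returns inside the 1 m zone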


ModelCat AI Announces AI Model Portability Across Silicon Devices

In an industry first, ModelCat’s agentic AI generates models for new chips from a user’s current production models, dramatically accelerating the move of inferencing to the edge. SUNNYVALE, Calif., March 5, 2026 /PRNewswire/ — ModelCat, the creator of the world’s first fully autonomous AI model builder, today announced its latest platform capability: Model Retargeting (Patent Pending). Using Model Retargeting, ModelCat customers gain model…
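The excerpt does not explain how Model Retargeting works internally, so the sketch below shows only the conventional baseline it claims to improve on: export a production model to an interchange format, then recompile it per target chip. torch.onnx.export is a real PyTorch call; the compile step and target names are placeholders invented for illustration.

# Conventional model-portability baseline, not ModelCat's method:
# export once to ONNX, then recompile per target with a vendor toolchain.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
example = torch.randn(1, 16)
torch.onnx.export(model, example, "model.onnx")  # portable graph

def compile_for_target(onnx_path, target):
    # Placeholder: in practice a vendor toolchain (TensorRT, OpenVINO,
    # a chip vendor's SDK) consumes the ONNX file here.
    return f"{onnx_path}.{target}.bin"

for chip in ("npu_a", "npu_b"):  # hypothetical target names
    print(compile_for_target("model.onnx", chip))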


STM32U3B5/U3C5: Bringing High-Performance DSP & Edge AI to Ultralow Power Designs

Built on the Arm® Cortex®‑M33 core, the STM32U3B5/U3C5 MCUs combine up to 2 Mbytes of dual‑bank flash memory with 640 Kbytes of RAM and are available in packages from 48 to 144 pins (UFQFPN, WLCSP, LQFP, and UFBGA). The lines introduce a hardware signal processor (HSP) to the STM32U3 portfolio, offloading complex DSP and edge‑AI workloads and…
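For a concrete sense of what “DSP workloads” means here, a finite-impulse-response filter is the canonical kernel a block like the HSP would offload from the Cortex‑M33. The NumPy version below illustrates the workload class only; the sample rate and tap count are arbitrary, and this is not ST’s API.

# Illustrative FIR filter: the class of kernel a DSP coprocessor offloads.
import numpy as np

fs = 1_000                                  # sample rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
noisy = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

taps = np.ones(32) / 32                     # 32-tap moving-average FIR
clean = np.convolve(noisy, taps, mode="same")
print(clean[:5])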


From ADAS to Robotaxi: How Vision Systems Must Level Up to Meet New Mobility Use Cases (Part 2)

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key takeaways: how urban lighting and motion define robotaxi imaging needs; which camera features support reliable perception during day and night operation; why unified AI vision boxes reduce latency and coordination gaps; and how integrated vision platforms…


Accelerating Product Development in the Era of Physical AI

This video was originally published at Peridio’s website. It is reprinted here with the permission of Peridio. The embedded world is undergoing its biggest transformation in a generation. AI workloads are now moving into the physical world — into cameras, robots, tractors, and drones — and edge devices are evolving into intelligent agents. Yet the…


Airy3D and Lattice to Showcase Compact, Integrated Humanoid and Robotic 3D Vision Demo at Embedded World 2026

Montreal, Canada — March 4, 2026 — Airy3D today announced a joint demonstration with Lattice Semiconductor highlighting a compact and compute-efficient 3D vision solution for humanoids and advanced robotics, which will be on display at Embedded World 2026. The demo combines Airy3D’s DepthIQ™ technology with a compact, low-power Lattice CrossLink™-NX FPGA to enable high-quality depth…


Multi-Sensor IoT Architecture: Inside the Stack and How to Scale It

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. What Is a Multi-Sensor Stack, Really? At its core, a multi-sensor stack is a layered system where multiple sensor types (visual, thermal, acoustic, motion, environmental) work in parallel to generate a contextual understanding of the world around…
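To make the layering concrete, here is a small, self-contained Python sketch of the pattern the excerpt describes: a sensing layer that reads several modalities in the same cycle, and a context layer that fuses them into a situational label. Every sensor name, reading, and threshold below is invented for the example; this is not Qualcomm’s stack.

# Illustrative two-layer multi-sensor stack; all names and thresholds
# are invented for the example.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sensor:
    name: str
    read: Callable[[], float]  # returns one calibrated measurement

def fuse(sensors):
    # Sensing layer: poll every modality in the same cycle.
    return {s.name: s.read() for s in sensors}

def contextualize(frame):
    # Context layer: turn raw readings into a situational label.
    # Thresholds are placeholders.
    if frame["thermal_c"] > 60.0 and frame["motion_g"] > 1.5:
        return "possible equipment fault"
    return "nominal"

stack = [
    Sensor("thermal_c", lambda: 72.0),    # thermal hotspot, deg C
    Sensor("motion_g", lambda: 2.1),      # vibration, g
    Sensor("acoustic_db", lambda: 88.0),  # sound level, dB
]
print(contextualize(fuse(stack)))  # -> possible equipment fault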


Why On-device AI Matters

This blog post was originally published at ENERZAi’s website. It is reprinted here with the permission of ENERZAi. Hello! I’m Minwoo Son from ENERZAi’s Business Development team. Through several posts so far, we’ve shared ENERZAi’s full-stack software capabilities for delivering high-performance on-device AI — including Optimium, our proprietary AI compiler that encapsulates our optimization expertise; …


Upcoming Webinar on LLM-driven Driver Development

On March 19, 2026, at 1:00 pm EDT (10:00 am PDT), Boston.AI will deliver a webinar, "Intelligent Driver Development with LLM Context Engineering." From the event page: Developing even simple sensor drivers can consume valuable engineering time, requiring manual transcription of registers from datasheets into code—an error-prone and repetitive process. In this webinar, you’ll…
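As a concrete picture of the manual step the webinar targets, the sketch below shows what transcribing a datasheet’s register table into code typically produces: register addresses, bit-field constants, and an init routine. The device, addresses, and bus class are invented for illustration; a fake bus stands in for real I2C so the example runs anywhere.

# What "transcribing registers from a datasheet" typically produces.
# Device, addresses, and bus interface are invented for illustration.
REG_WHO_AM_I = 0x0F        # read-only ID register
REG_CTRL1    = 0x20        # power mode and output data rate
CTRL1_ODR_100HZ = 0b0101_0000
CTRL1_ENABLE    = 0b0000_0111

class FakeI2CBus:
    """Stand-in for a real I2C driver so the sketch runs anywhere."""
    def __init__(self):
        self.regs = {REG_WHO_AM_I: 0x33, REG_CTRL1: 0x00}
    def read_u8(self, addr, reg):
        return self.regs[reg]
    def write_u8(self, addr, reg, value):
        self.regs[reg] = value

def init_sensor(bus, addr=0x19):
    # Verify the chip ID, then enable the sensor at 100 Hz.
    assert bus.read_u8(addr, REG_WHO_AM_I) == 0x33, "wrong device"
    bus.write_u8(addr, REG_CTRL1, CTRL1_ODR_100HZ | CTRL1_ENABLE)

bus = FakeI2CBus()
init_sensor(bus)
print(hex(bus.read_u8(0x19, REG_CTRL1)))  # 0x57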


From ADAS to Robotaxi: How to Overcome the Major Vision Challenges

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Key takeaways: why robotaxi vision needs more than task-driven ADAS sensing; the impact of long-duty operation and changing lighting on perception reliability; challenges faced across vehicles, cities, and operating conditions; and how visual data continuity affects…


