
“Evolving Inference Processor Software Stacks to Support LLMs,” a Presentation from Expedera

Ramteja Tadishetti, Principal Software Engineer at Expedera, presents the “Evolving Inference Processor Software Stacks to Support LLMs” tutorial at the May 2025 Embedded Vision Summit. As large language models (LLMs) and vision-language models (VLMs) have quickly become important for edge applications from smartphones to automobiles, chipmakers and IP providers have…


Vision Components at Sensors Converge: VC MIPI Cameras with Cables Up to 10 Meters

Ettlingen, June 18, 2025. At Sensors Converge, June 24-26 in Santa Clara, CA, Vision Components presents its modular VC MIPI Bricks system with new micro-coax and GMSL2 cable options. These enable building MIPI-based embedded vision systems with cable lengths of up to 10 meters between the camera module and processor board. The new micro-coax


How Does Region of Interest (ROI)-based Exposure Benefit Embedded Vision Applications?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. ROI-based Auto Exposure (AE) and High Dynamic Range (HDR) enhance image quality in embedded vision applications. The See3CAM_CU81, a 4K HDR USB camera, integrates these features for precise exposure and dynamic range control, making it ideal


“Efficiently Registering Depth and RGB Images,” a Presentation from eInfochips

Naitik Nakrani, Solution Architect Manager at eInfochips, presents the “Efficiently Registering Depth and RGB Images” tutorial at the May 2025 Embedded Vision Summit. As depth sensing and computer vision technologies evolve, integrating RGB and depth cameras has become crucial for reliable and precise scene perception. In this session, Nakrani presents…


ImagingNext: New Leading Event for Embedded Vision Starts in September

This year’s focus: AI-Driven Vision: Shaping the Future of Smart Imaging. Munich, June 17, 2025. The imaging community has a new flagship event that offers vision experts exclusive insights into the latest developments: on September 18, FRAMOS will host ImagingNext, a two-day conference at Smartvillage in Munich’s Bogenhausen district. This year’s conference will


STMicroelectronics Introduces Advanced Human Presence Detection Solution to Enhance Laptop and PC User Experience

New technology delivers a more than 20% reduction in daily power consumption, in addition to improved security and privacy. The ST solution combines market-leading Time-of-Flight (ToF) sensors and unique AI algorithms for a seamless user experience. Geneva, Switzerland, June 17, 2025 — STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics


NVIDIA Holoscan Sensor Bridge Empowers Developers with Real-time Data Processing

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. In the rapidly evolving robotics and edge AI landscape, the ability to efficiently process and transfer sensor data is crucial. Many edge applications are moving away from single-sensor, fixed-function solutions in favor of diverse sensor arrays.


“How to Right-size and Future-proof a Container-first Edge AI Infrastructure,” a Presentation from Avassa and OnLogic

Carl Moberg, CTO of Avassa, and Zoie Rittling, Business Development Manager at OnLogic, co-present the “How to Right-size and Future-proof a Container-first Edge AI Infrastructure” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Moberg and Rittling provide practical guidance on overcoming key challenges in deploying AI at the…


AI On Board: Near Real-time Insights for Sustainable Fishing

This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Marine ecosystems are under pressure from unsustainable fishing, with some populations declining faster than they can recover. Illegal, unreported, and unregulated (IUU) fishing further contributes to the problem, threatening biodiversity, economies, and global seafood supply chains. While many


“Image Tokenization for Distributed Neural Cascades,” a Presentation from Google and VeriSilicon

Derek Chow, Software Engineer at Google, and Shang-Hung Lin, Vice President of NPU Technology at VeriSilicon, co-present the “Image Tokenization for Distributed Neural Cascades” tutorial at the May 2025 Embedded Vision Summit. Multimodal LLMs promise to bring exciting new abilities to devices! As we see foundational models become more capable,…


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411