Blog Posts

SENSING Tech to Debut Three Advanced Vision Solutions at Embedded Vision Summit

May 16, 2025 – SENSING Tech will debut three new visual perception solutions at the upcoming Embedded Vision Summit USA, taking place from May 20 to 22 at the Santa Clara Convention Center. Reflecting the company’s ongoing commitment to imaging innovation, the new lineup includes an 8MP HDR/LFM Camera, a Defrosting & Deicing HDR Camera […]

Deploying an Efficient Vision-Language Model on Mobile Devices

This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI. Recent large language models (LLMs) have demonstrated unprecedented performance in a variety of natural language processing (NLP) tasks. Thanks to their versatile language processing capabilities, it has become possible to develop various NLP applications that […]

Qualcomm AI Inference Suite: Getting Started is Easy

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Once you have a key, it is simply a matter of choosing how to connect to the inference endpoint. If you are most comfortable with Python, an SDK is provided along with documentation so that you can […]
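
A rough sketch of the “key plus endpoint” pattern described above, in plain Python: it posts to an OpenAI-style chat-completions route with the `requests` library. The URL, model name, and response fields are placeholders for illustration, not the Qualcomm AI Inference Suite’s documented SDK or API.

```python
import os
import requests

# Placeholder endpoint and model identifier for illustration only;
# substitute the values issued with your account.
ENDPOINT = "https://inference.example.com/v1/chat/completions"
API_KEY = os.environ["INFERENCE_API_KEY"]  # the key mentioned in the post

payload = {
    "model": "example-model",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "Summarize edge AI in one sentence."}
    ],
}

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# Assumes an OpenAI-style response body; adjust to the suite's actual schema.
print(response.json()["choices"][0]["message"]["content"])
```

The Python SDK the post refers to wraps these details behind a higher-level client; the HTTP sketch above is only meant to show how little is involved once the key is in hand.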

What is Image Quality and How is It Validated?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Image quality is crucial for embedded vision applications, determining how accurately cameras capture the real world. This blog breaks down the key factors affecting camera image quality, including color accuracy, white balance, lens distortion, and […]
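
White balance is one of the factors the excerpt lists. As a purely illustrative sketch, not e-con Systems’ validation procedure, the classic gray-world correction below rescales each color channel so that its mean matches the image’s overall mean:

```python
import numpy as np

def gray_world_white_balance(rgb: np.ndarray) -> np.ndarray:
    """Gray-world correction for a float RGB image with shape (H, W, 3) in [0, 1]."""
    channel_means = rgb.reshape(-1, 3).mean(axis=0)   # average of R, G and B
    gains = channel_means.mean() / channel_means      # one gain per channel
    return np.clip(rgb * gains, 0.0, 1.0)             # apply gains, keep values in range

# Example: balanced = gray_world_white_balance(frame.astype(np.float32) / 255.0)
```

Validating the other factors (color accuracy, lens distortion and so on) typically relies on test charts and dedicated measurement tooling rather than a one-line correction like this.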

LM Studio Accelerates LLM Performance With NVIDIA GeForce RTX GPUs and CUDA 12.8

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Latest release of the desktop application brings enhanced dev tools and model controls, as well as better performance for RTX GPUs. As AI use cases continue to expand — from document summarization to custom software agents — […]

AI Agents, Explained: Use Cases, Potential and Limitations

This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. AI agents have taken center stage in tech conversations over the past year. Bold claims swirl about how they’ll reinvent workflows, slash costs, and even replace human teams. But with so much hype in the air, it’s […]

Advancing Generative AI at the Edge During CES 2025

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. For this year’s CES, our theme was Your GenAI Edge—highlighting how Ambarella’s AI SoCs continue to redefine what’s possible with generative AI at the edge. Building on last year’s edge GenAI demos, we debuted a new 25-stream, […]

Image Sensor Selection: Five Tradeoffs Every Vision Engineer Should Nail Before Tapeout

This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica. Choosing an image sensor isn’t just a line item on the BOM – it defines how well your camera, robot or inspection system will perform for the next decade. Our new whitepaper, “Image Sensor Selection: Key Factors to […]

What is the Role of Cameras in Pick and Place Robots?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Pick and place robots perform repetitive handling tasks with speed and consistency, making them invaluable across industries. These robots depend heavily on the right camera setup. Get insights about the challenges faced by cameras, their […]

STMicroelectronics Smart Vision Solutions at the 2025 Embedded Vision Summit

STMicroelectronics continues to revolutionize the world of imaging and edge-AI technologies with its innovative ST BrightSense imaging solutions, ST FlightSense Time-of-Flight technologies and its new Arm® Cortex®-M55-based MCU. Leveraging cutting-edge advancements in CMOS image sensors, in mini-LiDAR with flood illumination and in the ST Neural-ART Accelerator, STMicroelectronics offers demos that highlight the capabilities of its […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
