
PerCV.ai: How a Vision AI Platform and the STM32N6 can Turn Around an 80% Failure Rate for AI Projects

This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. The vision AI platform PerCV.ai (pronounced Perceive AI) could be the secret weapon that enables a company to deploy an AI application when so many others fail. The solution from Irida Labs, a member of the ST […]


“Vision-language Models on the Edge,” a Presentation from Hugging Face

Cyril Zakka, Health Lead at Hugging Face, presents the “Vision-language Models on the Edge” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Zakka provides an overview of vision-language models (VLMs) and their deployment on edge devices using Hugging Face’s recently released SmolVLM as an example. He examines the training process of VLMs […]


e-con Systems Expands Camera Support for Renesas’ New RZ/G3E, Enabling Reliable Edge AI Vision Solutions

California & Chennai (Aug 27, 2025) – e-con Systems®, a leading embedded vision solutions provider, announces camera support for Renesas’ latest RZ/G3E microprocessor, strengthening its partnership with Renesas in powering next-generation embedded vision applications such as industrial automation, smart city, automotive and more. Building on our successful integration with Renesas’ RZ/V2N and RZ/V2H processors, e-con […]


OwLite Meets Qualcomm Neural Network: Unlocking On-device AI Performance

This blog post was originally published at SqueezeBits’ website. It is reprinted here with the permission of SqueezeBits. At SqueezeBits, we have been empowering developers to efficiently deploy complex AI models while minimizing performance trade-offs with the OwLite toolkit. With OwLite v2.5, we’re excited to announce official support for Qualcomm Neural Network (QNN) through seamless integration […]


“Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration,” a Presentation from Google

Niyati Prajapati, ML and Generative AI Lead at Google, presents the “Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration” tutorial at the May 2025 Embedded Vision Summit. In this talk, Prajapati explores how vision LLMs can be used in multi-agent collaborative systems to enable new levels of capability and autonomy. She explains the architecture […]


NVIDIA and Intel to Develop AI Infrastructure and Personal Computing Products

Intel to design and manufacture custom data center and client CPUs with NVIDIA NVLink; NVIDIA to invest $5 billion in Intel common stock. September 18, 2025 – NVIDIA (NASDAQ: NVDA) and Intel Corporation (NASDAQ: INTC) today announced a collaboration to jointly develop multiple generations of custom data center and PC products that accelerate applications and workloads […]


Shifting AI Inference from the Cloud to Your Phone Can Reduce AI Costs

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Every AI query has a cost, and not just in dollars. A study shows that distributing AI workloads to your devices, such as your smartphone, can reduce costs and decrease water consumption […]


“Building Agentic Applications for the Edge,” a Presentation from GMAC Intelligence

Amit Mate, Founder and CEO of GMAC Intelligence, presents the “Building Agentic Applications for the Edge” tutorial at the May 2025 Embedded Vision Summit. Along with AI agents, the new generation of large language models, vision-language models and other large multimodal models are enabling powerful new capabilities that promise to transform industries. In this talk […]


FRAMOS Extends D400e 3D Camera Support to NVIDIA Holoscan Sensor Bridge

Industrial-grade depth sensing now integrates seamlessly with edge AI pipelines on NVIDIA Jetson AGX Orin and Jetson AGX Thor. Munich, Bavaria, Germany – September 17, 2025 – FRAMOS announces official support for NVIDIA Holoscan Sensor Bridge on its D400e series of industrial 3D depth cameras, enabling seamless integration of RealSense™-based stereo vision into advanced edge AI […]


Edge AI and Vision Insights: September 17, 2025

LETTER FROM THE EDITOR — Dear Colleague, Next Tuesday, September 23, 2025 at 9 am PT, the Yole Group will present the free webinar “Infrared Imaging: Technologies, Trends, Opportunities and Forecasts” in partnership with the Edge AI and Vision Alliance. Infrared (IR) cameras are increasingly finding adoption in a diversity of applications, as an adjunct or […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411