Semiconductors at the Heart of Automotive’s Next Chapter

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Automotive White Paper, Vol.2, Powered by Yole Group – Shifting gears! KEY TAKEAWAYS The automotive semiconductor market will soar from $68 billion in 2024 to $132 billion in 2030, growing at a […]

How Do You Teach an AI Model to Reason? With Humans

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA’s data factory team creates the foundation for AI models like Cosmos Reason, which today topped the physical reasoning leaderboard on Hugging Face. AI models are advancing at a rapid rate and scale. But what might they

“Scaling Machine Learning with Containers: Lessons Learned,” a Presentation from Instrumental

Rustem Feyzkhanov, Machine Learning Engineer at Instrumental, presents the “Scaling Machine Learning with Containers: Lessons Learned” tutorial at the May 2025 Embedded Vision Summit. In the dynamic world of machine learning, efficiently scaling solutions from research to production is crucial. In this presentation, Feyzkhanov explores the nuances of scaling machine…

PerCV.ai: How a Vision AI Platform and the STM32N6 can Turn Around an 80% Failure Rate for AI Projects

This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. The vision AI platform PerCV.ai (pronounced Perceive AI) could be the secret weapon that enables a company to deploy an AI application when so many others fail. The solution from Irida Labs, a member of the ST

“Vision-language Models on the Edge,” a Presentation from Hugging Face

Cyril Zakka, Health Lead at Hugging Face, presents the “Vision-language Models on the Edge” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Zakka provides an overview of vision-language models (VLMs) and their deployment on edge devices using Hugging Face’s recently released SmolVLM as an example. He examines…

e-con Systems Expands Camera Support for Renesas’ New RZ/G3E, Enabling Reliable Edge AI Vision Solutions

California & Chennai (Aug 27, 2025) – e-con Systems®, a leading embedded vision solutions provider, announces camera support for Renesas’ latest RZ/G3E microprocessor, strengthening its partnership with Renesas in powering next-generation embedded vision applications such as industrial automation, smart cities, automotive, and more. Building on our successful integration with Renesas’ RZ/V2N and RZ/V2H processors, e-con

OwLite Meets Qualcomm Neural Network: Unlocking On-device AI Performance

This blog post was originally published at SqueezeBits’ website. It is reprinted here with the permission of SqueezeBits. At SqueezeBits, we have been empowering developers to efficiently deploy complex AI models while minimizing performance trade-offs with the OwLite toolkit. With OwLite v2.5, we’re excited to announce official support for Qualcomm Neural Network (QNN) through seamless integration

“Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration,” a Presentation from Google

Niyati Prajapati, ML and Generative AI Lead at Google, presents the “Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration” tutorial at the May 2025 Embedded Vision Summit. In this talk, Prajapati explores how vision LLMs can be used in multi-agent collaborative systems to enable new levels of capability and…

NVIDIA and Intel to Develop AI Infrastructure and Personal Computing Products

Intel to design and manufacture custom data center and client CPUs with NVIDIA NVLink; NVIDIA to invest $5 billion in Intel common stock September 18, 2025 – NVIDIA (NASDAQ: NVDA) and Intel Corporation (NASDAQ: INTC) today announced a collaboration to jointly develop multiple generations of custom data center and PC products that accelerate applications and workloads

Shifting AI Inference from the Cloud to Your Phone Can Reduce AI Costs

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Every AI query has a cost, and not just in dollars. A study shows that distributing AI workloads to your devices — such as your smartphone — can reduce costs and decrease water consumption. What you should know: Study

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411