Software for Embedded Vision

How to Integrate Computer Vision Pipelines with Generative AI and Reasoning

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Generative AI is opening new possibilities for analyzing existing video streams. Video analytics are evolving from counting objects to turning raw video footage into real-time understanding, enabling more actionable insights. The NVIDIA AI Blueprint for…
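
As a rough illustration of the general pattern described above (not the NVIDIA AI Blueprint itself), the sketch below samples frames from a video with OpenCV and hands them to a generative model for description and summarization. The describe_frame and summarize helpers are hypothetical placeholders for whatever vision-language model and LLM a given pipeline actually uses.

```python
# Minimal sketch: sample frames from a video, describe each sampled frame with
# a generative model, then condense the per-frame descriptions into one summary.
# The VLM/LLM calls are hypothetical placeholders, not a specific product API.
import cv2  # OpenCV for video decoding


def describe_frame(frame) -> str:
    """Hypothetical call into a vision-language model that captions one frame."""
    raise NotImplementedError("plug in the VLM of your choice here")


def summarize(descriptions: list[str]) -> str:
    """Hypothetical call into an LLM that condenses per-frame captions."""
    raise NotImplementedError("plug in the LLM of your choice here")


def summarize_video(path: str, every_n_frames: int = 30) -> str:
    cap = cv2.VideoCapture(path)
    captions, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:  # sample sparsely to bound generative-model cost
            captions.append(describe_frame(frame))
        idx += 1
    cap.release()
    return summarize(captions)
```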

Read More »

“Scaling Artificial Intelligence and Computer Vision for Conservation,” a Presentation from The Nature Conservancy

Matt Merrifield, Chief Technology Officer at The Nature Conservancy, presents the “Scaling Artificial Intelligence and Computer Vision for Conservation” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Merrifield explains how the world’s largest environmental nonprofit is spearheading projects to scale the use of edge AI and vision…

Read More »

“A Lightweight Camera Stack for Edge AI,” a Presentation from Meta

Jui Garagate, Camera Software Engineer, and Karthick Kumaran, Staff Software Engineer, both of Meta, co-present the “A Lightweight Camera Stack for Edge AI” tutorial at the May 2025 Embedded Vision Summit. Electronic products for virtual and augmented reality, home robots and cars deploy multiple cameras for computer vision and AI use…

Read More »

“Unlocking Visual Intelligence: Advanced Prompt Engineering for Vision-language Models,” a Presentation from LinkedIn Learning

Alina Li Zhang, Senior Data Scientist and Tech Writer at LinkedIn Learning, presents the “Unlocking Visual Intelligence: Advanced Prompt Engineering for Vision-language Models” tutorial at the May 2025 Embedded Vision Summit. Imagine a world where AI systems automatically detect thefts in grocery stores, ensure construction site safety and identify patient…

Read More »

The Edge’s Essential Role in the Future of AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. What you should know: The future of AI will be hybrid, with the cloud and the edge working together — each playing a vital role. The user interface (UI) is now human-centric — your device understands your…

Read More »

“Deploying Accelerated ML and AI: The Role of Khronos Open Standards,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, presents the “Deploying Accelerated ML and AI: The Role of Khronos Open Standards” tutorial at the May 2025 Embedded Vision Summit. Accelerating machine learning and AI workloads often requires specialized hardware, but managing compatibility across…

Read More »

“Scaling Computer Vision at the Edge,” a Presentation from Invisible AI

Eric Danziger, CEO of Invisible AI, presents the “Scaling Computer Vision at the Edge” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Danziger introduces a comprehensive framework for scaling computer vision systems across three critical dimensions: capability evolution, infrastructure decisions and deployment scaling. Today’s leading-edge vision systems…

Read More »

How Do You Teach an AI Model to Reason? With Humans

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA’s data factory team creates the foundation for AI models like Cosmos Reason, which today topped the physical reasoning leaderboard on Hugging Face. AI models are advancing at a rapid rate and scale. But what might they…

Read More »

“Scaling Machine Learning with Containers: Lessons Learned,” a Presentation from Instrumental

Rustem Feyzkhanov, Machine Learning Engineer at Instrumental, presents the “Scaling Machine Learning with Containers: Lessons Learned” tutorial at the May 2025 Embedded Vision Summit. In the dynamic world of machine learning, efficiently scaling solutions from research to production is crucial. In this presentation, Feyzkhanov explores the nuances of scaling machine…

Read More »

“Vision-language Models on the Edge,” a Presentation from Hugging Face

Cyril Zakka, Health Lead at Hugging Face, presents the “Vision-language Models on the Edge” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Zakka provides an overview of vision-language models (VLMs) and their deployment on edge devices using Hugging Face’s recently released SmolVLM as an example. He examines…
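
As a rough, self-contained illustration of the kind of deployment Zakka discusses, the sketch below captions one image with a small SmolVLM checkpoint via the Hugging Face transformers library. The model id, dtype and generation settings here are assumptions; consult the SmolVLM model card for the currently recommended usage.

```python
# Minimal sketch: caption one image with a small SmolVLM checkpoint using the
# Hugging Face transformers library. Checkpoint name and settings are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = Image.open("frame.jpg")  # any local image
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image briefly."}]}]

# Build the chat prompt, bundle it with the image, and generate a caption.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```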

Read More »

OwLite Meets Qualcomm Neural Network: Unlocking On-device AI Performance

This blog post was originally published at SqueezeBits’ website. It is reprinted here with the permission of SqueezeBits. At SqueezeBits, we have been empowering developers to efficiently deploy complex AI models while minimizing performance trade-offs with the OwLite toolkit. With OwLite v2.5, we’re excited to announce official support for Qualcomm Neural Network (QNN) through seamless integration…

Read More »

“Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration,” a Presentation from Google

Niyati Prajapati, ML and Generative AI Lead at Google, presents the “Vision LLMs in Multi-agent Collaborative Systems: Architecture and Integration” tutorial at the May 2025 Embedded Vision Summit. In this talk, Prajapati explores how vision LLMs can be used in multi-agent collaborative systems to enable new levels of capability and…

Read More »

Shifting AI Inference from the Cloud to Your Phone Can Reduce AI Costs

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Every AI query has a cost, and not just in dollars. A study shows that distributing AI workloads to your devices — such as your smartphone — can reduce costs and decrease water consumption. What you should know: Study…

Read More »

“Building Agentic Applications for the Edge,” a Presentation from GMAC Intelligence

Amit Mate, Founder and CEO of GMAC Intelligence, presents the “Building Agentic Applications for the Edge” tutorial at the May 2025 Embedded Vision Summit. Along with AI agents, the new generation of large language models, vision-language models and other large multimodal models are enabling powerful new capabilities that promise to…

Read More »

“Enabling Ego Vision Applications on Smart Eyewear Devices,” a Presentation from EssilorLuxottica

Francesca Palermo, Research Principal Investigator at EssilorLuxottica, presents the “Enabling Ego Vision Applications on Smart Eyewear Devices” tutorial at the May 2025 Embedded Vision Summit. Ego vision technology is revolutionizing the capabilities of smart eyewear, enabling applications that understand user actions, estimate human pose and provide spatial awareness through simultaneous…

Read More »

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
