Software

Software for Embedded Vision

Qualcomm and BMW Group Unveil Groundbreaking Automated Driving System with Jointly Developed Software Stack

Highlights: AI-enabled Snapdragon Ride Pilot Automated Driving System, powered by Snapdragon Ride system-on-chips and a new jointly developed automated driving software stack, debuts in the all-new BMW iX3 at IAA Mobility 2025. System is validated in 60 countries worldwide and is targeted to be available in more than 100 countries by 2026. Scalable platform enabling

Building a Versatile Vision Data Simulation Platform: Key Components and Architecture

This blog post was originally published at Geisel Software’s Symage website. It is reprinted here with the permission of Geisel Software. Are Real-World Data Limitations Holding Back Your AI Models? KEY TAKEAWAYS: Versatile Data Simulation Powers Industry Relevance. A truly effective vision data simulation platform leverages modular architecture and configurable domain parameters to adapt seamlessly
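
To make “configurable domain parameters” concrete, here is a minimal, hypothetical Python sketch of how such a configuration and a scene sampler might be structured; the parameter names, ranges and the sample_scene() helper are illustrative assumptions, not Symage’s actual schema.

```python
# Hypothetical sketch of configurable domain parameters for a modular
# vision-data simulation pipeline. Names and ranges are illustrative only.
import random
from dataclasses import dataclass, field


@dataclass
class DomainConfig:
    lighting_lux: tuple = (100.0, 2000.0)   # min/max scene illumination
    camera_height_m: tuple = (0.5, 2.0)     # camera placement range, meters
    object_classes: list = field(default_factory=lambda: ["box", "pallet"])
    occlusion_prob: float = 0.3             # chance an object is partly hidden


def sample_scene(cfg: DomainConfig, rng: random.Random) -> dict:
    """Draw one randomized scene description from the configured domain."""
    return {
        "lighting_lux": rng.uniform(*cfg.lighting_lux),
        "camera_height_m": rng.uniform(*cfg.camera_height_m),
        "objects": [
            {"class": c, "occluded": rng.random() < cfg.occlusion_prob}
            for c in cfg.object_classes
        ],
    }


if __name__ == "__main__":
    # Each call yields a new scene spec that a renderer could turn into images.
    print(sample_scene(DomainConfig(), random.Random(42)))
```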

“Taking Computer Vision Products from Prototype to Robust Product,” an Interview with Blue River Technology

Chris Padwick, Machine Learning Engineer at Blue River Technology, talks with Mark Jamtgaard, Director of Technology at RetailNext, for the “Taking Computer Vision Products from Prototype to Robust Product” interview at the May 2025 Embedded Vision Summit. When developing computer vision-based products, getting from a proof of concept to a…

GenAI Firsts: Redefining What’s Possible At the Edge

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How our pioneering research and leading proof-of-concepts are paving the way for generative AI to scale What you should know: Qualcomm AI Research is pioneering research and inventing novel techniques to deliver efficient, high-performance GenAI solutions. Our

“Improving Worksite Safety with AI-powered Perception,” a Presentation from Arcure

Sabri Bayoudh, Chief Innovation Officer at Arcure, presents the “Improving Worksite Safety with AI-powered Perception” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Bayoudh explores how embedded vision is being used in industrial applications, including vehicle safety and production. He highlights some of the challenging requirements of…

Software-defined Vehicles: Built For Users, or For the Industry?

SDV Level Chart: IDTechEx defines SDV performance using six levels. Most consumers still have limited awareness of the deeper value behind “software-defined” capabilities. The concept of the Software-Defined Vehicle (SDV) has rapidly emerged as a transformative trend reshaping the automotive industry. Yet, despite widespread use of the term, there remains significant confusion around its core

How to Support Multi-planar Format in Python V4L2 Applications on i.MX8M Plus

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. The default Python V4L2 library module lacks critical definitions related to the V4L2 multi-planar capture method. Learn how to implement the basic definitions missing from the default library module and capture images in the V4L2 multi-planar format. Python
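
For a sense of what implementing the missing definitions involves, the sketch below declares the multi-planar structures from the Linux videodev2.h header with ctypes so they can be passed to V4L2 ioctls from Python. This is a minimal illustration, not e-con Systems’ code; the anonymous ycbcr_enc/hsv_enc union is collapsed to a single byte, and the field layout should be verified against the kernel headers in your i.MX8M Plus BSP before use.

```python
# Sketch of the multi-planar ctypes definitions that the stock Python v4l2
# module does not provide. Layout follows Linux videodev2.h; verify against
# your target's kernel headers before relying on it.
import ctypes

VIDEO_MAX_PLANES = 8
V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE = 9  # enum v4l2_buf_type


class v4l2_plane_m(ctypes.Union):
    _fields_ = [
        ("mem_offset", ctypes.c_uint32),
        ("userptr", ctypes.c_ulong),
        ("fd", ctypes.c_int32),
    ]


class v4l2_plane(ctypes.Structure):
    _fields_ = [
        ("bytesused", ctypes.c_uint32),
        ("length", ctypes.c_uint32),
        ("m", v4l2_plane_m),
        ("data_offset", ctypes.c_uint32),
        ("reserved", ctypes.c_uint32 * 11),
    ]


class v4l2_plane_pix_format(ctypes.Structure):
    _fields_ = [
        ("sizeimage", ctypes.c_uint32),
        ("bytesperline", ctypes.c_uint32),
        ("reserved", ctypes.c_uint16 * 6),
    ]


class v4l2_pix_format_mplane(ctypes.Structure):
    _fields_ = [
        ("width", ctypes.c_uint32),
        ("height", ctypes.c_uint32),
        ("pixelformat", ctypes.c_uint32),
        ("field", ctypes.c_uint32),
        ("colorspace", ctypes.c_uint32),
        ("plane_fmt", v4l2_plane_pix_format * VIDEO_MAX_PLANES),
        ("num_planes", ctypes.c_uint8),
        ("flags", ctypes.c_uint8),
        ("ycbcr_enc", ctypes.c_uint8),   # shares this byte with hsv_enc in C
        ("quantization", ctypes.c_uint8),
        ("xfer_func", ctypes.c_uint8),
        ("reserved", ctypes.c_uint8 * 7),
    ]
```

With these in place, a v4l2_format whose type field is set to V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE can carry a v4l2_pix_format_mplane payload to VIDIOC_S_FMT, which is the usual starting point for multi-planar capture.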

“Introduction to Designing with AI Agents,” a Presentation from Amazon Web Services

Frantz Lohier, Senior Worldwide Specialist for Advanced Computing, AI and Robotics at Amazon Web Services, presents the “Introduction to Designing with AI Agents” tutorial at the May 2025 Embedded Vision Summit. Artificial intelligence agents are components in an AI system that can perform tasks autonomously, making decisions and taking actions…
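
For readers new to the topic, the following is a generic sketch of the observe/decide/act loop that most agent designs share. It is not taken from Lohier’s presentation; the Tool class and plan() policy are hypothetical placeholders, and a real agent would typically delegate planning to an LLM or a dedicated planner.

```python
# Generic, illustrative agent loop: observe, decide on an action, act with a
# tool, and repeat until the (placeholder) policy decides the task is done.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    run: Callable[[str], str]  # a capability the agent may invoke


def plan(observation: str, tools: Dict[str, Tool]) -> str:
    # Placeholder policy: a real agent would query an LLM or planner here.
    return "search" if "unknown" in observation else "done"


def run_agent(goal: str, tools: Dict[str, Tool], max_steps: int = 5) -> str:
    observation = f"goal: {goal}, status: unknown"
    for _ in range(max_steps):
        action = plan(observation, tools)       # decide
        if action == "done":
            break
        observation = tools[action].run(goal)   # act, then observe the result
    return observation


if __name__ == "__main__":
    tools = {"search": Tool("search", lambda q: f"found notes about {q}")}
    print(run_agent("camera calibration", tools))
```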

Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. AI and OpenUSD accelerate safe, scalable autonomous vehicle development by enabling simulation-first approaches. Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their

“Integrating Cameras with the Robot Operating System (ROS),” a Presentation from Amazon Lab126

Karthik Poduval, Principal Software Development Engineer at Amazon Lab126, presents the “Integrating Cameras with the Robot Operating System (ROS)” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Poduval explores the integration of cameras within the Robot Operating System (ROS) for robust embedded vision applications. He delves into…
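
To make the topic concrete, here is a minimal ROS 2 (rclpy) node that subscribes to a camera image topic and converts each frame to an OpenCV array with cv_bridge. It is a generic sketch rather than code from Poduval’s talk, and the /camera/image_raw topic name is an assumption that depends on the camera driver in use.

```python
# Minimal ROS 2 camera subscriber: receives sensor_msgs/Image messages and
# converts them to OpenCV BGR frames via cv_bridge.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class CameraSubscriber(Node):
    def __init__(self):
        super().__init__("camera_subscriber")
        self.bridge = CvBridge()
        # Topic name is an assumption; check `ros2 topic list` for your driver.
        self.sub = self.create_subscription(
            Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        self.get_logger().info(f"Received {frame.shape[1]}x{frame.shape[0]} frame")


def main():
    rclpy.init()
    node = CameraSubscriber()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```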

LLiMa: Real-time Edge Generative AI Under 10W, Built for You

This blog post was originally published at SiMa.ai’s website. It is reprinted here with the permission of SiMa.ai. LLiMa represents a paradigm shift in physical AI deployment that fundamentally changes how enterprises approach GenAI integration, enabling real Physical AI. While competitors typically offer pre-optimized models that were manually tuned for specific hardware configurations, LLiMa takes

“Using Computer Vision for Early Detection of Cognitive Decline via Sleep-wake Data,” a Presentation from AI Tensors

Ravi Kota, CEO of AI Tensors, presents the “Using Computer Vision for Early Detection of Cognitive Decline via Sleep-wake Data” tutorial at the May 2025 Embedded Vision Summit. AITCare-Vision predicts cognitive decline by analyzing sleep-wake disorders data in older adults. Using computer vision and motion sensors coupled with AI algorithms,…

How Synthetic Datasets are Revolutionizing AI Training Across Industries

This blog post was originally published at Geisel Software’s Symage website. It is reprinted here with the permission of Geisel Software. Synthetic data is becoming increasingly integral to AI and analytics, with many projects now incorporating these datasets. While synthetic data generated using generative AI techniques offers valuable insights, simulation-based synthetic datasets enhance this process

“AI-powered Scouting: Democratizing Talent Discovery in Sports,” a Presentation from ai.io

Jonathan Lee, Chief Product Officer at ai.io, presents the “AI-powered Scouting: Democratizing Talent Discovery in Sports” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Lee shares his experience using AI and computer vision to revolutionize talent identification in sports. By developing aiScout, a platform that enables athletes…

Capgemini Leverages Qualcomm Dragonwing Portfolio to Enhance Railway Monitoring with Edge AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. AI device powered by Qualcomm Dragonwing boosts productivity and reduces cloud dependence in Capgemini’s monitoring application for grade crossings Capgemini moved from their previous hardware solution to an edge AI device powered by the Qualcomm® Dragonwing™ QCS6490

“Vision-based Aircraft Functions for Autonomous Flight Systems,” a Presentation from Acubed (an Airbus Innovation Center)

Arne Stoschek, Vice President of AI and Autonomy at Acubed (an Airbus innovation center), presents the “Vision-based Aircraft Functions for Autonomous Flight Systems” tutorial at the May 2025 Embedded Vision Summit. At Acubed, an Airbus innovation center, the mission is to accelerate AI and autonomy in aerospace. Stoschek gives an…

“Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?” Expert Panel at the May 2025 Embedded Vision Summit. Other panelists include Chen Wu, Director and Head of Perception at Waymo, Vikas Bhardwaj, Director of AI in the Reality…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
