Brian Dipert

Edge AI and Vision Insights: May 28, 2025

LETTER FROM THE EDITOR

Dear Colleague,

Last week's Embedded Vision Summit was a resounding success, with more than 1,200 attendees learning from more than 85 presentations, visiting more than 65 exhibitors and hundreds of demos, and making valuable connections. 2025 Embedded Vision Summit presentation videos and slide decks will become available on the Embedded […]

How Embedded Vision Is Shaping the Next Generation of Autonomous Mobile Robots

This blog post was originally published at e-con Systems' website. It is reprinted here with the permission of e-con Systems. Autonomous Mobile Robots (AMRs) are being deployed across industries, from warehouses and hospitals to logistics and retail, thanks to embedded vision systems. See how cameras are integrated into AMRs so that they can quickly and […]

AI Blueprint for Video Search and Summarization Now Available to Deploy Video Analytics AI Agents Across Industries

This blog post was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA. The age of video analytics AI agents is here. Video is one of the defining features of the modern digital landscape, accounting for over 50% of all global data traffic. Dominant in media and increasingly important for […]

Nota AI Demonstrates On-device AI Breakthrough at Embedded Vision Summit 2025 in Collaboration with Qualcomm AI Hub

NetsPresso® and Qualcomm AI Hub: Strategic Integration Streamlines Edge AI Development
Generative AI solutions drive global expansion momentum ahead of IPO listing

SEOUL, South Korea, May 26, 2025 /PRNewswire/ — Nota AI, a global leader in AI optimization, showcased its latest edge AI innovations alongside Qualcomm Technologies, Inc. at the Embedded Vision Summit 2025, held […]

Effortless Edge Deployment of AI Models with Digica’s AI SDK (Featuring ExecuTorch)

This blog post was originally published at Digica's website. It is reprinted here with the permission of Digica. Deploying AI models on mobile and embedded devices is a challenge that goes far beyond just converting a trained model. While frameworks like PyTorch offer a streamlined way to develop deep learning models, efficiently deploying them on […]

NAMUGA Unveils Stella-2: Compact, Solid-state Lidar Powered by Lumotive, at Embedded Vision Summit

REDMOND, Wash., May 21, 2025 /PRNewswire/ — Lumotive, a leader in programmable optical semiconductor technology, today announced that NAMUGA Co., Ltd., a leading manufacturer of advanced camera modules, will debut its first solid-state 3D lidar sensor—Stella-2, powered by Lumotive's Light Control Metasurface (LCM) technology—at the upcoming Embedded Vision Summit in California. Stella-2 brings software-defined intelligence, […]

e-con Systems Showcased New Lattice FPGA-based Holoscan Camera Solutions at Computex Taipei and Embedded Vision Summit 2025

California & Chennai (May 22, 2025): e-con Systems, a leading provider of camera solutions for embedded vision applications, successfully showcased its new Holoscan camera solution based on the low-power Lattice FPGA technology for NVIDIA® platforms at two major industry events in 2025: Computex Taipei and the Embedded Vision Summit (EVS) in Santa Clara, CA, […]

Key Drone Terminology: A Quick Guide for Beginners

This blog post was originally published at Namuga Vision Connectivity's website. It is reprinted here with the permission of Namuga Vision Connectivity. As drone technology becomes more accessible and widespread, it's important to get familiar with the basic terms that define how drones work and how we control them. Whether you're a hobbyist, a content […]

Build & Deploy AI Vision Models for Windows on Snapdragon: OpenCV Live! Podcast

This blog post was originally published at EyePop.ai’s website. It is reprinted here with the permission of EyePop.ai. Want to deploy AI models to the edge without the cloud? On this special LIVE podcast hosted by OpenCV, Andy Ballester and Blythe Towal show how EyePop.ai is enabling real-time, offline inference with Snapdragon NPUs from Qualcomm.

Upcoming Webinar Explores Ultra-high Speed Camera Interfaces

On May 28, 2025 at 10:00 am CET and 10:00 am PT (1:00 pm ET), Alliance Member company FRAMOS will deliver two sessions of the free webinar "Ultra-high Speed Image Sensors with SLVS-EC." From the event page: High-resolution image sensors with high frame rates require a powerful interface for data transmission. That's why Sony launched […]

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411