
Software for Embedded Vision

3LC Demonstration of Debugging YOLO with 3LC’s Training-time Truth Detector

Paul Endresen, CEO of 3LC, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Endresen demonstrates how to uncover hidden treasures in the COCO dataset – like unlabeled forks and phantom objects – using his platform’s training-time introspection tools. In this demo, 3LC eavesdrops on a

Read More »

VeriSilicon Demonstration of the Open Se Cura Project

Chris Wang, VP of Multimedia Technologies and a member of the CTO office at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Wang demonstrates examples from the Open Se Cura Project, a joint effort between VeriSilicon and Google. The project showcases a scalable, power-efficient, and

Read More »

Synopsys Demonstration of SiEngine’s AD1000 ADAS Chip, Powered by Synopsys NPX6 NPU IP

Gordon Cooper, Principal Product Manager at Synopsys, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Cooper demonstrates the powerful SiEngine AD1000 ADAS chip, which features Synopsys NPX6 NPU IP, along with its robust toolchain, including a debugger, profiler, and simulator. Learn how the platform supports TensorFlow, ONNX, and

Read More »

Synopsys and Visionary.ai Demonstration of a Low-light Real-time AI Video Denoiser Tailored for NPX6 NPU IP

Gordon Cooper, Principal Product Manager at Synopsys, and David Jarmon, Senior VP of Worldwide Sales at Visionary.ai, demonstrate the companies’ latest edge AI and vision technologies and products in Synopsys’ booth at the 2025 Embedded Vision Summit. Specifically, Cooper and Jarmon demonstrate the future of low-light imaging with Visionary.ai’s cutting-edge real-time AI video denoiser. This

Read More »

SqueezeBits Demonstration of On-device LLM Inference, Running a 2.4B Parameter Model on the iPhone 14 Pro

Taesu Kim, CTO of SqueezeBits, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kim demonstrates a 2.4-billion-parameter large language model (LLM) running entirely on an iPhone 14 Pro without server connectivity. The device operates in airplane mode, highlighting on-device inference using a hybrid approach that

Read More »

Synthetic Data for Computer Vision

This article was originally published at Synetic AI’s website. It is reprinted here with the permission of Synetic AI. Synthetic data is changing how computer vision models are being trained. This page will explain synthetic data and how it compares to traditional approaches. After exploring the main methods of creating synthetic data, we’ll help you

Read More »

Sony Semiconductor Demonstration of Its Open-source Edge AI Stack with the IMX500 Intelligent Sensor

JF Joly, Product Manager for the AITRIOS platform at Sony Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Joly demonstrates Sony’s fully open-source software stack that enables the creation of AI-powered cameras using the IMX500 intelligent vision sensor. In this demo, Joly illustrates how

Read More »

Edge AI Today: Real-world Use Cases for Developers

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Developers today face increasing pressure to deliver intelligent features with tighter timelines, constrained resources, and heightened expectations for privacy, performance, and accuracy. This article highlights real-world Edge AI applications already in production and mainstream use—providing actionable inspiration

Read More »

Autonomous Driving Software and AI in Automotive 2026-2046: Technologies, Markets, Players

For more information, visit https://www.idtechex.com/en/research-report/autonomous-driving-software-and-ai-in-automotive/1111. The global autonomous driving software market in 2046 will be greater than US$130 billion. This report provides an analysis of the market for ADAS and autonomous driving software. Topic coverage includes business models, hardware, and software paradigms and trends developing in the market for ADAS and autonomous driving. IDTechEx

Read More »

Improving Synthetic Data Augmentation and Human Action Recognition with SynthDa

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Human action recognition is a capability in AI systems designed for safety-critical applications, such as surveillance, eldercare, and industrial monitoring. However, many real-world datasets are limited by data imbalance, privacy constraints, or insufficient coverage of rare but

Read More »

Video Self-distillation for Single-image Encoders: Learning Temporal Priors from Unlabeled Video

This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI. The work proposes a simple next-frame prediction task using unlabeled video to enhance single-image encoders, injecting 3D geometric and temporal priors into image-based models without requiring optical flow or object tracking. It outperforms state-of-the-art self-supervised methods like DoRA

Read More »
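To make the idea concrete, here is a minimal PyTorch sketch of next-frame feature prediction as self-distillation: a student encoder sees frame t and is trained to predict the features an EMA teacher extracts from a later frame. It assumes an encoder that maps an image batch to (B, feat_dim) features; the predictor head, loss, and momentum schedule are illustrative assumptions, not details taken from the Nota AI post.

```python
# Minimal sketch: next-frame self-distillation for a single-image encoder.
# Architecture and loss choices here are illustrative assumptions only.
import copy
import torch
import torch.nn.functional as F
from torch import nn

class NextFrameDistiller(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int = 768):
        super().__init__()
        self.student = encoder                 # trainable single-image encoder
        self.teacher = copy.deepcopy(encoder)  # EMA copy used as the target network
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        # small head that maps current-frame features to predicted future features
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.GELU(), nn.Linear(feat_dim, feat_dim)
        )

    @torch.no_grad()
    def update_teacher(self, momentum: float = 0.996):
        for t, s in zip(self.teacher.parameters(), self.student.parameters()):
            t.mul_(momentum).add_(s.detach(), alpha=1.0 - momentum)

    def forward(self, frame_t: torch.Tensor, frame_t_plus: torch.Tensor) -> torch.Tensor:
        pred = self.predictor(self.student(frame_t))      # predict future features
        with torch.no_grad():
            target = self.teacher(frame_t_plus)           # actual next-frame features
        # cosine loss between predicted and observed next-frame features
        return 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()
```

Note that no optical flow or object tracking enters this pipeline; the only supervision is the temporal ordering of unlabeled video frames.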

Comparing Synthetic Data Platforms: Synetic AI and NVIDIA Omniverse

This blog post was originally published at Synetic AI’s website. It is reprinted here with the permission of Synetic AI. This blog post compares Synetic AI and NVIDIA Omniverse for synthetic data generation, focusing on deployment-ready computer vision models. Whether you’re exploring simulation tools or evaluating dataset creation platforms, this guide outlines key differences and

Read More »

Andes Technology’s AutoOpTune Applies Genetic Algorithms to Accelerate RISC-V Software Optimization

Andes AutoOpTune™ v1.0 accelerates software development by giving developers the ability to automatically explore and choose compiler optimizations that achieve their performance and code-size targets. Hsinchu, Taiwan – July 10, 2025 – Andes Technology, a leading provider of high-efficiency, low-power 32/64-bit RISC-V processor cores and a Founding Premier member of RISC-V International, today

Read More »
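For readers unfamiliar with the general technique, the toy Python sketch below shows what a genetic search over compiler flags looks like in principle: candidate flag sets are scored, the fitter half survives, and crossover plus mutation produce the next generation. The flag list and fitness function are placeholders, not AutoOpTune’s actual options or scoring.

```python
# Toy genetic search over compiler optimization flags. The flags and the
# fitness function are illustrative placeholders; a real flow would invoke
# the compiler, run benchmarks, and measure code size.
import random

FLAGS = ["-O2", "-O3", "-Os", "-flto", "-funroll-loops", "-fomit-frame-pointer"]

def fitness(individual):
    """Lower is better. Stand-in score: pretend '-Os' shrinks code and each
    extra flag adds size. Replace with compile-and-measure in practice."""
    flags = [f for f, on in zip(FLAGS, individual) if on]
    return len(flags) - (2 if "-Os" in flags else 0)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in ind]

def genetic_search(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                       # fittest (lowest score) first
        parents = pop[: pop_size // 2]              # keep the better half
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = min(pop, key=fitness)
    return [f for f, on in zip(FLAGS, best) if on]

print(genetic_search())                             # likely converges to ['-Os']
```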

Stereo ace for Precise 3D Images Even with Challenging Surfaces

The new high-resolution Basler Stereo ace complements Basler’s 3D product range with an easy-to-integrate series of active stereo cameras that are particularly suitable for logistics and factory automation. Ahrensburg, July 10, 2025 – Basler AG introduces the new active 3D stereo camera series, the Basler Stereo ace, consisting of six camera models, and thus strengthens its position as

Read More »

Optimizing Your AI Model for the Edge

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Key takeaways: We talk about five techniques—compiling to machine code, quantization, weight pruning, domain-specific fine-tuning, and training small models with larger models—that can be used to improve on-device AI model performance. Whether you think edge AI is

Read More »
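As a taste of one of those five techniques, the sketch below applies post-training dynamic quantization to a toy PyTorch model, storing linear-layer weights as int8 while keeping the same inference interface. The model is a stand-in for illustration; Qualcomm’s own tooling and target hardware are not shown.

```python
# Post-training dynamic quantization in PyTorch: Linear weights become int8,
# activations are quantized on the fly at inference. Toy model for illustration.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)   # torch.Size([1, 10]) -- same interface, smaller weights
```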

Cadence Demonstration of a SWIN Shifted Window Vision Transformer on a Tensilica Vision DSP-based Platform

Amol Borkar, Director of Product Marketing for Cadence Tensilica DSPs, presents the company’s latest edge AI and vision technologies at the 2025 Embedded Vision Summit. Specifically, Borkar demonstrates the use of the Tensilica Vision 230 (Q7) DSP for advanced AI and transformer applications. The Vision 230 DSP is a highly efficient, configurable, and extensible processor

Read More »
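For context on what a shifted-window (Swin) transformer does, the generic PyTorch sketch below shows the window partitioning step and the cyclic shift applied in alternating blocks so information can flow across window boundaries. It illustrates the algorithm only and is unrelated to Cadence’s DSP implementation.

```python
# Generic illustration of Swin-style window partitioning and cyclic shift.
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """(B, H, W, C) -> (num_windows * B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

x = torch.randn(1, 56, 56, 96)                    # typical early-stage feature map
windows = window_partition(x, window_size=7)      # 64 windows of 7x7 tokens each
# Alternate blocks cyclically shift the map before partitioning, so the new
# windows straddle the previous window boundaries.
shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))
shifted_windows = window_partition(shifted, window_size=7)
print(windows.shape, shifted_windows.shape)       # both (64, 7, 7, 96)
```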

