Algorithms & Models

Texas Instruments Demonstration of Edge AI Inference and Video Streaming Over Wi-Fi

The demonstration shows how to use Texas Instruments’ AM6xA processors to capture live video, perform machine learning inference and stream video over Wi-Fi. The video is encoded with H.264/H.265 and streamed via UDP over Wi-Fi using the CC33xx. On the receiver side, the video is decoded and displayed on a screen. The receiver side could be a […]
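
For readers who want a feel for the capture, encode and stream pattern the demo describes, here is a minimal Python sketch using OpenCV with a GStreamer backend. It is not TI's demo code: the pipeline elements, receiver address, camera index and resolution are illustrative assumptions, and the inference step is only marked as a placeholder.

```python
# Minimal sketch of capture -> (inference) -> H.264 encode -> UDP stream.
# Assumes OpenCV built with GStreamer support; host, port and camera index are placeholders.
import cv2

WIDTH, HEIGHT, FPS = 1280, 720, 30
RECEIVER_IP, PORT = "192.168.1.50", 5000   # hypothetical receiver on the Wi-Fi network

pipeline = (
    "appsrc ! videoconvert ! "
    "x264enc tune=zerolatency bitrate=4000 speed-preset=ultrafast ! "
    f"rtph264pay config-interval=1 ! udpsink host={RECEIVER_IP} port={PORT}"
)

cap = cv2.VideoCapture(0)                                          # live camera capture
out = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, FPS, (WIDTH, HEIGHT))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (WIDTH, HEIGHT))
    # An edge-AI inference step on `frame` would slot in here in the TI demo.
    out.write(frame)                                               # encoded and sent over UDP

cap.release()
out.release()
```

On the receiving side, a matching GStreamer pipeline along the lines of udpsrc ! rtph264depay ! avdec_h264 ! autovideosink would decode and display the stream.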


“Introduction to Data Types for AI: Trade-offs and Trends,” a Presentation from Synopsys

Joep Boonstra, Synopsys Scientist, presents the “Introduction to Data Types for AI: Trade-offs and Trends” tutorial at the May 2025 Embedded Vision Summit. The increasing complexity of AI models has led to a growing need for efficient data storage and processing. One critical way to gain efficiency is…
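
As a concrete illustration of the trade-offs the talk covers, the short NumPy sketch below compares FP32 against FP16 and symmetric INT8 for the same values; the tensor and scale factor are made up for the example.

```python
import numpy as np

x = np.float32(3.14159265)
# Fewer mantissa bits (FP16) -> larger rounding error for the same value.
print("fp32:", x, " fp16:", np.float16(x), " abs error:", abs(float(np.float16(x)) - float(x)))

# Symmetric INT8 quantization: map the tensor's range onto [-127, 127] with a single scale.
w = np.random.randn(1000).astype(np.float32)
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_int8.astype(np.float32) * scale

print("int8 mean abs error:", np.mean(np.abs(w - w_deq)))
print("storage: fp32", w.nbytes, "bytes vs int8", w_int8.nbytes, "bytes")
```

The 4x storage saving is the appeal of narrow data types; how much accuracy each format gives up is the trade-off the presentation examines.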


Machine Vision Defect Detection: Edge AI Processing with Texas Instruments AM6xA Arm-based Processors

Texas Instruments’ portfolio of AM6xA Arm-based processors is designed to advance intelligence at the edge with high-resolution camera support, an integrated image signal processor and a deep learning accelerator. This video demonstrates using the AM62A to run a vision-based artificial intelligence model for defect detection in manufacturing applications. Watch the model test the produced units as […]
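
The sort of inference loop behind such a demo can be sketched generically; the snippet below uses ONNX Runtime with a hypothetical binary good/defective classifier and placeholder file names, rather than the AM62A's own edge-AI software stack and accelerator.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Hypothetical classifier exported to ONNX: class 0 = "good", class 1 = "defective".
session = ort.InferenceSession("defect_classifier.onnx")
input_name = session.get_inputs()[0].name

img = cv2.imread("unit_0001.jpg")                                   # image of the unit under test
img = cv2.cvtColor(cv2.resize(img, (224, 224)), cv2.COLOR_BGR2RGB)
blob = (img.astype(np.float32) / 255.0).transpose(2, 0, 1)[None]    # NCHW, scaled to [0, 1]

scores = session.run(None, {input_name: blob})[0][0]
verdict = "defective" if scores.argmax() == 1 else "good"
print(f"unit_0001.jpg -> {verdict} (scores: {scores})")
```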


“Introduction to Radar and Its Use for Machine Perception,” a Presentation from Cadence

Amol Borkar, Product Marketing Director, and Vencatesh Subramanian, Design Engineering Architect, both of Cadence, co-present the “Introduction to Radar and Its Use for Machine Perception” tutorial at the May 2025 Embedded Vision Summit. Radar is a proven technology with a long history in various market segments and continues to play an increasingly important role in […]
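
To ground the topic, here is a toy NumPy example of the most basic radar processing step: recovering a target's range from an FMCW beat signal with an FFT. The chirp parameters and target range are invented for the illustration and do not come from the tutorial.

```python
import numpy as np

c = 3e8                          # speed of light (m/s)
B, T_chirp = 1e9, 50e-6          # assumed chirp bandwidth and duration
S = B / T_chirp                  # chirp slope (Hz/s)
fs, N = 12.8e6, 512              # ADC sample rate and samples per chirp
R_true = 12.0                    # simulated target range (m)

t = np.arange(N) / fs
f_beat = 2 * R_true * S / c                          # beat frequency produced by the target
signal = np.cos(2 * np.pi * f_beat * t) + 0.1 * np.random.randn(N)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(N)))
k = np.argmax(spectrum[1:]) + 1                      # strongest non-DC bin
R_est = (k * fs / N) * c / (2 * S)                   # convert beat frequency back to range
print(f"range resolution ~ {c / (2 * B):.3f} m, estimated range {R_est:.2f} m")
```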


Optimizing LLMs for Performance and Accuracy with Post-training Quantization

This article was originally published on NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Quantization is a core tool for developers aiming to improve inference performance with minimal overhead. It delivers significant gains in latency, throughput, and memory efficiency by reducing model precision in a controlled way, without requiring retraining. Today, most models […]
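
As a toy, framework-free illustration of the core idea (reducing weight precision without retraining), the sketch below applies per-channel weight-only INT8 quantization to one synthetic linear layer and measures the output error. It is not the workflow the article describes; the shapes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 256)).astype(np.float32)       # a batch of activations
W = rng.standard_normal((256, 256)).astype(np.float32)     # FP32 weights of one linear layer

# Per-output-channel INT8 quantization: one scale per output column of W.
scales = np.abs(W).max(axis=0) / 127.0
W_int8 = np.clip(np.round(W / scales), -127, 127).astype(np.int8)

y_fp32 = x @ W                                              # reference FP32 output
y_int8 = (x @ W_int8.astype(np.float32)) * scales           # dequantize on the output side
rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative output error after weight-only INT8 PTQ: {rel_err:.3%}")   # small, no retraining
```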


Alif Semiconductor Demonstration of Face Detection and Driver Monitoring On a Battery, at the Edge

Alexandra Kazerounian, Senior Product Marketing Manager at Alif Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kazerounian demonstrates how AI/ML workloads can run directly on her company’s ultra-low-power Ensemble and Balletto 32-bit microcontrollers. Watch as the AI/ML AppKit runs real-time face detection using an […]
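
As a desktop-class stand-in for the face-detection workload shown running on the Ensemble and Balletto microcontrollers, the sketch below uses OpenCV's stock Haar-cascade detector on a webcam feed; it illustrates the task, not Alif's on-device pipeline.

```python
import cv2

# Standard Haar-cascade face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)                    # default webcam (camera index is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```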


Inuitive Demonstration of On-camera SLAM, Depth and AI Using a NU4X00-based Sensor Module

Shay Harel, Field Application Engineer at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Harel demonstrates one of several examples his company presented at the Summit, highlighting the capabilities of its latest vision-on-chip technology. In this demo, the NU4X00 processor performs depth sensing, object […]
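
The depth-sensing part of such a demo can be illustrated with classic stereo block matching; the sketch below uses OpenCV's StereoBM on a rectified image pair, with placeholder file names and assumed calibration values, rather than the NU4X00's on-chip depth engine.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise SystemExit("provide a rectified stereo pair as left.png / right.png")

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # StereoBM returns fixed-point

# Depth from disparity: Z = f * B / d (focal length in pixels, baseline in metres).
FOCAL_PX, BASELINE_M = 700.0, 0.06                                  # assumed calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
print("median depth of valid pixels: %.2f m" % np.median(depth[valid]))
```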


Nota AI Demonstration of Nota Vision Agent, Next-generation Video Monitoring at the Edge

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kim demonstrates Nota Vision Agent—a next-generation video monitoring solution powered by Vision Language Models (VLMs). The solution delivers real-time analytics and intelligent alerts across critical domains including industrial safety, […]
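
A rough sketch of the VLM-driven monitoring pattern is shown below: grab a frame, ask a vision-language model a safety question, and raise an alert on the answer. The camera URL, local endpoint and model name are assumptions (any OpenAI-compatible VLM server would do), and this is not Nota AI's implementation.

```python
import base64
import cv2
from openai import OpenAI

cap = cv2.VideoCapture("rtsp://camera.local/stream")        # hypothetical camera URL
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the camera")

jpg = cv2.imencode(".jpg", frame)[1]
image_b64 = base64.b64encode(jpg.tobytes()).decode()

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")   # assumed local VLM server
resp = client.chat.completions.create(
    model="local-vlm",                                       # hypothetical model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Is anyone in this frame missing a hard hat? Answer YES or NO, then a short reason."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
answer = resp.choices[0].message.content or ""
if answer.strip().upper().startswith("YES"):
    print("ALERT:", answer)                                  # intelligent alert on a safety finding
```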


Nota AI Demonstration of NetsPresso Optimization Studio, Streamlined with Visual Insights

Tairen Piao, Research Engineer at Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Piao demonstrates NetsPresso Optimization Studio, the latest enhancement to Nota AI’s model optimization platform, NetsPresso. This intuitive interface simplifies the AI optimization process with advanced layer-wise analysis and automated quantization.
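
Layer-wise analysis of quantization sensitivity can be approximated outside any particular tool; the sketch below quantizes a few synthetic weight tensors to INT8 and tabulates the per-layer error, the kind of signal an optimization studio visualizes. The layer names and weight distributions here are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def int8_error(w):
    """Quantize a weight tensor to symmetric INT8 and return the mean absolute error."""
    scale = np.abs(w).max() / 127.0
    w_q = np.clip(np.round(w / scale), -127, 127) * scale
    return float(np.mean(np.abs(w - w_q)))

# Stand-in "model": a few layers with different weight distributions.
layers = {
    "conv1": rng.standard_normal((64, 3, 3, 3)).astype(np.float32),
    "conv2": (0.1 * rng.standard_normal((128, 64, 3, 3))).astype(np.float32),
    "fc": (rng.standard_normal((1000, 512))
           + 5.0 * (rng.random((1000, 512)) < 0.001)).astype(np.float32),   # outliers widen the range
}

print(f"{'layer':<8}{'max |w|':>10}{'INT8 MAE':>12}")
for name, w in layers.items():
    print(f"{name:<8}{np.abs(w).max():>10.3f}{int8_error(w):>12.5f}")
```

Layers whose range is stretched by a few outliers quantize worst; flagging them is exactly what a layer-wise view is for.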


How to Run Coding Assistants for Free on RTX AI PCs and Workstations

This blog post was originally published on NVIDIA’s website. It is reprinted here with the permission of NVIDIA. AI-powered copilots deliver real-time assistance for everything from academic projects to production code, and are optimized for RTX AI PCs. Coding assistants, or copilots (AI-powered assistants that can suggest, explain and debug code), are […]
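
Running such an assistant locally typically means pointing a standard client at a local, OpenAI-compatible endpoint; the sketch below shows that pattern, with the port and model name as assumptions that depend on which local server (for example Ollama or llama.cpp) is installed.

```python
from openai import OpenAI

# Local OpenAI-compatible server (port and model name depend on your setup).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="codellama",                                       # hypothetical locally installed model
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Explain this Python line: squares = [i * i for i in range(10)]"},
    ],
)
print(resp.choices[0].message.content)
```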


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
