Algorithms & Models

Optimizing LLMs for Performance and Accuracy with Post-training Quantization

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Quantization is a core tool for developers aiming to improve inference performance with minimal overhead. It delivers significant gains in latency, throughput, and memory efficiency by reducing model precision in a controlled way—without requiring retraining. Today, most models […]
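The core idea described above can be illustrated with a minimal, hypothetical NumPy sketch of symmetric per-tensor INT8 post-training quantization. This is not NVIDIA's tooling or any particular library's API, just the basic arithmetic: pick a scale from the weight tensor's dynamic range, round to 8-bit integers, and dequantize to see the approximation error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integer representation."""
    return q.astype(np.float32) * scale

# Quantize a random weight tensor and bound the round-trip error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.max(np.abs(w - w_hat)))
assert max_err <= scale / 2 + 1e-6  # rounding error is at most half a step
```

Production post-training quantization flows add calibration data, per-channel scales, and activation quantization on top of this basic scheme, but the precision/accuracy trade-off they manage is the one shown here.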


Alif Semiconductor Demonstration of Face Detection and Driver Monitoring On a Battery, at the Edge

Alexandra Kazerounian, Senior Product Marketing Manager at Alif Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kazerounian demonstrates how AI/ML workloads can run directly on her company’s ultra-low-power Ensemble and Balletto 32-bit microcontrollers. Watch as the AI/ML AppKit runs real-time face detection using an […]


Inuitive Demonstration of On-camera SLAM, Depth and AI Using a NU4X00-based Sensor Module

Shay Harel, Field Application Engineer at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Harel demonstrates one of several examples his company presented at the Summit, highlighting the capabilities of its latest vision-on-chip technology. In this demo, the NU4X00 processor performs depth sensing, object […]


Nota AI Demonstration of Nota Vision Agent, Next-generation Video Monitoring at the Edge

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Kim demonstrates Nota Vision Agent—a next-generation video monitoring solution powered by Vision Language Models (VLMs). The solution delivers real-time analytics and intelligent alerts across critical domains including industrial safety, […]


Nota AI Demonstration of NetsPresso Optimization Studio, Streamlined with Visual Insights

Tairen Piao, Research Engineer at Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Piao demonstrates NetsPresso Optimization Studio, the latest enhancement to Nota AI’s model optimization platform, NetsPresso. This intuitive interface simplifies the AI optimization process with advanced layer-wise analysis and automated quantization.


How to Run Coding Assistants for Free on RTX AI PCs and Workstations

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. AI-powered copilots deliver real-time assistance for everything from academic projects to production code — and are optimized for RTX AI PCs. Coding assistants or copilots — AI-powered assistants that can suggest, explain and debug code — are […]


Microchip Technology Demonstration of Real-time Object and Facial Recognition with Edge AI Platforms

Swapna Guramani, Applications Engineer for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Guramani demonstrates her company’s latest AI/ML capabilities in action: real-time object recognition using the SAMA7G54 32-bit MPU running Edge Impulse’s FOMO model, and facial recognition powered by TensorFlow Lite’s Mobile […]


Is End-to-end the Endgame for Level 4 Autonomy?

Examples of modular, end-to-end, and hybrid software architectures deployed in autonomous vehicles. Autonomous vehicle technology has evolved significantly over the past year. The two market leaders, Waymo and Apollo Go, both have fleets of over 1,000 vehicles and operate in multiple cities, and a mix of large companies such as NVIDIA and Aptiv, OEMs such […]


Microchip Technology Demonstration of AI-powered Face ID on the PolarFire SoC FPGA Using the VectorBlox SDK

Avery Williams, Channel Marketing Manager for Microchip Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Williams demonstrates ultra-efficient AI-powered facial recognition on Microchip’s PolarFire SoC FPGA using the VectorBlox Accelerator SDK. Pre-trained neural networks are quantized to INT8 and compiled to run directly on […]


How to Think About Large Language Models on the Edge

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. ChatGPT was released to the public on November 30th, 2022, and the world – at least, the connected world – has not been the same since. Surprisingly, almost three years later, despite massive adoption, we do not […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411