Featured

On-Device LLMs in 2026: What Changed, What Matters, What’s Next

In On-Device LLMs: State of the Union, 2026, Vikas Chandra and Raghuraman Krishnamoorthi explain why running LLMs on phones has moved from novelty to practical engineering, and why the biggest breakthroughs came not from faster chips but from rethinking how models are built, trained, compressed, and deployed. Why run LLMs locally? Four reasons: latency (cloud […]


Free Webinar Highlights Compelling Advantages of FPGAs

On March 17, 2026 at 9 am PT (noon ET), Efinix’s Mark Oliver, VP of Marketing and Business Development, will present the free one-hour webinar “Why Your Next AI Accelerator Should Be an FPGA,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page: Edge AI system developers often


When DRAM Becomes the Bottleneck (Again): What the 2026 Memory Squeeze Means for Edge AI

A funny thing is happening in the edge AI world: some of the most important product decisions you’ll make this year won’t be about TOPS, sensor resolution, or which transformer variant to deploy. They’ll be about memory—how much you can get, how much it costs, and whether you can ship the exact part you designed


Top 3 System Patterns Gemini 3 Pro Vision Unlocks for Edge Teams

For those who missed it in the holiday haze, Google’s Gemini 3 Pro launched on December 5th, but the push on vision isn’t just “better VQA.” Google frames it as a jump from recognition to visual and spatial reasoning, spanning documents, spatial tasks, screens, and video. If you’re building edge AI products, that matters less as


“Sensors and Compute Needs and Challenges for Humanoid Robots,” a Presentation from Agility Robotics

Vlad Branzoi, Perception Sensors Team Lead at Agility Robotics, presents the “Sensors and Compute Needs and Challenges for Humanoid Robots” tutorial at the September 2025 Edge AI and Vision Innovation Forum.


“Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?” Expert Panel at the May 2025 Embedded Vision Summit. Other panelists include Chen Wu, Director and Head of Perception at Waymo, Vikas Bhardwaj, Director of AI in the Reality Labs at Meta, Vaibhav Ghadiok,


“The Future of Visual AI: Efficient Multimodal Intelligence,” a Keynote Presentation from Trevor Darrell

Trevor Darrell, Professor at the University of California, Berkeley, presents the “Future of Visual AI: Efficient Multimodal Intelligence” keynote at the May 2025 Embedded Vision Summit. AI is on the cusp of a revolution, driven by the convergence of several breakthroughs. One of the most significant of these advances is the development of large language


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411