Multimodal

“Enabling Ego Vision Applications on Smart Eyewear Devices,” a Presentation from EssilorLuxottica

Francesca Palermo, Research Principal Investigator at EssilorLuxottica, presents the “Enabling Ego Vision Applications on Smart Eyewear Devices” tutorial at the May 2025 Embedded Vision Summit. Ego vision technology is revolutionizing the capabilities of smart eyewear, enabling applications that understand user actions, estimate human pose and provide spatial awareness through simultaneous…


LLiMa: SiMa.ai’s Automated Code Generation Framework for LLMs and VLMs for <10W

This blog post was originally published at SiMa.ai’s website. It is reprinted here with the permission of SiMa.ai. In our blog post titled “Implementing Multimodal GenAI Models on Modalix”, we describe how SiMa.ai’s MLSoC Modalix enables Generative AI models to be implemented for Physical AI applications with low latency and low power consumption. We implemented…


“Improving Worksite Safety with AI-powered Perception,” a Presentation from Arcure

Sabri Bayoudh, Chief Innovation Officer at Arcure, presents the “Improving Worksite Safety with AI-powered Perception” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Bayoudh explores how embedded vision is being used in industrial applications, including vehicle safety and production. He highlights some of the challenging requirements of…


“Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Edge AI and Vision at Scale: What’s Real, What’s Next, What’s Missing?” Expert Panel at the May 2025 Embedded Vision Summit. Other panelists include Chen Wu, Director and Head of Perception at Waymo, Vikas Bhardwaj, Director of AI in the Reality…


“A View From the 2025 Embedded Vision Summit (Part 2),” a Presentation from the Edge AI and Vision Alliance

Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2025 Embedded Vision Summit on May 22, 2025. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the…


“A View From the 2025 Embedded Vision Summit (Part 1),” a Presentation from the Edge AI and Vision Alliance

Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2025 Embedded Vision Summit on May 21, 2025. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the…


NVIDIA Blackwell-powered Jetson Thor Now Available, Accelerating the Age of General Robotics

News Summary: NVIDIA Jetson AGX Thor developer kit and production modules, robotics computers designed for physical AI and robotics, are now generally available. Over 2 million developers are using NVIDIA’s robotics stack, with Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic and Meta among early Jetson Thor adopters. Jetson Thor, powered by NVIDIA…


“The Future of Visual AI: Efficient Multimodal Intelligence,” a Keynote Presentation from Trevor Darrell

Trevor Darrell, Professor at the University of California, Berkeley, presents the “Future of Visual AI: Efficient Multimodal Intelligence” tutorial at the May 2025 Embedded Vision Summit. AI is on the cusp of a revolution, driven by the convergence of several breakthroughs. One of the most significant of these advances is…


Maximize Robotics Performance by Post-training NVIDIA Cosmos Reason

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. First unveiled at NVIDIA GTC 2025, NVIDIA Cosmos Reason is an open and fully customizable reasoning vision language model (VLM) for physical AI and robotics. The VLM enables robots and vision AI agents to reason using prior…


Implementing Multimodal GenAI Models on Modalix

This blog post was originally published at SiMa.ai’s website. It is reprinted here with the permission of SiMa.ai. It has been our goal since starting SiMa.ai to create one software and hardware platform for the embedded edge that empowers companies to make their AI/ML innovations come to life. With the rise of Generative AI already…


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone

+1 (925) 954-1411