
What Changed From VCRs to Software-defined Vision?

The Edge AI and Vision Alliance’s Jeff Bier discusses how advances in embedded systems’ basic building blocks are fundamentally changing their nature.

The Ojo-Yoshida Report is launching the second of our Dig Deeper video podcast series, this one focusing on “Embedded Basics.”

Our guest for the inaugural episode is Jeff Bier, founder of the Edge AI and Vision Alliance.

Watch our conversation as Bier walks us through examples of “sensor-intensive AI-enabled embedded systems.”

At the Ojo-Yoshida Report, we’ve watched traditional “embedded systems” morph, over the last decade, into IoT. Lately, the buzz is all about “edge AI” devices.

In this series, we’ll be pulling back the lens for a broader view of the evolution of the “embedded system” as we seek to understand what remains constant, what has changed, and what “useful” applications are coming down the pike.

From VCRs to software-defined vision, Bier defines the “embedded” in embedded systems. Is an embedded system one with a computer inside but a user interface that depends on neither keyboard nor mouse? Is it a single-task machine? Where and how does AI meet embedded?

Ticking off the key building blocks of embedded systems today, Bier explains how these pieces are altering the very nature of those systems.

Junko Yoshida
Editor in Chief, The Ojo-Yoshida Report


This article was published by The Ojo-Yoshida Report. For more in-depth analysis, register today and get a free two-month all-access subscription.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
