
“A New AI Platform Architecture for the Smart Toys of the Future,” a Presentation from Xperi

Gabriel Costache, Senior R&D Director at Xperi, presents the “New AI Platform Architecture for the Smart Toys of the Future” tutorial at the May 2022 Embedded Vision Summit.

From a parent’s perspective, toys should be safe, private, entertaining and educational, with the ability to adapt and grow with the child. For natural interaction, a toy must see, hear, feel and speak in a human-like manner. Thanks to AI, we can now deliver near-human accuracy on computer vision, speech recognition, speech synthesis and other human interaction tasks. However, these technologies demand very high computational performance, making them difficult to implement at the edge with today’s typical hardware.

Cloud computing is not attractive for toys, due to privacy risks and the importance of low latency for human-like interaction. Xperi has developed a dedicated platform capable of executing multiple AI-based tasks in parallel at the edge with very low power consumption and a small footprint, enabling toys to incorporate sophisticated AI-based perception and communication. In this talk, Costache introduces this platform, which includes all of the hardware components required for next-generation toys.

See here for a PDF of the slides.
