Webinar Explores Building Scalable Edge ML Solutions Using NXP Processors and Amazon Services

Now available for on-demand viewing is the archive recording of the webinar “Building Scalable Edge ML Solutions with NXP i.MX Applications Processors and AWS Cloud Services,” co-presented by Ali Osman Örs, Director of AI ML Strategy and Technologies for Edge Processing at NXP Semiconductors, and Jack Ogawa, responsible for Strategic Semiconductor Partnerships for IoT at Amazon Web Services. From the event page:

Machine Learning (ML) technology is fast becoming ubiquitous as the heart of differentiated IoT devices, often defining the smart capabilities delivered. ML model monitoring, maintenance and updatability are essential to ensure that these smart capabilities continue to deliver the value promised throughout the lifecycle of the device, and require an MLOps strategy tailored for IoT scale.

In this session, NXP and AWS explore how to build and deploy ML solutions to many edge devices at scale and securely support MLOps for maintaining models through their lifecycle. The audience will learn how to address common MLOps challenges in the context of an IoT application running across multiple embedded devices.

What You Will Learn:

  • Building ML solutions with AWS cloud services such as Amazon SageMaker and AWS IoT Greengrass
  • Integrating cloud services with edge inference solutions from NXP
  • Deploying to an edge board based on the i.MX 8M Plus applications processor
  • Managing a multi-device fleet with Amazon SageMaker Edge Manager
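As a rough illustration of the fleet-management step above, the request payloads below mirror the AWS APIs the webinar names (SageMaker Edge Manager device fleets and IoT Greengrass deployments). This is a minimal sketch, not the presenters' actual workflow: the fleet name, IAM role ARN, S3 bucket, thing group, and component name are all hypothetical placeholders.

```python
# Hypothetical payload for sagemaker.create_device_fleet(...): groups
# i.MX 8M Plus boards into a fleet whose inference telemetry lands in S3.
device_fleet_request = {
    "DeviceFleetName": "imx8mplus-vision-fleet",                 # placeholder
    "RoleArn": "arn:aws:iam::123456789012:role/EdgeFleetRole",   # placeholder
    "OutputConfig": {
        "S3OutputLocation": "s3://example-bucket/edge-manager/"  # placeholder
    },
}

# Hypothetical payload for sagemaker.register_devices(...): enrolls
# individual boards into the fleet, linked to their IoT thing names.
register_devices_request = {
    "DeviceFleetName": device_fleet_request["DeviceFleetName"],
    "Devices": [
        {"DeviceName": f"imx8mp-{i:03d}", "IotThingName": f"imx8mp-{i:03d}"}
        for i in range(3)
    ],
}

# Hypothetical payload for greengrassv2.create_deployment(...): pushes an
# inference component (e.g. a packaged model) to every device in a thing group.
deployment_request = {
    "targetArn": "arn:aws:iot:us-west-2:123456789012:thinggroup/imx8mp-fleet",
    "deploymentName": "vision-model-rollout",
    "components": {
        "com.example.VisionInference": {"componentVersion": "1.0.0"}
    },
}

# With AWS credentials configured, these would be sent via boto3, e.g.:
#   import boto3
#   sm = boto3.client("sagemaker")
#   sm.create_device_fleet(**device_fleet_request)
#   sm.register_devices(**register_devices_request)
#   boto3.client("greengrassv2").create_deployment(**deployment_request)
```

The key design point the session covers is that the same deployment targets a thing group rather than individual devices, which is what lets a model update roll out across the whole fleet in one operation.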

For more information and to view the recording, please see the event page.



