
This blog post was originally published at Edge Impulse’s website. It is reprinted here with the permission of Edge Impulse.

We can’t wait to see everyone in Santa Clara at the 2022 Embedded Vision Summit. The four-day event, running May 16th–19th, brings together the biggest names in computer vision and visual AI for talks, workshops, exhibitions, and more.

Edge Impulse will be highly active at the show, with a keynote from CEO Zach Shelby (Tuesday, 10:40am PT), a run-through by CTO Jan Jongboom of FOMO, our new ultra-light object detection algorithm (Wednesday, 10:50am PT), and a “deep dive” session that runs from 9am to noon on Thursday. (More about that below.)

We’ve also got a fantastic booth (seriously, make sure to stop by to see it — space #407) and will be talking shop on the show floor throughout the event. We’ll be showing tech demos ranging from low-res people counting to smart wildlife cameras. Various partners will be joining us in the booth to demo their products and show how enterprises can use them with the Edge Impulse platform to further their own capabilities. Throughout the event, we’ll be hosting representatives from:

  • Sony
  • Renesas
  • Synaptics
  • Alif Semiconductor 
  • BrainChip
  • SiLabs
  • Himax

If you haven’t grabbed a pass yet, get one now. And then sign up for a meeting time to connect with our team on site. We’re excited to show you how you can use Edge Impulse with your next product.

Deep dive workshop details:

Workshop #1 – FOMO: Real-Time Object Detection on Low-Power Microcontrollers
(~ 60 Minutes)

Edge Impulse FOMO (Faster Objects, More Objects) is a novel machine learning algorithm that brings object detection to highly constrained devices. It lets you count objects, find the location of objects in an image, and track multiple objects in real time using up to 30x less processing power and memory than MobileNet SSD or YOLOv5. In this exercise, attendees will learn how to collect a high-quality object detection dataset, then train and deploy a FOMO model to a microcontroller such as the Arduino Portenta H7 with Vision Shield.
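As a rough illustration of the idea only (not Edge Impulse’s actual implementation or output format), you can think of a FOMO-style model as producing a coarse grid of per-cell object probabilities; counting and locating objects then reduces to thresholding that grid:

```python
# Illustrative sketch: mimics the idea of a coarse "centroid grid" of
# per-cell object probabilities. FOMO's real output format and
# post-processing live in the Edge Impulse SDK.

def find_objects(grid, threshold=0.5):
    """Return (row, col, score) for each grid cell whose object
    probability meets or exceeds the threshold."""
    detections = []
    for r, row in enumerate(grid):
        for c, score in enumerate(row):
            if score >= threshold:
                detections.append((r, c, score))
    return detections

# A hypothetical 4x4 probability grid from a single-class model.
grid = [
    [0.02, 0.10, 0.05, 0.01],
    [0.04, 0.91, 0.08, 0.02],
    [0.03, 0.07, 0.88, 0.06],
    [0.01, 0.02, 0.05, 0.03],
]

objects = find_objects(grid)
print(len(objects))                      # -> 2 objects counted
print([(r, c) for r, c, _ in objects])   # -> [(1, 1), (2, 2)]
```

Each above-threshold cell gives both a count and a location, which is why this style of model is so much lighter than bounding-box detectors.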

Workshop #2 – Pose Classification: Multi-Stage Inference in an Embedded Device
(~ 60 Minutes)

Construct a multi-stage machine learning pipeline that captures image data, uses the TensorFlow pose estimation model to identify joint locations on a human body, and classifies poses using those features. To accomplish this project, we will wrap the pose estimation model in a custom Edge Impulse block so that both the pose estimation and classification models can be easily deployed to an embedded device after training.
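A minimal sketch of the two-stage idea, with hypothetical stand-in functions rather than the actual workshop code or the TensorFlow model:

```python
# Sketch of a two-stage inference pipeline: stage 1 estimates body
# keypoints from an image, stage 2 classifies the pose from those
# keypoints. Both stages are stand-ins; in the workshop, stage 1 is a
# TensorFlow pose estimation model wrapped in a custom Edge Impulse block.

def estimate_pose(image):
    """Stage 1 stand-in: return (x, y) keypoints for a few joints.
    A real model would infer these from the image pixels."""
    return {"left_wrist": (0.2, 0.1),
            "right_wrist": (0.8, 0.1),
            "head": (0.5, 0.2)}

def classify_pose(keypoints):
    """Stage 2 stand-in: a trivial rule on joint positions.
    (Image y grows downward, so smaller y means higher in the frame.)"""
    head_y = keypoints["head"][1]
    if (keypoints["left_wrist"][1] < head_y and
            keypoints["right_wrist"][1] < head_y):
        return "hands_up"
    return "other"

def pipeline(image):
    """Chain the two stages, as the embedded pipeline would."""
    return classify_pose(estimate_pose(image))

print(pipeline(image=None))  # -> hands_up with the stand-in keypoints
```

The point of the structure is that the second model never sees raw pixels, only the compact joint features, which keeps the classifier tiny enough for an embedded target.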

Mike Senese
Director of Content Marketing, Edge Impulse

