“Visual Search: Fine-grained Recognition with Embedding Models for the Edge,” a Presentation from Gimlet Labs

Omid Azizi, Co-Founder of Gimlet Labs, presents the “Visual Search: Fine-grained Recognition with Embedding Models for the Edge” tutorial at the May 2025 Embedded Vision Summit.

In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of various types, from people to license plates. While these models are impressive, in real-world applications we often need to differentiate between a large number of custom items. For example, in addition to knowing that there is a car, you may want to know the exact make and model of that car. For these sorts of tasks, what you really want is a visual search that can identify an object from a catalog without requiring a new model to be trained when categories are added.
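The idea above — identifying an item from a catalog without retraining — can be sketched as a nearest-neighbor lookup over embeddings. The following is a minimal illustration (not the presenter's implementation) using random vectors as stand-in embeddings and cosine similarity; the labels and dimensions are hypothetical.

```python
import numpy as np

def normalize(v):
    """L2-normalize along the last axis so dot products are cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical catalog: one embedding per known item (e.g. a car make/model).
# In practice these would come from an embedding model, not a random generator.
rng = np.random.default_rng(0)
catalog = normalize(rng.normal(size=(3, 128)))
labels = ["sedan_a", "suv_b", "truck_c"]

def search(query_embedding, catalog, labels):
    """Return the catalog label whose embedding is most similar to the query."""
    sims = catalog @ normalize(query_embedding)
    best = int(np.argmax(sims))
    return labels[best], float(sims[best])

# Adding a new category requires no retraining: just append its embedding.
new_item = normalize(rng.normal(size=128))
catalog = np.vstack([catalog, new_item[None, :]])
labels.append("coupe_d")

# A slightly perturbed view of the new item should still match it.
query = new_item + 0.01 * rng.normal(size=128)
label, score = search(query, catalog, labels)
```

The key design property is that the model is frozen; only the catalog of embeddings grows, which is what makes the approach attractive on edge devices.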

In this talk, Azizi describes how embedding models can be used to perform visual search in such applications. He explains how to use and fine-tune these models, including tips on how to train an embedding model so that new objects can be added without retraining it.
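One common way to train an embedding model with this property — not necessarily the method presented in the talk — is metric learning with a triplet margin loss, which pulls views of the same item together and pushes different items apart. A toy NumPy sketch with made-up 2-D "embeddings":

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss.

    Encourages the positive (same item as the anchor) to be closer to the
    anchor than the negative (a different item) by at least `margin`.
    """
    def dist(a, b):
        return np.linalg.norm(a - b)
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

a = np.array([1.0, 0.0])   # anchor view of an item
p = np.array([0.9, 0.1])   # another view of the same item
n = np.array([0.0, 1.0])   # a different item
loss_good = triplet_loss(a, p, n)  # well-separated pair: loss is zero
loss_bad = triplet_loss(a, n, p)   # roles swapped: large loss
```

Because the loss depends only on relative distances rather than fixed class outputs, a model trained this way can embed items it never saw during training, which is what lets new catalog entries be added without retraining.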

See here for a PDF of the slides.

