Summit 2022

Edge Impulse Demonstration of Screw and Washer Detection with FOMO

Jenny Plunkett, Senior Developer Relations Engineer at Edge Impulse, demonstrates the company’s latest edge AI and vision technologies and products at the 2022 Embedded Vision Summit. Specifically, Plunkett showcases Edge Impulse’s FOMO algorithm. In this demo, she runs a FOMO model on the Himax WE-I Plus board for real-time screw and washer detection along a […]
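The demo pages do not include code, but for readers curious what FOMO inference typically looks like on a microcontroller target (this applies equally to the face detection demo below), here is a minimal sketch using the Edge Impulse C++ SDK as exported from Edge Impulse Studio. The `capture_camera_frame` helper, the confidence threshold, and the main loop are hypothetical placeholders, not the code used in the demo, and field names such as `bounding_boxes_count` may vary across SDK versions.

```cpp
// Minimal FOMO inference sketch with the Edge Impulse C++ SDK (exported from
// Edge Impulse Studio). The camera-capture helper is a hypothetical
// placeholder; the real demo uses the board's own camera driver.
#include <string.h>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One frame, already resized/converted to the model's expected input size.
static float frame_buffer[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

// Hypothetical board-specific capture routine (implementation not shown).
extern bool capture_camera_frame(float *dst, size_t len);

// Callback that feeds slices of the frame buffer to the classifier.
static int get_frame_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, frame_buffer + offset, length * sizeof(float));
    return 0;
}

int main() {
    while (true) {
        if (!capture_camera_frame(frame_buffer,
                                  EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT)) {
            continue;
        }

        signal_t signal;
        signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
        signal.get_data = &get_frame_data;

        ei_impulse_result_t result = { 0 };
        if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
            continue;
        }

        // FOMO reports detections as object centroids with per-class confidence.
        for (size_t i = 0; i < result.bounding_boxes_count; i++) {
            auto &bb = result.bounding_boxes[i];
            if (bb.value < 0.5f) continue;  // hypothetical confidence threshold
            ei_printf("%s (%.2f) at x=%u y=%u\n", bb.label, bb.value, bb.x, bb.y);
        }
    }
}
```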

Edge Impulse Demonstration of Face Detection with FOMO and Alif Semiconductor’s Ensemble MCU

Shawn Hymel, Developer Relations Engineer at Edge Impulse, demonstrates the company’s latest edge AI and vision technologies and products at the 2022 Embedded Vision Summit. Specifically, Hymel shows how to use Edge Impulse’s ground-breaking FOMO algorithm for real-time face detection. The demo runs live inference on an Alif Semiconductor Ensemble board, which combines an Arm […]

“Unifying Computer Vision and Natural Language Understanding for Autonomous Systems,” a Presentation from Verizon

Mumtaz Vauhkonen, Lead Distinguished Scientist and Head of Computer Vision for Cognitive AI in AI&D at Verizon, presents the “Unifying Computer Vision and Natural Language Understanding for Autonomous Systems” tutorial at the May 2022 Embedded Vision Summit. As the applications of autonomous systems expand, many such systems need the ability to perceive using both vision […]

“Compound CNNs for Improved Classification Accuracy,” a Presentation from Southern Illinois University Carbondale

Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Compound CNNs for Improved Classification Accuracy” tutorial at the May 2022 Embedded Vision Summit. In this talk, Tragoudas presents a novel approach to improving the accuracy of convolutional neural networks (CNNs) used for classification. The approach utilizes the confusion matrix of the […]
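The excerpt only says that the approach utilizes the classifier’s confusion matrix, so the sketch below is a generic illustration rather than the presenter’s method: it mines a confusion matrix for the most frequently confused class pairs, the kind of signal a compound or hierarchical CNN could exploit with a dedicated second-stage classifier.

```cpp
// Generic sketch (not the method from the talk): rank class pairs by how
// often a classifier confuses them, using the confusion matrix.
#include <cstdio>
#include <vector>
#include <tuple>
#include <algorithm>

int main() {
    // Toy 4-class confusion matrix: rows = true class, columns = predicted class.
    std::vector<std::vector<int>> cm = {
        {95,  3,  1,  1},
        { 4, 90,  5,  1},
        { 1,  6, 91,  2},
        { 0,  1,  2, 97},
    };

    // Symmetric off-diagonal mass: how often classes i and j are mistaken
    // for each other, in either direction.
    std::vector<std::tuple<int, int, int>> confused;  // (count, i, j)
    for (size_t i = 0; i < cm.size(); i++) {
        for (size_t j = i + 1; j < cm.size(); j++) {
            confused.emplace_back(cm[i][j] + cm[j][i], (int)i, (int)j);
        }
    }
    std::sort(confused.rbegin(), confused.rend());  // descending by count

    // The top pairs are candidates for grouping under a second-stage classifier.
    for (auto &[count, i, j] : confused) {
        std::printf("classes %d and %d confused %d times\n", i, j, count);
    }
}
```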

“Strategies and Methods for Sensor Fusion,” a Presentation from Sensor Cortek

Robert Laganiere, CEO of Sensor Cortek, presents the “Strategies and Methods for Sensor Fusion” tutorial at the May 2022 Embedded Vision Summit. Highly autonomous machines require advanced perception capabilities and are generally equipped with three main sensor types: cameras, lidar and radar. The intrinsic limitations of each sensor affect the performance of the perception […]
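As a small, generic illustration of why fusing complementary sensors helps (not necessarily one of the strategies covered in the talk), the sketch below fuses two noisy range estimates of the same object, say one from a camera pipeline and one from radar, by inverse-variance weighting; the numeric values are hypothetical.

```cpp
// Generic illustration of a simple late-fusion strategy: combine two noisy
// range estimates by inverse-variance weighting.
#include <cstdio>

struct Measurement {
    double value;     // estimated range to the object, meters
    double variance;  // sensor noise variance, meters^2
};

// The less noisy sensor gets more weight, and the fused variance is always
// no larger than the smallest input variance.
Measurement fuse(const Measurement &a, const Measurement &b) {
    double wa = 1.0 / a.variance;
    double wb = 1.0 / b.variance;
    return { (wa * a.value + wb * b.value) / (wa + wb), 1.0 / (wa + wb) };
}

int main() {
    Measurement camera = { 24.8, 4.0 };   // camera depth: relatively noisy
    Measurement radar  = { 26.1, 0.25 };  // radar range: precise
    Measurement fused  = fuse(camera, radar);
    std::printf("fused range: %.2f m (variance %.3f)\n", fused.value, fused.variance);
}
```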

“Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments,” a Presentation from Observa

Erik Chelstad, CTO and Co-founder of Observa, presents the “Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments” tutorial at the May 2022 Embedded Vision Summit. In many computer vision applications, a key challenge is maintaining accuracy when the real world is changing. In this presentation, Chelstad explores techniques for designing hardware and […]

“A Cost-Effective Approach to Modeling Object Interactions on the Edge,” a Presentation from Nemo @ Ridecell

Arun Kumar, Perception Engineer at Nemo @ Ridecell, presents the “Cost-Effective Approach to Modeling Object Interactions on the Edge” tutorial at the May 2022 Embedded Vision Summit. Determining bird’s eye view (BEV) object positions and tracks, and modeling the interactions among objects, is vital for many applications, including understanding human interactions for security and road […]
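One common building block for obtaining BEV positions from a single camera is to project the bottom-center “footpoint” of an image-space bounding box onto the ground plane with a calibrated homography. The sketch below illustrates that step only; it is not the presenter’s pipeline, and the homography values are hypothetical placeholders from an offline calibration.

```cpp
// Illustrative sketch: map an image-space footpoint (u, v) to bird's-eye-view
// (BEV) ground coordinates using a precomputed ground-plane homography H.
#include <cstdio>

struct Point2D { double x, y; };

// 3x3 homography mapping image pixels (u, v, 1) to ground-plane meters (X, Y, w).
// Hypothetical values from an offline camera-to-ground calibration.
static const double H[3][3] = {
    { 0.012, 0.000, -3.84 },
    { 0.000, 0.031, -7.20 },
    { 0.000, 0.0021, 1.00 },
};

Point2D image_to_bev(double u, double v) {
    double X = H[0][0] * u + H[0][1] * v + H[0][2];
    double Y = H[1][0] * u + H[1][1] * v + H[1][2];
    double w = H[2][0] * u + H[2][1] * v + H[2][2];
    return { X / w, Y / w };  // homogeneous normalization
}

int main() {
    // Bottom-center of a detected object's bounding box, in pixels.
    double u = 640.0, v = 540.0;
    Point2D bev = image_to_bev(u, v);
    std::printf("BEV position: %.2f m, %.2f m\n", bev.x, bev.y);
}
```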

“COVID-19 Safe Distancing Measures in Public Spaces with Edge AI,” a Presentation from the Government Technology Agency of Singapore

Ebi Jose, Senior Systems Engineer at GovTech, the Government Technology Agency of Singapore, presents the “COVID-19 Safe Distancing Measures in Public Spaces with Edge AI” tutorial at the May 2022 Embedded Vision Summit. Whether in indoor environments, such as supermarkets, museums and offices, or outdoor environments, such as parks, maintaining safe social distancing has been […]
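The core distance check in such systems is conceptually simple: once detected people are placed in ground-plane coordinates, flag any pair closer than a safety threshold. The sketch below shows only that step; it is not GovTech’s deployed system, and the positions and 1 m threshold are hypothetical.

```cpp
// Illustrative sketch: flag pairs of detected people whose ground-plane
// positions are closer than a safe-distance threshold.
#include <cstdio>
#include <cmath>
#include <vector>

struct Person { double x, y; };  // position in meters, e.g. from a BEV projection

int main() {
    const double SAFE_DISTANCE_M = 1.0;  // hypothetical threshold
    std::vector<Person> people = { {0.2, 1.5}, {0.9, 1.8}, {4.0, 3.2} };

    for (size_t i = 0; i < people.size(); i++) {
        for (size_t j = i + 1; j < people.size(); j++) {
            double dist = std::hypot(people[i].x - people[j].x,
                                     people[i].y - people[j].y);
            if (dist < SAFE_DISTANCE_M) {
                std::printf("persons %zu and %zu are %.2f m apart (< %.1f m)\n",
                            i, j, dist, SAFE_DISTANCE_M);
            }
        }
    }
}
```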

“Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding,” a Presentation from Google

Zizhao Zhang, Staff Research Software Engineer and Tech Lead for Cloud AI Research at Google, presents the “Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding” tutorial at the May 2022 Embedded Vision Summit. In computer vision, hierarchical structures are popular in vision transformers (ViT). In this talk, Zhang presents a novel idea of […]

“Responsible AI and ModelOps in Industry: Practical Challenges and Lessons Learned,” a Presentation from Fiddler AI

Krishnaram Kenthapadi, Chief Scientist at Fiddler AI, presents the “Responsible AI and ModelOps in Industry: Practical Challenges and Lessons Learned” tutorial at the May 2022 Embedded Vision Summit. How do we develop machine learning models and systems taking fairness, explainability and privacy into account? How do we operationalize models in production, and address their governance […]
