“Unifying Computer Vision and Natural Language Understanding for Autonomous Systems,” a Presentation from Verizon

Mumtaz Vauhkonen, Lead Distinguished Scientist and Head of Computer Vision for Cognitive AI in AI&D at Verizon, presents the “Unifying Computer Vision and Natural Language Understanding for Autonomous Systems” tutorial at the May 2022 Embedded Vision Summit. As the applications of autonomous systems expand, many such systems need the ability…

“Compound CNNs for Improved Classification Accuracy,” a Presentation from Southern Illinois University Carbondale

Spyros Tragoudas, Professor and School Director of Southern Illinois University Carbondale, presents the “Compound CNNs for Improved Classification Accuracy” tutorial at the May 2022 Embedded Vision Summit. In this talk, Tragoudas presents a novel approach to improving the accuracy of convolutional neural networks (CNNs) used for classification. The approach utilizes…

“Strategies and Methods for Sensor Fusion,” a Presentation from Sensor Cortek

Robert Laganiere, CEO of Sensor Cortek, presents the “Strategies and Methods for Sensor Fusion” tutorial at the May 2022 Embedded Vision Summit. Highly autonomous machines require advanced perception capabilities. Autonomous machines are generally equipped with three main sensor types: cameras, lidar and radar. The intrinsic limitations of each sensor affect…
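One common fusion strategy the talk's topic suggests is late fusion, where each sensor produces its own estimate and the estimates are then combined. As a minimal illustrative sketch (not taken from the talk), independent position estimates from camera, lidar and radar can be merged by inverse-variance weighting, so the most certain sensor dominates:

```python
# Hedged sketch of late sensor fusion via inverse-variance weighting.
# Each sensor reports a measured position plus a variance (uncertainty).
def fuse(estimates):
    """estimates: list of (measured_position, variance) tuples, one per sensor."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * pos for (pos, _), w in zip(estimates, weights)) / total
    fused_var = 1.0 / total  # fused uncertainty is lower than any single sensor's
    return fused, fused_var

# Example values are made up: camera is noisy (var 4.0), lidar is precise
# (var 0.25), radar sits in between (var 1.0).
position, variance = fuse([(10.4, 4.0), (10.0, 0.25), (10.2, 1.0)])
```

Note how the fused variance (about 0.19) ends up below the best individual sensor's 0.25, which is the usual argument for fusing rather than picking one sensor.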

“Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments,” a Presentation from Observa

Erik Chelstad, CTO and Co-founder of Observa, presents the “Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments” tutorial at the May 2022 Embedded Vision Summit. In many computer vision applications, a key challenge is maintaining accuracy when the real world is changing. In this presentation, Chelstad explores…

“A Cost-Effective Approach to Modeling Object Interactions on the Edge,” a Presentation from Nemo @ Ridecell

Arun Kumar, Perception Engineer at Nemo @ Ridecell, presents the “A Cost-Effective Approach to Modeling Object Interactions on the Edge” tutorial at the May 2022 Embedded Vision Summit. Determining bird’s eye view (BEV) object positions and tracks, and modeling the interactions among objects, is vital for many applications, including understanding human…
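A standard way to obtain the BEV object positions the abstract mentions is to map image-plane detections onto the ground plane with a planar homography. The sketch below is illustrative only (not Ridecell's method), and the homography matrix `H` is made up; in practice it would come from camera calibration:

```python
# Hypothetical sketch: project an image pixel (u, v) onto the ground plane
# with a 3x3 homography H, yielding a bird's-eye-view (BEV) position.
def to_bev(H, u, v):
    """Apply homography H (3x3 nested list) to pixel (u, v); returns (x, y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # metric ground-plane coordinates

# Toy calibration for illustration: 1 pixel maps to 0.05 m, no perspective.
H = [[0.05, 0.0, 0.0],
     [0.0, 0.05, 0.0],
     [0.0, 0.0, 1.0]]
bx, by = to_bev(H, 200, 100)
```

Once detections from successive frames live in this shared metric BEV frame, object tracks and pairwise interactions (distances, closing speeds) become simple geometry.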

“COVID-19 Safe Distancing Measures in Public Spaces with Edge AI,” a Presentation from the Government Technology Agency of Singapore

Ebi Jose, Senior Systems Engineer at GovTech, the Government Technology Agency of Singapore, presents the “COVID-19 Safe Distancing Measures in Public Spaces with Edge AI” tutorial at the May 2022 Embedded Vision Summit. Whether in indoor environments, such as supermarkets, museums and offices, or outdoor environments, such as parks, maintaining…
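The core check in a safe-distancing pipeline like the one this talk describes reduces to pairwise distances between detected people in ground-plane coordinates. A minimal sketch, assuming person centroids have already been projected into metres (the detector and calibration steps are omitted):

```python
import math

# Illustrative sketch (not GovTech's pipeline): flag every pair of people
# standing closer together than a safe-distance threshold, in metres.
def close_pairs(centroids, min_dist=1.0):
    """centroids: list of (x, y) ground-plane positions; returns index pairs."""
    pairs = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if math.dist(centroids[i], centroids[j]) < min_dist:
                pairs.append((i, j))
    return pairs

# Three people; only the first two are within 1 m of each other.
violations = close_pairs([(0.0, 0.0), (0.5, 0.0), (3.0, 3.0)], min_dist=1.0)
```

The O(n²) loop is fine for the handful of people visible per camera; a spatial index would only matter at much larger counts.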

“Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding,” a Presentation from Google

Zizhao Zhang, Staff Research Software Engineer and Tech Lead for Cloud AI Research at Google, presents the “Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding” tutorial at the May 2022 Embedded Vision Summit. In computer vision, hierarchical structures are popular in vision transformers (ViT). In this talk, Zhang…

Edge AI and Vision Insights: September 14, 2022 Edition

EMBEDDED DEEP LEARNING INFERENCE

TensorFlow Lite for Microcontrollers: Recent Developments

TensorFlow Lite Micro (TFLM) is a generic inference framework designed to run TensorFlow models on digital signal processors (DSPs), microcontrollers and other embedded targets with small memory footprints and very low power usage. TFLM aims to be easily portable to various embedded targets, from those …

“Responsible AI and ModelOps in Industry: Practical Challenges and Lessons Learned,” a Presentation from Fiddler AI

Krishnaram Kenthapadi, Chief Scientist at Fiddler AI, presents the “Responsible AI and ModelOps in Industry: Practical Challenges and Lessons Learned” tutorial at the May 2022 Embedded Vision Summit. How do we develop machine learning models and systems taking fairness, explainability and privacy into account? How do we operationalize models in…

“Comparing ML-Based Audio with ML-Based Vision: An Introduction to ML Audio for ML Vision Engineers,” a Presentation from DSP Concepts

Josh Morris, Engineering Manager at DSP Concepts, presents the “Comparing ML-Based Audio with ML-Based Vision: An Introduction to ML Audio for ML Vision Engineers” tutorial at the May 2022 Embedded Vision Summit. As embedded processors become more powerful, our ability to implement complex machine learning solutions at the edge is…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411