Functions
The listing below showcases the most recently published content associated with various AI and visual intelligence functions.
Endeavor Air Expands dentCHECK Use to Enhance the Quality and Efficiency of Dent-mapping Workflows
Endeavor Air has implemented dentCHECK at multiple bases to streamline dent-mapping and reporting workflows. Constance, Germany, and Rancho Cucamonga, California – Aug 22, 2024 – “dentCHECK was the right device to expand our capabilities and advance Endeavor Air’s efforts of integrating more technology in our hangars,” said Bob Olson, Director of Quality and Training, Endeavor
Snapdragon Powers the Future of AI in Smart Glasses. Here’s How
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. A Snapdragon Insider chats with Qualcomm Technologies’ Said Bakadir about the future of smart glasses and Qualcomm Technologies’ role in turning it into a critical AI tool. Artificial intelligence (AI) is increasingly winding its way through our
“An Introduction to Semantic Segmentation,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2024 Embedded Vision Summit. Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different…
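For readers new to the topic, the sketch below illustrates the distinction the talk draws: a semantic segmentation model assigns a class label to every pixel, rather than drawing boxes around detected objects. This is a generic example built on an off-the-shelf torchvision DeepLabV3 model, not code from the Au-Zone presentation, and the input file name is a placeholder.

```python
# Minimal, illustrative semantic segmentation with a pretrained torchvision model.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("scene.jpg").convert("RGB")   # placeholder test image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    output = model(batch)["out"]                 # shape: [1, num_classes, H, W]

# Per-pixel class labels: every pixel gets a category,
# in contrast to an object detector's bounding boxes.
segmentation_map = output.argmax(dim=1).squeeze(0)  # shape: [H, W]
print(segmentation_map.shape, segmentation_map.unique())
```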
“Augmenting Visual AI through Radar and Camera Fusion,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Augmenting Visual AI through Radar and Camera Fusion” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Taylor discusses well-known limitations of camera-based AI and how radar can be leveraged to address these limitations. He…
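As a rough illustration of one common fusion step (not taken from the presentation), the sketch below projects 3D radar detections into the camera image so that radar range and velocity can be associated with camera-based detections. The intrinsic and extrinsic calibration values are placeholder assumptions, and the radar frame is assumed to share the camera's axis convention.

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) -- placeholder values.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Assumed radar-to-camera extrinsics (rotation R, translation t in meters).
R = np.eye(3)
t = np.array([0.0, -0.5, 0.1])

# Radar detections as (x, y, z) points, here using the camera's axis
# convention (x right, y down, z forward) for simplicity.
radar_points = np.array([[-1.5, 0.2, 12.0],
                         [ 2.0, 0.1, 30.0]])

def project_to_image(points_radar):
    """Transform radar-frame points into the camera frame, then apply
    the pinhole model to obtain pixel coordinates."""
    points_cam = points_radar @ R.T + t      # radar frame -> camera frame
    pixels_h = points_cam @ K.T              # homogeneous image coordinates
    return pixels_h[:, :2] / pixels_h[:, 2:3]

for point, (u, v) in zip(radar_points, project_to_image(radar_points)):
    print(f"radar point {point} -> pixel ({u:.1f}, {v:.1f})")
```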
“Introduction to Visual Simultaneous Localization and Mapping (VSLAM),” a Presentation from Cadence
Amol Borkar, Product Marketing Director, and Shrinivas Gadkari, Design Engineering Director, both of Cadence, co-present the “Introduction to Visual Simultaneous Localization and Mapping (VSLAM)” tutorial at the May 2024 Embedded Vision Summit. Simultaneous localization and mapping (SLAM) is widely used in industry and has numerous applications where camera or ego-motion…
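To make the ego-motion idea concrete, here is a minimal VSLAM front-end sketch: it estimates the camera's relative motion between two frames from matched ORB features via the essential matrix. It is an illustrative OpenCV example under assumed intrinsics and placeholder frame file names, not the pipeline presented by Cadence.

```python
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],     # assumed pinhole intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching for binary ORB descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# The essential matrix with RANSAC rejects outlier matches; recoverPose then
# yields the relative rotation R and unit-scale translation t -- the ego-motion
# estimate a SLAM back end would refine and build a map against.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```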
Scalable Public Safety with On-device AI: How Startup FocusAI is Filling Enterprise Security Market Gaps
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Enterprise security is not just big business; it’s about keeping you safe: here’s how engineer-turned-CTO Sudhakaran Ram collaborated with us to do just that. Key Takeaways: On-device AI enables superior enterprise-grade security. Distributed computing cost-efficiently enables actionable
Untether AI Demonstration of Video Analysis Using the runAI Family of Inference Accelerators
Max Sbabo, Senior Application Engineer at Untether AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Sbabo demonstrates his company’s AI inference technology with AI accelerator cards that leverage the capabilities of the runAI family of ICs in a PCI-Express form factor. This demonstration
The Role of AI-driven Embedded Vision Cameras in Self-checkout Loss Prevention
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Self-checkout usage is rapidly growing and redefining retail experiences. This shift has led to retail losses that can only be overcome by AI-based embedded vision. Explore the types of retail shrinkage, how AI helps, and
Inuitive Demonstration of the M4.51 Depth and AI Sensor Module Based on the NU4100 Vision Processor
Shay Harel, Field Application Engineer at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Harel demonstrates the capabilities of his company’s M4.51 sensor module using a simple Python script that leverages Inuitive’s API for real-time object detection. The M4.51 sensor module, based on the
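For readers who want to picture what such a script looks like, the sketch below shows a generic real-time detection loop. It deliberately substitutes OpenCV's DNN module and a webcam for Inuitive's proprietary SDK and the M4.51's camera stream, and the model and config file names are placeholder assumptions, so it illustrates the loop structure rather than Inuitive's actual API.

```python
import cv2
import numpy as np

# Placeholder model files (e.g. an SSD-MobileNet frozen graph and its config);
# substitute whatever detector and SDK your hardware provides.
model = cv2.dnn_DetectionModel("frozen_inference_graph.pb",
                               "ssd_mobilenet_v2_coco.pbtxt")
model.setInputSize(300, 300)
model.setInputScale(1.0 / 127.5)
model.setInputMean((127.5, 127.5, 127.5))
model.setInputSwapRB(True)

cap = cv2.VideoCapture(0)   # default webcam stands in for the module's stream

while True:
    ok, frame = cap.read()
    if not ok:
        break

    class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5)
    if len(class_ids) > 0:
        for class_id, conf, box in zip(np.ravel(class_ids),
                                       np.ravel(confidences), boxes):
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"id {int(class_id)}: {conf:.2f}", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

    cv2.imshow("real-time detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```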
Interactive AI Tool Delivers Immersive Video Content to Blind and Low-vision Viewers
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. New research aims to revolutionize video accessibility for blind or low-vision (BLV) viewers with an AI-powered system that gives users the ability to explore content interactively. The innovative system, detailed in a recent paper, addresses significant gaps