Software for Embedded Vision

Visual Intelligence: Foundation Models + Satellite Analytics for Deforestation (Part 1)
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Satellite imagery has revolutionized how we monitor Earth’s forests, offering unprecedented insights into deforestation patterns. In this two-part series, we explore both traditional and cutting-edge approaches to forest monitoring, using Bulgaria’s Central Balkan National Park as our

OpenMV Demonstration of Its New N6 and AE3 Low Power Python Programmable AI Cameras and Other Products
Kwabena Agyeman, President and Co-founder of OpenMV, demonstrates the company’s latest edge AI and vision technologies and products at the March 2025 Edge AI and Vision Alliance Forum. Specifically, Agyeman demonstrates the company’s new N6 and AE3 low power Python programmable AI cameras, along with the FLIR BOSON thermal camera and Prophesee GENX320 event camera.

RGo Robotics Implements Vision-based Perception Engine on Qualcomm SoCs for Robotics Market
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Mobile robotics developers equip their machines to behave autonomously in the real world by generating facility maps, localizing within them and understanding the geometry of their surroundings. Machines like autonomous mobile robots (AMR), automated guided vehicles (AGV)

Key Insights from Data Centre World 2025: Sustainability and AI
Scope 2 power-based emissions and Scope 3 supply chain emissions make the biggest contributions to a data center’s carbon footprint. IDTechEx’s Sustainability for Data Centers report explores which technologies can reduce these emissions. Major data center players converged in London, UK, in the middle of March for the 2025 iteration of Data Centre World. Co-located

Explaining Tokens — The Language and Currency of AI
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Under the hood of every AI application are algorithms that churn through data in their own language, one based on a vocabulary of tokens. Tokens are tiny units of data that come from breaking down bigger chunks
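To make the idea concrete, here is a minimal tokenization sketch. It assumes the Hugging Face transformers library and the GPT-2 tokenizer, neither of which the article prescribes; any tokenizer would illustrate the same point.

```python
# Minimal tokenization sketch (illustrative; assumes the Hugging Face
# "transformers" package and the GPT-2 tokenizer, not the article's tooling).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Tokens are tiny units of data."
pieces = tokenizer.tokenize(text)   # sub-word pieces the model sees
ids = tokenizer.encode(text)        # integer IDs fed to the network

print(pieces)                       # e.g. ['Tok', 'ens', ' are', ...]
print(ids)
print(tokenizer.decode(ids))        # decoding round-trips back to the text
```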

The Silent Threat to AI Initiatives
This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. The single most common reason why AI projects fail is not technical. Having spent almost 15 years in the AI & data services space, I can confidently say that the primary cause of failure for AI

L2+ ADAS Outpaces L3 in Europe, US$4B by 2042
14 ADAS Features Deployed in the EU. Privately owned Level 3 autonomous vehicles encountered significant regulatory setbacks in 2017, when Audi attempted to pioneer the market with the L3-ready A8. Regulatory uncertainty quickly stalled these ambitions, delaying the introduction of true L3 autonomy. By 2021, a clearer regulatory framework emerged under UNECE guidelines, affecting Europe and

Exploring the COCO Dataset
This article was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. The COCO dataset is a cornerstone of modern object detection, shaping models used in self-driving cars, robotics, and beyond. But what happens when we take a closer look? By examining annotations, embeddings, and dataset patterns at a granular
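As a hedged sketch of what granular inspection can look like, the snippet below loads COCO instance annotations with the pycocotools library and counts objects per category. The library choice and file path are assumptions for illustration; the article uses 3LC’s own tooling.

```python
# Illustrative sketch: counting COCO instances per category with pycocotools.
# Assumes instances_val2017.json has been downloaded locally; this is not
# the workflow described in the article, just one way to poke at the data.
from collections import Counter
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")

counts = Counter()
for ann in coco.loadAnns(coco.getAnnIds()):
    category = coco.loadCats(ann["category_id"])[0]["name"]
    counts[category] += 1

for name, n in counts.most_common(10):
    print(f"{name:15s} {n}")
```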

Video Understanding: Qwen2-VL, An Expert Vision-language Model
This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Qwen2-VL, an advanced vision language model built on Qwen2 [1], sets new benchmarks in image comprehension across varied resolutions and aspect ratios, while also tackling extended video content. Though Qwen2-VL excels on many fronts, this article explores the model’s
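For readers who want to try the model themselves, a minimal sketch using the Hugging Face transformers integration of Qwen2-VL follows. The checkpoint name, input image, and prompt are illustrative assumptions and are not taken from the article.

```python
# Illustrative sketch: captioning a single video frame with Qwen2-VL via
# Hugging Face transformers (>= 4.45). Checkpoint and file names are assumed.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-2B-Instruct"  # assumed checkpoint for illustration
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("frame.jpg")  # e.g. a frame sampled from a video
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe what is happening in this frame."},
]}]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```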

Powering IoT Developers with Edge AI: the Qualcomm RB3 Gen 2 Kit is Now Supported in the Edge Impulse Platform
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The Qualcomm RB3 Gen 2 Development Kit has been designed to help you develop high-performance IoT and edge AI applications. With powerful AI acceleration, pre-validated peripherals, and extensive software support, this kit enables every engineer to move

Intel Accelerates AI at the Edge Through an Open Ecosystem
Intel empowers partners to seamlessly integrate AI into existing infrastructure with its new Intel AI Edge Systems, Edge AI Suites and Open Edge Platform software. What’s New: Intel is unveiling its new Intel® AI Edge Systems, Edge AI Suites and Open Edge Platform initiatives. These offerings help streamline and speed up AI adoption at the edge

NVIDIA Announces Isaac GR00T N1 — the World’s First Open Humanoid Robot Foundation Model — and Simulation Frameworks to Speed Robot Development
Now Available, Fully Customizable Foundation Model Brings Generalized Skills and Reasoning to Humanoid Robots
NVIDIA, Google DeepMind and Disney Research Collaborate to Develop Next-Generation Open-Source Newton Physics Engine
New Omniverse Blueprint for Synthetic Data Generation and Open-Source Dataset Jumpstart Physical AI Data Flywheel
March 18, 2025—GTC—NVIDIA today announced a portfolio of technologies to supercharge humanoid

NVIDIA Announces Major Release of Cosmos World Foundation Models and Physical AI Data Tools
New Models Enable Prediction, Controllable World Generation and Reasoning for Physical AI
Two New Blueprints Deliver Massive Physical AI Synthetic Data Generation for Robot and Autonomous Vehicle Post-Training
1X, Agility Robotics, Figure AI, Skild AI Among Early Adopters
March 18, 2025—GTC—NVIDIA today announced a major release of new NVIDIA Cosmos™ world foundation models (WFMs), introducing

NVIDIA Unveils Open Physical AI Dataset to Advance Robotics and Autonomous Vehicle Development
Expected to become the world’s largest such dataset, the initial release of standardized synthetic data is now available to robotics developers as open source. Teaching autonomous robots and vehicles how to interact with the physical world requires vast amounts of high-quality data. To give researchers and developers a head start, NVIDIA is releasing a massive,

Build Real-time Multimodal XR Apps with NVIDIA AI Blueprint for Video Search and Summarization
This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. With recent advancements in generative AI and vision foundation models, vision language models (VLMs) represent a new wave of visual computing in which models are capable of highly sophisticated perception and deep contextual understanding. These intelligent solutions offer a promising

OpenMV Unveils the N6 and AE3: High-performance, Low-power AI Vision Cameras for Makers and Professionals
March 17, 2025, San Francisco, CA – OpenMV is excited to announce the launch of the OpenMV N6 and OpenMV AE3, two groundbreaking machine vision cameras designed to bring real-time AI capabilities to microcontrollers. Backed by years of expertise in embedded vision, OpenMV is making high-performance AI vision accessible to developers, researchers, and hobbyists alike
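For context on what “Python programmable” means in practice, OpenMV cameras run MicroPython scripts on-device. The frame-capture sketch below follows the style of OpenMV’s standard examples; whether the exact same sensor API carries over unchanged to the N6 and AE3 is an assumption, so check the boards’ documentation.

```python
# MicroPython sketch in the style of OpenMV's standard "hello world" example.
# Assumes the familiar sensor/time modules are available on the N6 and AE3.
import sensor
import time

sensor.reset()                       # initialize the camera sensor
sensor.set_pixformat(sensor.RGB565)  # 16-bit color
sensor.set_framesize(sensor.QVGA)    # 320x240 frames
sensor.skip_frames(time=2000)        # let the sensor settle after configuration

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()          # capture a frame for on-device processing
    print(clock.fps())               # report achieved frames per second
```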

Navigating the AI Implementation Journey: Buy or Build?
This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Many companies waste millions of dollars and critical time-to-market because they make the wrong decision on a seemingly simple question: should you buy an off-the-shelf AI solution or build your own? If you’re an AI project owner

Optimizing Edge AI for Effective Real-time Decision Making in Robotics
This blog post was originally published at Geisel Software’s website. It is reprinted here with the permission of Geisel Software. Key takeaways:
Instant Decisions, Real-World Impact: Edge AI empowers robots to react in milliseconds, enabling life-saving actions in critical scenarios like autonomous vehicle collision avoidance and rapid search-and-rescue missions.
Unshakeable Reliability, Unbreachable