Development Tools for Embedded Vision
ENCOMPASSING MOST OF THE STANDARD ARSENAL USED FOR DEVELOPING REAL-TIME EMBEDDED PROCESSOR SYSTEMS
The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, with the addition of specialized vision libraries and, possibly, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
Both general-purpose and vendor-specific tools
Many vendors of vision devices use integrated CPUs based on the same instruction sets (ARM, x86, etc.), allowing a common set of software development tools. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals that have unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended programming model requires a customized version of the standard development tools. Most CPU vendors develop their own optimized software tool chain, while also working with third-party software tool suppliers to ensure that the CPU components are broadly supported.
Heterogeneous software development in an integrated development environment
Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.