Development Tools for Embedded Vision
ENCOMPASSING MOST OF THE STANDARD ARSENAL USED FOR DEVELOPING REAL-TIME EMBEDDED PROCESSOR SYSTEMS
The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, adding specialized vision libraries and, in some cases, vendor-specific development tools. On the hardware side, requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
Both general-purpose and vendor-specific tools
Many vendors of vision devices use integrated CPUs based on the same instruction set (ARM, x86, etc.), allowing a common set of software development tools. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals that have unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires customized versions of standard development tools. Most CPU vendors develop their own optimized software tool chain, while also working with third-party software tool suppliers to make sure that the CPU components are broadly supported.
Heterogeneous software development in an integrated development environment
Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.
This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel. Back in 2018, Intel launched the Intel® Distribution of OpenVINO™ toolkit. Since then, it’s been widely adopted by partners and developers to deploy AI-powered applications in various industries, from self-checkout kiosks to medical imaging to industrial robotics.
This blog post was originally published at Codeplay Software’s website. It is reprinted here with the permission of Codeplay Software. Codeplay has been a part of the SYCL™ community from the beginning, and our team has worked with peers from some of the largest semiconductor vendors, including Intel and Xilinx, for the past five years.
Joseph Spisak, Product Manager at Facebook, delivers the presentation “PyTorch Deep Learning Framework: Status and Directions” at the Embedded Vision Alliance’s December 2019 Vision Industry and Technology Forum. Spisak gives an update on the PyTorch deep learning framework and where it’s heading.
“Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product,” a Presentation from Cocoon Health
Pavan Kumar, Co-founder and CTO of Cocoon Health (formerly Cocoon Cam), delivers the presentation “Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Kumar explains how his company is evolving its use of edge and cloud vision computing in continuing to bring new…
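One axis of the edge/cloud tradeoff the talk addresses is uplink bandwidth: streaming video to the cloud for analysis costs far more than sending edge-computed events. The sketch below makes that comparison concrete; all parameter values (resolution, compression ratio, event rate and size) are illustrative assumptions, not figures from Cocoon Health.

```python
# Back-of-the-envelope comparison of uplink bandwidth for two designs:
# (a) cloud-side vision: stream compressed video up to the cloud;
# (b) edge-side vision: run CV locally, send only small event messages.
# All numbers are illustrative assumptions.

def video_uplink_mbps(width, height, fps, bits_per_pixel=12, compression_ratio=50):
    """Approximate compressed video bitrate in megabits per second."""
    raw_bps = width * height * fps * bits_per_pixel
    return raw_bps / compression_ratio / 1e6

def event_uplink_mbps(events_per_second, bytes_per_event):
    """Bitrate of sending only edge-computed event messages."""
    return events_per_second * bytes_per_event * 8 / 1e6

# Assumed scenario: 720p30 camera vs. ~2 events/s of 200 bytes each.
stream_mbps = video_uplink_mbps(1280, 720, 30)
events_mbps = event_uplink_mbps(2, 200)
```

With these assumed numbers the video stream needs on the order of megabits per second of sustained uplink, while the event channel needs only kilobits per second — a gap of roughly three orders of magnitude, which is one reason products often migrate vision processing toward the edge as they scale.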
Shruti Agarwal, Ph.D. Candidate at U.C. Berkeley, delivers the presentation “Creating, Weaponizing, and Detecting Deep Fakes” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Agarwal explains how to use computer vision to detect “deepfakes.”
Raghuraman Krishnamoorthi, Software Engineer at Facebook, delivers the presentation “Quantizing Deep Networks for Efficient Inference at the Edge” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Krishnamoorthi gives an overview of practical deep neural network quantization techniques and tools.
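A core building block behind the quantization techniques surveyed in such talks is affine (asymmetric) quantization, which maps floating-point tensor values onto 8-bit integers via a scale and zero-point. The sketch below shows the arithmetic in plain Python; the function names and sample weights are illustrative, not taken from the presentation.

```python
# Minimal sketch of post-training affine quantization to uint8.
# Real values v map to integers q via  q = round(v / scale) + zero_point,
# clamped to [0, 255]; dequantization inverts the mapping approximately.

def quantize_params(values, num_bits=8):
    """Derive scale and zero-point mapping [min, max] onto [0, 2^bits - 1]."""
    qmax = 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    lo = min(lo, 0.0)  # ensure 0.0 is exactly representable (needed for padding)
    hi = max(hi, 0.0)
    scale = (hi - lo) / qmax or 1.0
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, num_bits=8):
    qmax = 2 ** num_bits - 1
    return [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

# Illustrative weight values for a single small tensor.
weights = [-0.62, -0.1, 0.0, 0.33, 0.71]
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)
restored = dequantize(q, scale, zp)
```

The round trip reconstructs each value to within about half a scale step, which is the quantization error that techniques like quantization-aware training are designed to compensate for.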
“Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors,” a Presentation from IHS Markit
Tom Hackenberg, Principal Analyst at IHS Markit, presents the “Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors” tutorial at the May 2019 Embedded Vision Summit. Artificial intelligence is not a new concept. Machine learning has been used for decades in large server and high performance computing environments. Why…
Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “How to Choose a 3D Vision Sensor” tutorial at the May 2019 Embedded Vision Summit. Designers of autonomous vehicles, robots and many other systems are faced with a critical challenge: Which 3D vision sensor technology to use? There are a wide variety of sensors…