Development Tools for Embedded Vision
ENCOMPASSING MOST OF THE STANDARD ARSENAL USED FOR DEVELOPING REAL-TIME EMBEDDED PROCESSOR SYSTEMS
The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, with the addition of specialized vision libraries and, in some cases, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
Both general-purpose and vendor-specific tools
Many vendors of vision devices use integrated CPUs based on the same instruction sets (ARM, x86, etc.), allowing a common set of software development tools to be used. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals with unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended programming model requires customized versions of standard development tools. Most CPU vendors develop their own optimized software toolchain, while also working with third-party software tool suppliers to ensure that the CPU components are broadly supported, as the sketch below illustrates.
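As a minimal, hedged sketch of why the same source code still depends on target-specific toolchain support, the example below (assuming an ARM target with NEON; the add_rows helper is hypothetical) adds two grayscale image rows using an architecture-specific SIMD path with a portable scalar fallback. Building the accelerated path requires a compiler and toolchain that know about that instruction-set extension.

```cpp
// Sketch only: architecture-specific acceleration behind one portable function.
// Assumes an ARM target with NEON; otherwise the scalar fallback is compiled.
#include <cstddef>
#include <cstdint>
#if defined(__ARM_NEON)
#include <arm_neon.h>
#endif

// Add two 8-bit grayscale rows with saturation (hypothetical helper for illustration).
void add_rows(const uint8_t* a, const uint8_t* b, uint8_t* dst, size_t n) {
#if defined(__ARM_NEON)
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        uint8x16_t va = vld1q_u8(a + i);
        uint8x16_t vb = vld1q_u8(b + i);
        vst1q_u8(dst + i, vqaddq_u8(va, vb));   // saturating add, 16 pixels per iteration
    }
    for (; i < n; ++i) {                        // scalar tail for leftover pixels
        unsigned s = unsigned(a[i]) + unsigned(b[i]);
        dst[i] = s > 255 ? 255 : uint8_t(s);
    }
#else
    for (size_t i = 0; i < n; ++i) {            // portable fallback path
        unsigned s = unsigned(a[i]) + unsigned(b[i]);
        dst[i] = s > 255 ? 255 : uint8_t(s);
    }
#endif
}
```

The #if defined(__ARM_NEON) guard keeps a single code base while letting the vendor's toolchain select the accelerated path for its particular target.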
Heterogeneous software development in an integrated development environment
Since vision applications often require a mix of processing architectures, the development tools become more complicated: they must handle multiple instruction sets and cope with additional system-level debugging challenges. Most vendors provide a suite of tools that integrates these development tasks into a single interface, simplifying software development and testing.
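As a hedged illustration of how a common library can hide the accelerator details from the developer, the sketch below uses OpenCV's transparent OpenCL path via cv::UMat (assuming an OpenCV build with OpenCL support; the file names are placeholders). When an OpenCL-capable device such as a GPU or DSP is present, the runtime dispatches the calls to it; otherwise the same calls run on the CPU.

```cpp
// Minimal sketch of heterogeneous dispatch behind a single API, assuming an
// OpenCV 3.x+ build with OpenCL support; "input.png"/"edges.png" are placeholders.
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    // Report whether an OpenCL-capable accelerator (GPU, DSP, etc.) was detected.
    std::cout << "OpenCL available: " << std::boolalpha
              << cv::ocl::haveOpenCL() << std::endl;

    cv::Mat frame = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) return 1;   // placeholder image not found

    cv::UMat src, edges;
    frame.copyTo(src);   // copy into a UMat so the calls below can be offloaded

    // The same source runs on the CPU or is dispatched to the accelerator,
    // depending on what the runtime finds; no per-target code changes needed.
    cv::GaussianBlur(src, edges, cv::Size(5, 5), 1.5);
    cv::Canny(edges, edges, 50, 150);

    cv::imwrite("edges.png", edges);
    return 0;
}
```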