
Tools

Accelerating WinML and NVIDIA Tensor Cores

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Every year, clever researchers introduce ever more complex and interesting deep learning models to the world. There is of course a big difference between a model that works as a nice demo in isolation and a model that […]

Accelerating WinML and NVIDIA Tensor Cores Read More +

Speeding Up Deep Learning Inference Using TensorFlow, ONNX, and TensorRT

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Starting with TensorRT 7.0, the Universal Framework Format (UFF) is being deprecated. In this post, you learn how to deploy TensorFlow-trained deep learning models using the new TensorFlow-ONNX-TensorRT workflow. Figure 1 shows the high-level workflow of TensorRT.

Speeding Up Deep Learning Inference Using TensorFlow, ONNX, and TensorRT Read More +

“OpenCV: Past, Present and Future,” a Presentation from OpenCV.org

Gary Bradski, the President and CEO of OpenCV.org, delivers the presentation “OpenCV: Past, Present and Future” at the Edge AI and Vision Alliance’s March 2020 Vision Industry and Technology Forum. Bradski shares the latest developments in the OpenCV open source library for computer vision and deep learning applications, as well as where OpenCV is heading.

“OpenCV: Past, Present and Future,” a Presentation from OpenCV.org Read More +

Maximize CPU Inference Performance with Improved Threads and Memory Management in Intel Distribution of OpenVINO Toolkit

This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel. The popularity of convolutional neural network (CNN) models and the ubiquity of CPUs mean that better inference performance can deliver significant gains to more users than ever before. As multi-core processors become the norm, […]

Maximize CPU Inference Performance with Improved Threads and Memory Management in Intel Distribution of OpenVINO Toolkit Read More +
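The OpenVINO excerpt above concerns extracting more CPU inference throughput through better thread management. As a generic, library-agnostic sketch (this is not the OpenVINO API), the following uses Python's standard thread pool to process independent inference requests in parallel; `infer` is a hypothetical stand-in for a model's forward pass:

```python
from concurrent.futures import ThreadPoolExecutor

def infer(request):
    # hypothetical stand-in for a model forward pass on one input
    return sum(i * i for i in range(request))

def run_batch(requests, num_threads=4):
    # Process independent inference requests on a pool of worker threads.
    # Real runtimes partition physical cores among parallel requests in a
    # broadly similar way (e.g. OpenVINO's "streams"); map() preserves
    # the order of the input requests in its results.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(infer, requests))
```

For CPU-bound work in CPython the thread pool mainly helps when the per-request workload releases the GIL (as native inference kernels do); the structure, not the speedup, is the point of the sketch.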

CEVA Announces DSP and Voice Neural Networks Integration with TensorFlow Lite for Microcontrollers

WhisPro™ speech recognition software for voice wake words and custom command models is now available with open source TensorFlow Lite for Microcontrollers, implementing machine learning at the edge. TensorFlow Lite for Microcontrollers from Google is already optimized and available for CEVA-BX DSP cores, accelerating the use of low power AI in conversational and contextual awareness applications.

CEVA Announces DSP and Voice Neural Networks Integration with TensorFlow Lite for Microcontrollers Read More +

What Is Object Detection?

This article was originally published at MathWorks’ website. It is reprinted here with the permission of MathWorks. 3 Things You Need to Know: Object detection is a computer vision technique for locating instances of objects in images or videos. Object detection algorithms typically leverage machine learning or deep learning to produce meaningful results. When humans […]

What Is Object Detection? Read More +
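The excerpt above defines object detection as locating object instances in images. Detections are commonly scored against ground truth by intersection-over-union (IoU) of bounding boxes. As a small self-contained illustration (not taken from the article), here is a plain-Python IoU computation for axis-aligned boxes given as (x_min, y_min, x_max, y_max):

```python
def iou(box_a, box_b):
    # boxes are (x_min, y_min, x_max, y_max) tuples
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.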

Learning to Rank with XGBoost and GPU

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. XGBoost is a widely used machine learning library that uses gradient boosting techniques to incrementally build a better model during training by combining multiple weak models. Weak models are generated by computing the gradient descent using […]

Learning to Rank with XGBoost and GPU Read More +
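The excerpt above describes gradient boosting: a strong model built by sequentially adding weak learners fit to the gradient of the loss. As an illustration of the idea only (not XGBoost's actual implementation, which adds regularization, second-order terms, and far more efficient tree building), here is a minimal boosting loop with decision stumps fit to squared-error residuals; all names are invented for this sketch:

```python
def fit_stump(x, residuals):
    # exhaustively search the 1-D split that minimizes squared error
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi < threshold]
        right = [r for xi, r in zip(x, residuals) if xi >= threshold]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, threshold, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi < t else rm

def boost(x, y, rounds=50, lr=0.3):
    # for squared error, the negative gradient is simply the residual,
    # so each round fits a stump to what the ensemble still gets wrong
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)
```

The learning rate shrinks each stump's contribution so that later rounds can correct earlier ones, which is the same role it plays in full-scale libraries.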

Simplifying Cloud to Edge AI Deployments with the Intel Distribution of OpenVINO Toolkit, Microsoft Azure, and ONNX Runtime

This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel. “Our life is frittered away by detail. Simplify, simplify, simplify.” (Henry David Thoreau) Significant technological innovations usually follow a well-established pattern: a small group of brilliant minds stumbles upon some incredible innovation, which is then quickly adopted […]

Simplifying Cloud to Edge AI Deployments with the Intel Distribution of OpenVINO Toolkit, Microsoft Azure, and ONNX Runtime Read More +

Streamline Your Intel Distribution of OpenVINO Toolkit Development with Deep Learning Workbench

This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel. Back in 2018, Intel launched the Intel® Distribution of OpenVINO™ toolkit. Since then, it’s been widely adopted by partners and developers to deploy AI-powered applications in various industries, from self-checkout kiosks to medical imaging to industrial robotics.

Streamline Your Intel Distribution of OpenVINO Toolkit Development with Deep Learning Workbench Read More +

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
+1 (925) 954-1411