
NVIDIA

NVIDIA EGX Edge AI Platform Brings Real-Time AI to Manufacturing, Retail, Telco, Healthcare and Other Industries

Ecosystem Expands with EGX A100 and EGX Jetson Xavier NX, Supported by AI-Optimized, Cloud-Native, Secure Software to Power New Wave of 5G and Robotics Applications

SANTA CLARA, Calif., May 14, 2020 (GLOBE NEWSWIRE) — NVIDIA today announced two powerful products for its EGX Edge AI platform — the EGX A100 for larger commercial off-the-shelf servers […]


NVIDIA Releases Jetson Xavier NX Developer Kit with Cloud-Native Support

Cloud-Native Support Comes to Entire Jetson Platform Lineup, Making It Easier to Build, Deploy and Manage AI at the Edge

Thursday, May 14, 2020—GTC 2020—NVIDIA today announced availability of the NVIDIA® Jetson Xavier™ NX developer kit with cloud-native support — and the extension of this support to the entire NVIDIA Jetson™ edge computing lineup for…


Training with Custom Pretrained Models Using the NVIDIA Transfer Learning Toolkit

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Supervised training of deep neural networks is now a common method of creating AI applications. To achieve accurate AI for your application, you generally need a very large dataset, especially if you create…
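The Transfer Learning Toolkit itself is driven by spec files and its own command-line interface, so the snippet below is only a rough sketch of the underlying idea (starting from a pretrained backbone and fine-tuning it on a smaller labeled dataset), written in plain PyTorch rather than TLT. The model choice, dataset path, and hyperparameters are placeholders, not values from the article.

```python
# Illustrative transfer-learning sketch in plain PyTorch (not the TLT workflow itself).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # placeholder: number of classes in your smaller target dataset

# Start from ImageNet-pretrained weights instead of training from scratch.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classifier head

# Freeze the backbone so only the new head is trained at first.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)  # placeholder path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```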


Speeding Up Deep Learning Inference Using TensorRT

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. This is an updated version of How to Speed Up Deep Learning Inference Using TensorRT. This version starts from a PyTorch model instead of the ONNX model, upgrades the sample application to use TensorRT 7, and replaces the…
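As a rough illustration of that workflow (not the article's actual sample application), the sketch below exports a PyTorch model to ONNX and then builds an engine with the TensorRT 7 era Python API; the model, input shape, and file names are placeholders.

```python
# Sketch: PyTorch -> ONNX -> TensorRT 7 engine. Model and shapes are placeholders.
import torch
import torchvision
import tensorrt as trt

# 1) Export a PyTorch model to ONNX.
model = torchvision.models.resnet50(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet50.onnx", opset_version=11,
                  input_names=["input"], output_names=["output"])

# 2) Parse the ONNX file and build a TensorRT engine (TensorRT 7-era Python API).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("resnet50.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30          # 1 GiB workspace for tactic selection
engine = builder.build_engine(network, config)
```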


NVIDIA VRSS, a Zero-Effort Way to Improve Your VR Image Quality

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The Virtual Reality (VR) industry is in the midst of a new hardware cycle – higher resolution headsets and better optics being the key focus points for the device manufacturers. Similarly, on the software front, there has been…


Accelerating WinML and NVIDIA Tensor Cores

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Every year, clever researchers introduce ever more complex and interesting deep learning models to the world. There is of course a big difference between a model that works as a nice demo in isolation and a model that…


Speeding Up Deep Learning Inference Using TensorFlow, ONNX, and TensorRT

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Starting with TensorRT 7.0, the Universal Framework Format (UFF) is being deprecated. In this post, you learn how to deploy TensorFlow-trained deep learning models using the new TensorFlow-ONNX-TensorRT workflow. Figure 1 shows the high-level workflow of TensorRT.
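A minimal sketch of the first step of that workflow, assuming the tf2onnx package is used to convert a Keras model to ONNX; the model and opset below are illustrative, and the resulting file would then be parsed by TensorRT's ONNX parser or trtexec, much like the TensorRT sketch earlier on this page.

```python
# Illustrative sketch, not the article's exact sample: convert a TensorFlow/Keras
# model to ONNX with tf2onnx, replacing the deprecated UFF path.
import tensorflow as tf
import tf2onnx

model = tf.keras.applications.ResNet50(weights="imagenet")   # placeholder model
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)

# Writes model.onnx; the file can then be handed to TensorRT's ONNX parser or trtexec.
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13, output_path="model.onnx")
```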


Learning to Rank with XGBoost and GPU

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. XGBoost is a widely used machine learning library, which uses gradient boosting techniques to incrementally build a better model during the training phase by combining multiple weak models. Weak models are generated by computing the gradient descent using…
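As a small, self-contained sketch of the idea (toy data and parameter values of my own choosing, not the article's), learning-to-rank in XGBoost groups documents by query and trains with a ranking objective, while the GPU histogram tree method moves the tree construction onto the GPU.

```python
# Hedged sketch: learning-to-rank with XGBoost on the GPU. Data and parameters are illustrative.
import numpy as np
import xgboost as xgb

# Toy query-grouped data: 2 queries with 3 and 4 candidate documents.
X = np.random.rand(7, 10)
y = np.array([2, 1, 0, 3, 2, 1, 0])      # graded relevance labels
group = [3, 4]                            # documents per query, in order

dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group(group)                   # tell XGBoost where each query's documents end

params = {
    "objective": "rank:pairwise",         # pairwise learning-to-rank objective
    "tree_method": "gpu_hist",            # GPU-accelerated histogram tree building
    "eval_metric": "ndcg",
    "eta": 0.1,
    "max_depth": 6,
}
bst = xgb.train(params, dtrain, num_boost_round=100)
scores = bst.predict(dtrain)              # per-document ranking scores
```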


Laser Focused: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. A deep neural network takes a two-stage approach to address lidar processing challenges. Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how…


Building a Real-time Redaction App Using NVIDIA DeepStream, Part 2: Deployment

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. This post is the second in a series (Part 1) that addresses the challenges of training an accurate deep learning model using a large public dataset and deploying the model on the edge for real-time inference using NVIDIA…


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
