Accelerate Autonomous Vehicle Development with the NVIDIA DRIVE AGX Thor Developer Kit

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

Autonomous vehicle (AV) technology is rapidly evolving, fueled by ever-larger and more complex AI models deployed at the edge. Modern vehicles now require not only advanced perception and sensor fusion, but also end-to-end deep learning pipelines that enable comprehensive environment understanding, multimodal fusion, and real-time decision making—all processed entirely onboard.

With automotive use cases expanding to include vision-language-action models (VLAs), large language models (LLMs), generative AI for personalized AI assistance, and multimodal experiences, the need for powerful compute acceleration has never been greater. This shift creates an urgent need for platforms that make it easier for developers and OEMs to develop, optimize, and deploy these resource-intensive applications directly within the vehicle.

Today, we’re excited to announce the general availability of the NVIDIA DRIVE AGX Thor Developer Kit, powered by NVIDIA Blackwell GPUs, next-generation Arm CPUs, and the advanced NVIDIA DriveOS 7 software stack. NVIDIA DRIVE AGX Thor has been developed for ISO 26262 ASIL-D functional safety and ISO 21434 security standards. The production-ready developer toolchain empowers automotive OEMs, Tier-1 software providers, and researchers to implement cutting-edge AI at the edge. This advancement revolutionizes autonomous driving and the in-vehicle experience, thereby accelerating the journey toward intelligent mobility.

Video 1. Learn about the NVIDIA DRIVE AGX Thor Developer Kit accessories, connectivity options, and more

How does NVIDIA DRIVE AGX Thor enable the next era of automotive edge intelligence?

DRIVE AGX Thor is a significant leap forward in the NVIDIA lineage of in-vehicle edge computing platforms. As the flagship SoC of the DRIVE family, Thor builds on the successes of Orin and prior-generation solutions, scaling up system performance, efficiency, and feature flexibility to an entirely new level.

DRIVE AGX Thor expands what’s computationally possible in an on-vehicle edge device, enabling automakers to stay ahead of future demands.

| Feature | NVIDIA DRIVE AGX Thor SoC | NVIDIA DRIVE AGX Orin SoC |
|---|---|---|
| GPU | Blackwell GPU with generative AI engine: FP32/FP16/FP8/FP4 support; up to 1,000 INT8 TFLOPS; 2,000 FP4 TFLOPS | CUDA Tensor Core GPU + Deep Learning Accelerator: up to 254 INT8 TFLOPS |
| CPU | 14x Arm Neoverse V3AE; 2.3x higher SPECrate 2017_int_base (est.) versus Orin | 12x Arm Cortex-A78AE (Hercules); SPECrate 2017_int_base (est.) 1x (baseline) |
| Memory bandwidth | 273 GB/s LPDDR5X | 205 GB/s LPDDR5 |
| Safety | Supports up to ISO 26262 ASIL-D use cases | Supports up to ISO 26262 ASIL-D use cases |

Table 1. DRIVE AGX Thor improves system performance, efficiency, and feature flexibility compared to DRIVE AGX Orin

How is the NVIDIA DRIVE AGX Thor Developer Kit purpose-built for developers?

The DRIVE AGX Thor Developer Kit provides a reliable platform to rapidly deploy, test, and refine applications before moving to production-ready automotive compute platforms. It is compatible with a comprehensive set of ecosystem-supported sensors and the next-generation NVIDIA DRIVE Hyperion sensor architecture. It is equipped with a broad range of automotive-standard interfaces and enables seamless prototyping and validation across diverse automotive use cases.

The DRIVE AGX Thor Developer Kit is now available for pre-order, with delivery in September 2025. It’s available in different SKUs, offering tailored support for bench development workflows and vehicle-level integration requirements:

  • DRIVE AGX Thor Developer Kit SKU 10 for bench development
  • DRIVE AGX Thor Developer Kit SKU 12 for in-vehicle development

Figure 1. The NVIDIA DRIVE AGX Thor Developer Kit is available in different SKUs

| Aspect | Details |
|---|---|
| Platform | Powered by NVIDIA Thor SoC |
| System RAM | 64 GB LPDDR5X @ 4266 MHz |
| Storage | 256 GB UFS |
| Ethernet | 4x 100MbE, 16x 1GbE, 6x 10GbE (all H-MTD connectors) |
| Camera | 16x GMSL2 cameras, 2x GMSL3 cameras (Quad Fakra connectors) |
| USB | 1x USB 3.2 (U1, data), 1x USB 3.2 (U2, flashing), 1x USB 2.0 (U3, data), 1x USB 2.0 (debug); all USB-C |
| Display | 5x GMSL3 links (Quad Fakra), 1x DisplayPort (up to 4K @ 60 Hz) |
| PCIe | 1x PCIe Gen5 x4 or 2x PCIe Gen5 x2 (MiniSAS HD) |
| TDP | 350 W |

Table 2. Technical features of the DRIVE AGX Thor Developer Kit

The DRIVE AGX Thor Developer Kit combines high-performance computing with an automotive-compliant development environment, which accelerates innovation, reduces risk, and ensures a smooth transition from early-stage development to vehicle-level deployment.

Powered by NVIDIA DriveOS 7

NVIDIA DriveOS provides a complete software foundation for AVs, combining high-performance AI computing, real-time processing, and safety standard compliance in a single platform. The key enhancements in NVIDIA DriveOS 7 are detailed in this section and in Figure 2.

Figure 2. NVIDIA DriveOS is the foundation for high-performance, safety-critical automotive applications

Integrated with NVIDIA TensorRT 10

With dynamic kernel generation and fusion, improved ModelOpt quantization support (including INT4 AWQ), and Blackwell native NVFP4 acceleration, TensorRT 10 enhances both performance and memory efficiency for deep learning tasks. Expanded V3 plugin support, faster build times, strong typing of networks, and robust debugging tools add versatility for even the most complex AI models.
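To make the weight-only quantization idea concrete, here is a minimal, self-contained Python sketch (a toy illustration, not the TensorRT or ModelOpt implementation): weights are mapped to signed 4-bit integers with a single shared scale, then dequantized back to float at inference time.

```python
# Toy INT4 weight-only quantization (illustrative only): store weights as
# 4-bit integers in [-8, 7] plus one float scale, dequantize on the fly.

def quantize_int4(weights):
    """Map floats to signed 4-bit integers with one shared per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0  # 7 = largest positive INT4 value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

weights = [0.12, -0.9, 0.33, 0.7, -0.05]
q, scale = quantize_int4(weights)
restored = dequantize_int4(q, scale)

# Every restored value is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

The memory saving is what matters for memory-bound models: each weight occupies 4 bits instead of 16 or 32, so far less data moves through the memory hierarchy per inference.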

NVIDIA DriveOS LLM SDK Runtime

NVIDIA DriveOS 7 features DriveOS LLM SDK, a pure C++ LLM runtime with minimal dependencies for low latency. Optimized for FP16, FP8, INT4, and FP4 quantization, this runtime provides advanced features like speculative decoding, KV caching, LoRA-based model customization, dynamic batching for throughput, and ready-to-use support for popular LLMs and VLMs. The LLM SDK Runtime is provided as source code along with samples showcasing model inference and SDK features. For more details, see Streamline LLM Deployment for Autonomous Vehicle Applications with NVIDIA DriveOS LLM SDK.
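As a rough illustration of why KV caching matters (a toy Python sketch, not the DriveOS LLM SDK's C++ internals), compare how many key/value projections are performed with and without a cache during autoregressive decoding:

```python
# Toy sketch of KV caching. Without a cache, every decode step recomputes
# key/value projections for all past tokens; with a cache, each step
# projects only the newest token.

def project(token):
    """Stand-in for the per-token key/value projection (normally a matmul)."""
    project.calls += 1
    return token * 2.0, token * 3.0
project.calls = 0

def decode(tokens, use_cache):
    cache = []
    for step in range(1, len(tokens) + 1):
        if use_cache:
            cache.append(project(tokens[step - 1]))      # project new token only
        else:
            cache = [project(t) for t in tokens[:step]]  # recompute everything
    return cache

tokens = [0.1, 0.2, 0.3, 0.4]
project.calls = 0
decode(tokens, use_cache=False)
no_cache_calls = project.calls   # 1 + 2 + 3 + 4 = 10 projections

project.calls = 0
decode(tokens, use_cache=True)
cache_calls = project.calls      # 4 projections, one per token

assert (no_cache_calls, cache_calls) == (10, 4)
```

Without caching the projection count grows quadratically with sequence length; with caching it grows linearly, which is the difference between feasible and infeasible latency for long prompts at the edge.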

Comprehensive security and safety

NVIDIA DriveOS 7 targets the highest levels of functional safety and security: ISO 26262 ASIL-D and ISO 21434 CAL 4.

CUDA 13 capabilities

New CUDA 13 features—including context pause/resume, batched memory copy (memcpy), and hardware-aware Tensor maps—give developers the tools to optimize application performance.

NVIDIA DriveWorks

NVIDIA DriveWorks is now deeply integrated with NVIDIA DriveOS, validated by real-world AV program deployments, and optimized for the Thor SoC architecture. This ensures a robust suite of perception, sensor fusion, and vehicle-systems SDKs for production vehicles.

Upgraded development ecosystem

DriveOS 7 includes updated versions of Ubuntu, Docker, GCC, C++, the Linux kernel, and Yocto, delivering faster performance, stronger security, expanded hardware compatibility, and a more streamlined, modern development experience.

Integration with NVIDIA TensorRT 10

NVIDIA TensorRT 10, a cornerstone of NVIDIA DriveOS 7, significantly accelerates AV development by delivering enhanced AI performance. Table 3 shows comprehensive foundational upgrades that enhance developer productivity.

| Category | Feature | Benefit |
|---|---|---|
| Quantization and performance | INT4 weight-only quantization | Enables faster inference and lower power consumption, especially on memory-bound models |
| | Block-wise scaling | Reduces information loss in low-precision formats like INT8 and FP8 by preserving dynamic range locally within each block |
| | NVFP4 precision for Blackwell GPUs | Allows larger models to run efficiently with minimal accuracy degradation, particularly in memory- and bandwidth-constrained scenarios |
| | Tiling optimization | Allows configuring the optimization level (from "none" to "full") and setting an L2 cache usage limit to trade build time for runtime speed |
| Improved developer experience | V3 plugins | Allows faster engine builds and reduced deployment time when developers reuse plugin layers across different sessions or hardware targets |
| | Debug tensors | Identifies specific intermediate tensors during model execution so their values can be inspected for easier debugging |
| NVIDIA TensorRT Model Optimizer | Quantization, sparsity, distillation | Comprehensive library of post-training and training-in-the-loop model optimizations for TensorRT 10 and above |

Table 3. TensorRT 10 improvements for automotive applications
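To illustrate the block-wise scaling entry in Table 3 (a toy Python sketch, not TensorRT's actual algorithm), compare the round-trip error for a block of small weights when it must share one per-tensor scale with an outlier block, versus when it gets its own per-block scale:

```python
# Toy per-tensor vs. block-wise INT8 scaling. A single outlier block forces
# a coarse shared scale; a dedicated per-block scale preserves the dynamic
# range of the small-valued block.

def quant_error(values, scale):
    """Round-trip through signed INT8 with the given scale; return max error."""
    return max(abs(v - max(-128, min(127, round(v / scale))) * scale)
               for v in values)

small_block = [0.01, -0.02, 0.015, 0.005]
outlier_block = [10.0, -9.5, 8.0, 7.5]

per_tensor_scale = 10.0 / 127   # one scale sized for the outlier block
per_block_scale = 0.02 / 127    # dedicated scale for the small block

err_per_tensor = quant_error(small_block, per_tensor_scale)
err_per_block = quant_error(small_block, per_block_scale)

# The shared scale flushes every small weight to zero; the per-block scale
# represents them with fine granularity.
assert err_per_block < err_per_tensor
```

This is the same intuition behind block-scaled formats generally: keeping the scale local to each block means one large value cannot destroy the precision of everything around it.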

NVIDIA DriveOS LLM SDK Runtime

One of the core innovations enabled by the NVIDIA DRIVE Thor platform and NVIDIA DriveOS is its ability to run powerful LLMs and VLAs efficiently at the edge, directly within the vehicle. This is made possible through the DriveOS LLM SDK, a lightweight, high-performance C++ toolkit designed to showcase the capabilities of TensorRT for deploying LLMs and VLMs on automotive platforms.

DriveOS LLM SDK provides a flexible toolkit for bringing the power of LLMs right into the vehicle, enabling rich conversational AI, multimodal inference, driver and occupant monitoring, and much more.

Which LLMs are supported by the DriveOS LLM SDK Runtime?

The DriveOS LLM SDK Runtime natively supports several Llama and Qwen models, with more model support planned for future releases (Table 4).

| Model | FP16 | INT4 | FP8 | NVFP4 |
|---|---|---|---|---|
| Llama 3 8B Instruct | Yes | Yes | Yes | Yes |
| Llama 3.1 8B | Yes | Yes | Yes | Yes |
| Llama 3.2 3B | Yes | Yes | Yes | Yes |
| Qwen2.5-7B-Instruct | Yes | Yes | Yes | Yes |
| Qwen2-7B-Instruct | Yes | Yes | Yes | Yes |
| Qwen2.5-0.5B | Yes | Yes | Yes | Yes |

Table 4. Supported LLM models through DriveOS LLM SDK Runtime

Which VLMs are supported by the DriveOS LLM SDK Runtime?
The DriveOS LLM SDK Runtime natively supports Qwen VLMs for edge deployment, with more model support planned for future releases (Table 5).

| Model | FP16 | INT4 | FP8 | NVFP4 |
|---|---|---|---|---|
| Qwen2-VL-2B-Instruct | Yes | Yes | Yes | Yes |
| Qwen2-VL-7B-Instruct | Yes | Yes | Yes | Yes |

Table 5. Supported VLM models through DriveOS LLM SDK Runtime

With future releases, more model options and pipeline tools will become available for seamless automotive integration.

NVIDIA DriveOS Linux profiles

NVIDIA DriveOS Linux with Safety Extensions is a reference software platform based on Ubuntu 24.04 LTS. Intended for prototyping and development, it contains images with debug capabilities, enabling developers to build production-ready AV applications while meeting the highest automotive functional safety standards. DriveOS Linux supports three profiles: development, production, and test (Table 6).

| Aspect | Development profile | Safety extensions production profile | Safety extensions test profile |
|---|---|---|---|
| Intended use | Active development | Production deployment | Testing/validation (preproduction) |
| Safety extensions | Not included | Enabled by default | Enabled by default |
| Debug capabilities | Available | Disabled | Disabled (UART restored) |
| Logging and profiling | Available | Disabled | Disabled |
| Console access | Enabled | Disabled | UART access restored |
| SSH/NFS access | Enabled | Disabled | SSH restored |
| Security hardening | Minimal | Full hardening | Full hardening |
| Boot KPI optimizations | None | Enabled | Enabled |
| Production recommendations | Docs required | Ready out of the box | For test only, not final deployment |

Table 6. DriveOS Linux profiles include development, production, and test

Get started with the NVIDIA DRIVE AGX Thor Developer Kit

Ready to accelerate your AV development with the NVIDIA DRIVE AGX Thor Developer Kit? To get started, register for an NVIDIA Developer account and NVIDIA DRIVE AGX SDK Developer Program membership. You can also download NVIDIA DriveOS 7.0.3 Linux, the latest purpose-built operating system for AVs.

The DRIVE AGX Thor Developer Kit is now available for pre-order.

Streamline your development environment and accelerate setup with NGC DriveOS Docker containers. Ask questions and join the community in the DRIVE AGX Thor Developer Forum.

Abhinaw Priyadershi
Staff Technical Program Manager for DriveOS, NVIDIA

Ahmed Attia
Senior Developer Relations Manager, NVIDIA
