NVIDIA CUDA™ Fermi™ Compatibility Guide

Version 1.0

February 2010

Preface

What Is This Document?

This Fermi Compatibility Guide for CUDA Applications is an application note to help developers ensure that their CUDA applications will run on GPUs based on the Fermi architecture. It is intended for developers who are already familiar with programming in CUDA C/C++ and want to ensure that their software applications are compatible with Fermi.

IMPORTANT NOTE:

Prior to the introduction of the Fermi architecture, all NVIDIA Tesla®-branded products were based on the Tesla architecture. For the purposes of this document, the term "Tesla" refers only to the GPU architecture and not to any particular NVIDIA product. Hereinafter, Tesla refers to devices of compute capability 1.x, and Fermi refers to devices of compute capability 2.0.

Software Requirements

1.1 Application Compatibility on Fermi

The NVIDIA CUDA C compiler, nvcc, can be used to generate both architecture-specific CUBIN files and forward-compatible PTX versions of each kernel. Applications that already include PTX versions of their kernels should work as-is on Fermi GPUs. Applications that only support specific GPU architectures via CUBIN files, however, must either provide a PTX version of their kernels that can be just-in-time (JIT) compiled for Fermi and future GPUs or be updated to include Fermi-specific CUBIN versions of their kernels. For this reason, to ensure forward compatibility with CUDA architectures introduced after an application has been released, it is recommended that all applications support launching PTX versions of their kernels.

Each CUBIN file targets a specific compute capability version and is forward-compatible only with CUDA architectures of the same major version number (e.g., CUBIN files that target compute capability 1.0 are supported on all compute-capability 1.x (Tesla) devices but are not supported on compute-capability 2.0 (Fermi) devices).

1.2 Verifying Fermi Compatibility for Existing Applications

1.2.1 My application uses the CUDA Runtime API with CUDA Toolkit 2.1, 2.2, or 2.3.

How can I confirm that my application is ready to run on Fermi?

Answer: CUDA applications built using the CUDA Toolkit versions 2.1 through 2.3 are compatible with Fermi as long as they are built to include PTX versions of their kernels. NVIDIA Driver versions 195.xx or newer allow the application to use the PTX JIT code path. To test that PTX JIT is working for your application, you can do the following:

  • Go to the NVIDIA website and install the latest R195 (or newer) driver.
  • Set the system environment flag CUDA_FORCE_PTX_JIT=1 (as shown in the example below).
  • Launch your application.
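For example, with myapp used here as a placeholder for your application's executable, the flag can be set for the current session as follows:

Windows:

set CUDA_FORCE_PTX_JIT=1
myapp.exe

Mac/Linux:

export CUDA_FORCE_PTX_JIT=1
./myapp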

When a CUDA application is started for the first time with the above environment flag set, the CUDA driver JIT compiles the PTX of each CUDA kernel that is used into native CUBIN code. The generated CUBIN for the target GPU architecture is cached by the CUDA driver, and this cache persists across system shutdowns and restarts.

If this test passes, then your application is ready for Fermi.

1.3 Building Applications with Fermi Support

1.3.1 My application is a CUDA Runtime API application.

What steps do I need to take to support Fermi?

Answer: The compilers included in the CUDA Toolkit 2.1, 2.2, and 2.3 generate CUBIN files native to the Tesla architecture. To allow support for Fermi and future architectures when using these versions of the CUDA Toolkit, the compiler can generate a PTX version of each kernel. By default, the PTX version is included in the executable and is available to be run on Fermi devices via just-in-time (JIT) compilation.

Beginning with version 3.0 of the CUDA Toolkit, nvcc can generate CUBIN files native to the Fermi architecture as well. When using the CUDA Toolkit 3.0 or later, to ensure that nvcc will generate CUBIN files for all released GPU architectures as well as a PTX version for future GPU architectures, specify the appropriate "-arch=sm_xx" parameter on the nvcc command line as shown below.

When a CUDA application launches a kernel, the CUDA Runtime library (CUDART) determines the compute capability of each GPU in the system and uses this information to find the best matching CUBIN or PTX version of the kernel. If a CUBIN file supporting the architecture of the GPU on which the application is launching the kernel is available, it is used; otherwise, the CUDA Runtime will load the PTX and JIT compile the PTX to the CUBIN format before launching it on the GPU.
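Although this selection is automatic, an application can confirm which architecture it is running on by querying the device's compute capability through the CUDA Runtime API. The following is a minimal sketch (error checking omitted; the Tesla/Fermi interpretation follows the naming used in this document):

#include <cuda_runtime.h>
#include <stdio.h>

int main()
{
    int dev = 0;
    cudaDeviceProp prop;

    // Query the properties of the current device, including its
    // compute capability (major.minor).
    cudaGetDevice(&dev);
    cudaGetDeviceProperties(&prop, dev);

    printf("Device %d: %s, compute capability %d.%d\n",
           dev, prop.name, prop.major, prop.minor);

    // Compute capability 2.0 is a Fermi-class device; 1.x is Tesla-class.
    if (prop.major >= 2)
        printf("Fermi-class GPU: a Fermi CUBIN or JIT-compiled PTX will be used.\n");
    else
        printf("Tesla-class GPU: a Tesla (sm_1x) CUBIN will be used.\n");

    return 0;
}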

Below are the compiler settings to build cuda_kernel.cu to run on Tesla devices natively and Fermi devices via PTX. The main advantage of providing the native code is to save the end user the time it takes to PTX JIT a CUDA kernel that has been compiled to PTX. However, since the CUDA driver will cache the native ISA generated as a result of the PTX JIT, this is mostly a one-time cost. There will still be some additional per-invocation overhead, as the CUDA runtime will need to check the architecture of the current GPU and explicitly call the best-available version of the CUDA kernel.

Windows:

nvcc.exe -ccbin "C:\vs2008\VC\bin" -I"C:\CUDA\include" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT" -arch=sm_10 --compile -o "Release\cuda_kernel.cu.obj" "cuda_kernel.cu"

Mac/Linux:

/usr/local/cuda/bin/nvcc -arch=sm_10 --compiler-options -fno-strict-aliasing -I. -I/usr/local/cuda/include -DUNIX -O2 -o release/cuda_kernel.cu.o -c cuda_kernel.cu

Note: the nvcc command-line option "-arch=sm_xx" is a shorthand equivalent to the following more explicit -gencode command-line options:

-gencode=arch=compute_xx,code=sm_xx
-gencode=arch=compute_xx,code=compute_xx

The -gencode options must be used instead of -arch if you want to compile CUBIN or PTX code for multiple target architectures, as shown below.

Alternatively, with version 3.0 of the CUDA Toolkit, the compiler can build cuda_kernel.cu to run on both Tesla devices and Fermi devices natively as shown below. This example also builds in forward-compatible PTX code.

Windows:

nvcc.exe -ccbin "C:\vs2008\VC\bin" -I"C:\CUDA\include" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT" -gencode=arch=compute_10,code=sm_10 -gencode=arch=compute_10,code=compute_10 -gencode=arch=compute_20,code=sm_20 -gencode=arch=compute_20,code=compute_20 --compile -o "Release\cuda_kernel.cu.obj" "cuda_kernel.cu"

Mac/Linux:

/usr/local/cuda/bin/nvcc -gencode=arch=compute_10,code=sm_10 -gencode=arch=compute_10,code=compute_10 -gencode=arch=compute_20,code=sm_20 -gencode=arch=compute_20,code=compute_20 --compiler-options -fno-strict-aliasing -I. -I/usr/local/cuda/include -DUNIX -O2 -o release/cuda_kernel.cu.o -c cuda_kernel.cu

Note the distinction in these command lines between the "code=sm_10" argument to -gencode, which generates CUBIN files for the specified compute capability, and the "code=compute_10" argument, which generates PTX for that compute capability.

1.3.2 My application is a CUDA Driver API application.

What steps do I need to take to support Fermi?

Answer: You have several options:

  • Compile CUDA kernel files to PTX. While CUBIN files can be generated using the compilers in the CUDA Toolkit 2.1 through 2.3, those CUBIN files are compatible only with Tesla devices, not Fermi devices.

Refer to the following GPU Computing SDK code samples for examples showing how to use the CUDA Driver API to launch PTX kernels:

  • matrixMulDrv
  • simpleTextureDrv
  • ptxjit

Use the compiler settings below to create PTX output files from your CUDA source files:

Windows:

nvcc.exe -ccbin "C:\vs2008\VC\bin" -I"C:\CUDA\include" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT" -ptx -o "cuda_kernel.ptx" "cuda_kernel.cu"

Mac/Linux:

/usr/local/cuda/bin/nvcc -ptx --compiler-options -fno-strict-aliasing -I. -I/usr/local/cuda/include -DUNIX -O2 -o cuda_kernel.ptx cuda_kernel.cu

  • Compile your CUDA kernels to both CUBIN and PTX output files, as shown in the example below. This must be done explicitly at compile time, since nvcc must be called once for each output file of either type.
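For example, native CUBIN files for Tesla and Fermi can be produced with separate nvcc invocations using the -cubin option, alongside the -ptx invocation shown above (the output file names are illustrative; the sm_20 build requires CUDA Toolkit 3.0 or later):

Mac/Linux:

/usr/local/cuda/bin/nvcc -cubin -arch=sm_10 --compiler-options -fno-strict-aliasing -I. -I/usr/local/cuda/include -DUNIX -O2 -o cuda_kernel_sm10.cubin cuda_kernel.cu
/usr/local/cuda/bin/nvcc -cubin -arch=sm_20 --compiler-options -fno-strict-aliasing -I. -I/usr/local/cuda/include -DUNIX -O2 -o cuda_kernel_sm20.cubin cuda_kernel.cu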

At runtime, your application will need to explicitly check the compute capability of the current GPU with the following CUDA Driver API function. Refer to the deviceQueryDrv code sample in the GPU Computing SDK for a detailed example of how to use this function.

cuDeviceComputeCapability(&major, &minor, dev)

Based on the major and minor version returned by this function, your application can choose the appropriate CUBIN or PTX version of each kernel.
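A minimal sketch of this selection logic follows; the CUBIN and PTX file names are illustrative, and initialization (cuInit and context creation) is omitted:

CUdevice dev;
CUmodule cuModule;
CUresult status;
int major = 0, minor = 0;
const char *module_file;

// Determine the compute capability of the first device.
cuDeviceGet(&dev, 0);
cuDeviceComputeCapability(&major, &minor, dev);

// Pick the best available version of the kernel for this GPU:
// a native CUBIN when one matches the device's major version,
// otherwise fall back to PTX, which the driver will JIT compile.
if (major == 2)
    module_file = "cuda_kernel_sm20.cubin";   // Fermi (compute capability 2.0)
else if (major == 1)
    module_file = "cuda_kernel_sm10.cubin";   // Tesla (compute capability 1.x)
else
    module_file = "cuda_kernel.ptx";          // future architectures

// cuModuleLoad accepts both CUBIN and PTX files; PTX can also be loaded
// from a string with cuModuleLoadDataEx, as shown in the next example.
status = cuModuleLoad(&cuModule, module_file);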

To load kernels that were compiled to PTX using the CUDA Driver API, you can use code as in the following example. Calling cuModuleLoadDataEx will JIT compile your PTX source files. (Note that there are a few JIT options that developers need to be aware of to properly compile their kernels.) The GPU Computing SDK samples matrixMulDrv and simpleTextureDrv further illustrate this process.

CUmodule cuModule;
CUfunction cuFunction = 0;
CUresult status;
string ptx_source;

// Helper function from the SDK samples that loads the PTX source
// file into a string.
findModulePath("matrixMul_kernel.ptx", module_path, argv, ptx_source);

// Specify PTX JIT compilation with parameters.
const unsigned int jitNumOptions = 3;
CUjit_option *jitOptions = new CUjit_option[jitNumOptions];
void **jitOptVals = new void*[jitNumOptions];

// Set up the size of the compilation log buffer.
jitOptions[0] = CU_JIT_INFO_LOG_BUFFER_SIZE_BYTES;
int jitLogBufferSize = 1024;
jitOptVals[0] = (void *)(size_t)jitLogBufferSize;

// Set up the pointer to the compilation log buffer.
jitOptions[1] = CU_JIT_INFO_LOG_BUFFER;
char *jitLogBuffer = new char[jitLogBufferSize];
jitOptVals[1] = jitLogBuffer;

// Set up the maximum number of registers per thread.
jitOptions[2] = CU_JIT_MAX_REGISTERS;
int jitRegCount = 32;
jitOptVals[2] = (void *)(size_t)jitRegCount;

// Loading the module forces the PTX to be JIT compiled.
status = cuModuleLoadDataEx(&cuModule, ptx_source.c_str(),
                            jitNumOptions, jitOptions,
                            (void **)jitOptVals);
printf("> PTX JIT log:\n%s\n", jitLogBuffer);

Appendix A. Revision History

A.1 Version 1.0

Initial public release.

Notice

ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.

Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent or patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all information previously supplied. NVIDIA Corporation products are not authorized for use as critical components in life support devices or systems without express written approval of NVIDIA Corporation.

Trademarks

NVIDIA, the NVIDIA logo, CUDA, GeForce, NVIDIA Quadro, and Tesla are trademarks or registered trademarks of NVIDIA Corporation. Other company and product names may be trademarks of the respective companies with which they are associated.

Copyright

© 2010 NVIDIA Corporation. All rights reserved.
