Python Bindings for the NVIDIA Management Library
Provides a Python interface to GPU management and monitoring functions.
This is a wrapper around the NVML library. For information about the NVML library, see the NVML developer page http://developer.nvidia.com/nvidia-management-library-nvml
As of version 11.0.0, the NVML-wrappers used in pynvml are identical to those published through nvidia-ml-py.
Note that this file can be run with 'python -m doctest -v README.txt', although the results are system-dependent.
Requires Python 3, or an earlier version of Python with the ctypes module.
Install from a source checkout with:
pip install .
You can use the lower-level nvml bindings:
>>> from pynvml import *
>>> nvmlInit()
>>> print("Driver Version:", nvmlSystemGetDriverVersion())
Driver Version: 410.00
>>> deviceCount = nvmlDeviceGetCount()
>>> for i in range(deviceCount):
...     handle = nvmlDeviceGetHandleByIndex(i)
...     print("Device", i, ":", nvmlDeviceGetName(handle))
...
Device 0 : Tesla V100
>>> nvmlShutdown()
Or the higher-level nvidia_smi API:
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
print(nvsmi.DeviceQuery('--help-query-gpu'), end='\n')
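DeviceQuery returns a nested dict rather than printed text. The exact shape sketched below (a 'gpu' list whose entries carry an 'fb_memory_usage' mapping) is an assumption based on typical nvidia-smi output, and total_free_mib is a hypothetical helper, not part of pynvml:

```python
def total_free_mib(query_result):
    # query_result: dict assumed to be shaped like the output of
    # nvsmi.DeviceQuery('memory.free, memory.total')
    gpus = query_result.get("gpu", [])
    return sum(g["fb_memory_usage"]["free"] for g in gpus)

# Hand-built result in the assumed shape, for illustration only:
sample = {"gpu": [{"fb_memory_usage": {"free": 30000, "total": 32768, "unit": "MiB"}}]}
print(total_free_mib(sample))  # -> 30000
```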
Python methods wrap NVML functions, implemented in a C shared library. Each function's use is the same with the following exceptions:
Instead of returning error codes, failing calls raise Python exceptions.
>>> try:
...     nvmlDeviceGetCount()
... except NVMLError as error:
...     print(error)
...
Uninitialized
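Because failures surface as exceptions, nvmlShutdown() should run even when a query raises partway through. One way to guarantee that is a try/finally wrapper; the nvml_session helper below is an illustrative sketch, not part of pynvml (pass pynvml's nvmlInit and nvmlShutdown as the two callables):

```python
from contextlib import contextmanager

@contextmanager
def nvml_session(init, shutdown):
    """Hypothetical helper: run init, yield, and guarantee shutdown.

    Intended use: with nvml_session(nvmlInit, nvmlShutdown): ...
    """
    init()
    try:
        yield
    finally:
        shutdown()  # runs even if the body raised
```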
C function output parameters are returned from the corresponding Python function, in left-to-right order.
nvmlReturn_t nvmlDeviceGetEccMode(nvmlDevice_t device,
nvmlEnableState_t *current,
nvmlEnableState_t *pending);
>>> nvmlInit()
>>> handle = nvmlDeviceGetHandleByIndex(0)
>>> (current, pending) = nvmlDeviceGetEccMode(handle)
C structs are converted into Python classes.
nvmlReturn_t DECLDIR nvmlDeviceGetMemoryInfo(nvmlDevice_t device,
nvmlMemory_t *memory);
typedef struct nvmlMemory_st {
unsigned long long total;
unsigned long long free;
unsigned long long used;
} nvmlMemory_t;
>>> info = nvmlDeviceGetMemoryInfo(handle)
>>> print("Total memory:", info.total)
Total memory: 5636292608
>>> print("Free memory:", info.free)
Free memory: 5578420224
>>> print("Used memory:", info.used)
Used memory: 57872384
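The three fields are plain integers (bytes), so derived metrics are simple arithmetic. For example, a hypothetical helper (not part of pynvml) computing the used fraction from the fields shown above:

```python
def used_fraction(total, used):
    # total and used mirror nvmlMemory_t.total and nvmlMemory_t.used (bytes)
    return used / total

# Using the example values from the doctest above:
frac = used_fraction(5636292608, 57872384)
print(round(frac, 4))  # -> 0.0103
```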
Python handles string buffer creation.
nvmlReturn_t nvmlSystemGetDriverVersion(char* version,
unsigned int length);
>>> version = nvmlSystemGetDriverVersion()
>>> nvmlShutdown()
For usage information see the NVML documentation.
All meaningful NVML constants and enums are exposed in Python.
The NVML_VALUE_NOT_AVAILABLE constant is not used. Instead, None is returned for any field whose value is not available.
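Since a field can come back as None, display code needs a guard. A minimal illustrative formatter (show is a hypothetical helper, not part of pynvml):

```python
def show(value, unit="B"):
    # None corresponds to NVML_VALUE_NOT_AVAILABLE in the C API
    if value is None:
        return "N/A"
    return f"{value} {unit}"

print(show(None))        # -> N/A
print(show(5636292608))  # -> 5636292608 B
```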
Many of the pynvml wrappers assume that the underlying NVIDIA Management Library (NVML) API can be used without admin/root privileges. However, system permissions may prevent pynvml from querying GPU performance counters. For example:
$ nvidia-smi nvlink -g 0
GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-96ab329d-7a1f-73a8-a9b7-18b4b2855f92)
NVML: Unable to get the NvLink link utilization counter control for link 0: Insufficient Permissions
A simple way to check the permissions status is to look for RmProfilingAdminOnly in the driver params file (note that RmProfilingAdminOnly == 1 means that admin/sudo access is required):
$ cat /proc/driver/nvidia/params | grep RmProfilingAdminOnly
RmProfilingAdminOnly: 1
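The same check can be scripted. A sketch that parses the params text, assuming the "Key: value" format shown above (profiling_admin_only is a hypothetical helper, not part of pynvml):

```python
def profiling_admin_only(params_text):
    # Returns True if RmProfilingAdminOnly is set to 1 in the given
    # /proc/driver/nvidia/params contents.
    for line in params_text.splitlines():
        if line.startswith("RmProfilingAdminOnly"):
            _, _, value = line.partition(":")
            return value.strip() == "1"
    return False

sample = "RmProfilingAdminOnly: 1"
print(profiling_admin_only(sample))  # -> True
```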
For more information on setting/unsetting the relevant admin privileges, see NVIDIA's notes on resolving ERR_NVGPUCTRPERM errors.