Using CUDA to run one thread on the GPU, why is the GPU load so high?

Here is my GPU information:

Device 0: "GeForce GT 440"
  CUDA Driver Version / Runtime Version          7.0 / 7.0
  CUDA Capability Major/Minor version number:    2.1
  Total amount of global memory:                 1536 MBytes (1610612736 bytes)
  ( 3) Multiprocessors, ( 48) CUDA Cores/MP:     144 CUDA Cores
  GPU Max Clock rate:                            1189 MHz (1.19 GHz)
  Memory Clock rate:                             800 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 393216 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (65535, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:

The CUDA code is very simple:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void kernel(float *d_data)
{
    *d_data = -1;
    *d_data = 1 / (*d_data);
    *d_data = (*d_data) / (*d_data);
}

int main()
{
    float *d_data;
    cudaMalloc(&d_data, sizeof(float));
    while (1)
        kernel<<<1, 1>>>(d_data);
    // Unreachable because of the infinite loop above
    float data;
    cudaMemcpy(&data, d_data, sizeof(float), cudaMemcpyDeviceToHost);
    printf("%f\n", data);
    return 0;
}

Then I ran the code, and the GPU load reported by GPU-Z was 99%!!

GPU-Z:http://www.techpowerup.com/gpuz/

Am I missing something? How should I understand GPU load?

GPU "load" is simply a measure of the fraction of time the GPU is busy over a total sampling interval.

So if your program runs for 1.0 second and kernels take 0.8 seconds of that to run, the GPU load over that interval is 80%. Since GPU-Z updates this number periodically, if your kernels are running for the entire update period, the GPU will appear approximately 100% busy.
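The arithmetic behind that 80% figure can be sketched as follows; this is just an illustration of the busy-time/interval ratio described above, not how GPU-Z actually samples the hardware:

```python
# Illustrative sketch: "GPU load" as the fraction of a sampling
# interval during which the GPU was busy executing kernels.
# The numbers below are the example values from the text above.
busy_time = 0.8   # seconds the GPU spent running kernels
interval = 1.0    # length of the sampling interval in seconds

load_percent = busy_time / interval * 100
print(f"GPU load: {load_percent:.0f}%")  # prints "GPU load: 80%"
```

With kernels launched back to back in an infinite loop, `busy_time` approaches `interval`, so the reported load approaches 100%.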

Because your code launches kernels continuously, a kernel is running essentially all the time, so the GPU load should be near 100%. What the kernel is doing doesn't matter: if a kernel is running, the GPU is busy, and that is how load is measured.