CUDA Pro Tip: Write Flexible Kernels with Grid-Stride Loops

https://devblogs.nvidia.com/cuda-pro-tip-write-flexible-kernels-grid-stride-loops/

 

One of the most common tasks in CUDA programming is to parallelize a loop using a kernel. As an example, let’s use our old friend SAXPY. Here’s the basic sequential implementation, which uses a for loop. To efficiently parallelize this, we need to launch enough threads to fully utilize the GPU.

void saxpy(int n, float a, float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

Common CUDA guidance is to launch one thread per data element, which means to parallelize the above SAXPY loop we write a kernel that assumes we have enough threads to more than cover the array size.

__global__
void saxpy(int n, float a, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) 
        y[i] = a * x[i] + y[i];
}

I’ll refer to this style of kernel as a monolithic kernel, because it assumes a single large grid of threads to process the entire array in one pass. You might use the following code to launch the saxpy kernel to process one million elements.

// Perform SAXPY on 1M elements
saxpy<<<4096,256>>>(1<<20, 2.0, x, y);
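
The grid size of 4096 comes from dividing the one million elements by the block size of 256. A minimal sketch of that calculation (the names N, blockSize and gridSize are just illustrative, not part of the original launch):

// Round up so the grid covers the whole array even when the element
// count is not a multiple of the block size; for 1<<20 this gives 4096.
int N = 1 << 20;
int blockSize = 256;
int gridSize = (N + blockSize - 1) / blockSize;   // = 4096 blocks
saxpy<<<gridSize, blockSize>>>(N, 2.0, x, y);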

Instead of completely eliminating the loop when parallelizing the computation, I recommend using a grid-stride loop, as in the following kernel.

__global__
void saxpy(int n, float a, float *x, float *y)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; 
         i < n; 
         i += blockDim.x * gridDim.x) 
      {
          y[i] = a * x[i] + y[i];
      }
}

Rather than assume that the thread grid is large enough to cover the entire data array, this kernel loops over the data array one grid-size at a time.

Notice that the stride of the loop is blockDim.x * gridDim.x, which is the total number of threads in the grid. So if there are 1280 threads in the grid, thread 0 will compute elements 0, 1280, 2560, etc. This is why I call this a grid-stride loop. By using a loop with stride equal to the grid size, we ensure that all addressing within warps is unit-stride, so we get maximum memory coalescing, just as in the monolithic version.

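To make the arithmetic concrete, here is a small host-side sketch (purely illustrative, assuming a grid of 5 blocks of 256 threads, i.e. 1280 threads) that prints the indices thread 0 of block 0 would visit:

#include <stdio.h>

int main(void)
{
    int n = 1 << 20;                        // number of elements
    int blockDimX = 256, gridDimX = 5;      // 1280 threads in the grid
    int stride = blockDimX * gridDimX;      // the grid stride
    for (int i = 0; i < n; i += stride)     // thread 0 starts at index 0
        printf("%d\n", i);                  // prints 0, 1280, 2560, ...
    return 0;
}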

When launched with a grid large enough to cover all iterations of the loop, the grid-stride loop should have essentially the same instruction cost as the if statement in the monolithic kernel, because the loop increment will only be evaluated when the loop condition evaluates to true.

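For example, when the grid-stride kernel is launched with the same configuration used for the monolithic version, every thread’s starting index already falls within the array, so each thread executes at most one iteration of the loop:

// Same launch as the monolithic version: 4096 blocks of 256 threads
// cover all 1M elements, so each thread runs the loop body at most once.
saxpy<<<4096, 256>>>(1 << 20, 2.0, x, y);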

 

There are several benefits to using a grid-stride loop.

1. Scalability and thread reuse. By using a loop, you can support any problem size even if it exceeds the largest grid size your CUDA device supports. Moreover, you can limit the number of blocks you use to tune performance. For example, it’s often useful to launch a number of blocks that is a multiple of the number of multiprocessors on the device, to balance utilization. As an example, we might launch the loop version of the kernel like this.

int numSMs;
cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, devId);
// Perform SAXPY on 1M elements
saxpy<<<32*numSMs, 256>>>(1 << 20, 2.0, x, y);
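
For completeness, here is a minimal host-side sketch of driving this launch end to end (not from the original post: it assumes the grid-stride saxpy kernel above, uses managed memory for brevity, and omits error checking):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int N = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, N * sizeof(float));   // unified memory for brevity
    cudaMallocManaged(&y, N * sizeof(float));
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int devId = 0, numSMs;
    cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, devId);

    // Grid-stride kernel: any grid size works; 32 blocks per SM as above.
    saxpy<<<32 * numSMs, 256>>>(N, 2.0, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                // expect 4.0 = 2*1 + 2
    cudaFree(x);
    cudaFree(y);
    return 0;
}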

 
