CUDA Programming

CUDA Programming - Fundamentals (Part 1)

1. Introduction

The Graphics Processing Unit (GPU) provides much higher instruction throughput and memory bandwidth than the CPU within a similar price and power envelope, and many applications exploit this to run faster on the GPU than on the CPU. Other compute devices such as FPGAs are also very energy-efficient, but offer far less programming flexibility than GPUs. The performance gap between GPU and CPU comes from their different design goals: a CPU is designed to execute a sequence of operations, called a thread, as fast as possible and can run a few tens of such threads in parallel, whereas a GPU is designed to run thousands of threads in parallel, trading off slow single-thread performance for much higher overall throughput. GPUs are therefore specialized for highly parallel computation, and CUDA programming is about learning how to use this GPU parallelism to make your own code run faster.

2. GPU Architecture

2.1. Hardware

Compared with a CPU, most of a GPU's area is devoted to ALUs (arithmetic and logic units) that do the actual data processing, shown in green in the figure below. A CPU is better suited to serial work and can run a few tens of threads at once, while a GPU is designed to run thousands of threads in parallel, so it does not carry the CPU's deep cache hierarchy and large control units. The figure below shows the difference:

(Figure: The GPU Devotes More Transistors to Data Processing.)

A GPU consists of an array of SMs (Streaming Multiprocessors), and each SM in turn consists of several SPs (Streaming Processors) plus other resources such as shared memory and registers. The SP is the most basic execution unit; each SP executes one thread. How many SPs an SM contains depends on the architecture: 32 on the Fermi GF100, 48 on GF10x, 192 on Kepler, and 128 on Maxwell. Within the same architecture, the number of SMs scales with the GPU's market tier. Below is a diagram of a GPU made up of 8 SMs:

(Figure: a GPU composed of 8 SMs)
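
If you want to see this number for the card you actually have, the SM count can be read from the runtime API. Below is a minimal sketch (device 0 is assumed); it only reads fields documented in cudaDeviceProp:

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    // Query the properties of device 0 and print its SM count.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s has %d SMs\n", prop.name, prop.multiProcessorCount);
    return 0;
}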

2.2. Software

  • thread: a GPU thread, analogous to the CPU notion of a thread; it is the basic unit that lets the GPU work on large amounts of data (e.g. matrix operations) at the same time
  • block: a thread block, a group of threads that can be laid out in one, two, or three dimensions
  • grid: a group of blocks, likewise one-, two-, or three-dimensional; when code launches work on the GPU, it is issued one grid at a time

When a CPU program launches work on the GPU, the blocks of the grid are distributed to the SMs according to the GPU's available capacity. The figure below shows the relationship between SMs and blocks:

(Figure: Automatic Scalability.)

3. Kernels

Simply put, a kernel is the function you call to run computation in parallel on the GPU. CUDA C++ extends C++ by letting the programmer define such functions, called kernels, which, when called, are executed N times in parallel by N different CUDA threads, rather than only once like a regular C++ function. In effect, the kernel body describes the work done by each individual thread.

A kernel is defined using the __global__ declaration specifier, and the number of CUDA threads that execute it for a given call is specified using the <<<...>>> execution-configuration syntax (see the C++ Language Extensions chapter). Each thread that executes the kernel is given a unique thread ID, accessible inside the kernel through built-in variables (how the thread ID is used is described in the Thread Hierarchy section).

As an illustration, the following sample code uses the built-in thread index to add two vectors A and B of size N and store the result into vector C:

// Kernel definition
__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    ...
    // Kernel invocation with N threads
    VecAdd<<<1, N>>>(A, B, C);
    ...
}
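
For completeness, here is a minimal host-side sketch of the elided main(): allocate device buffers, copy the inputs over, launch the kernel, and copy the result back. The buffer names and the value of N are illustrative assumptions, and error checking is omitted.

#include <cuda_runtime.h>

#define N 256

__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    // Host data (illustrative values)
    float h_A[N], h_B[N], h_C[N];
    for (int i = 0; i < N; ++i) { h_A[i] = i; h_B[i] = 2.0f * i; }

    float *d_A, *d_B, *d_C;
    cudaMalloc((void**)&d_A, N * sizeof(float));
    cudaMalloc((void**)&d_B, N * sizeof(float));
    cudaMalloc((void**)&d_C, N * sizeof(float));

    // 1. host -> device
    cudaMemcpy(d_A, h_A, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, N * sizeof(float), cudaMemcpyHostToDevice);

    // 2. launch one block of N threads
    VecAdd<<<1, N>>>(d_A, d_B, d_C);

    // 3. device -> host
    cudaMemcpy(h_C, d_C, N * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    return 0;
}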

4. Thread Hierarchy

For convenience, the thread index is a 3-component vector, so threads can be identified with a one-, two-, or three-dimensional index, forming one-, two-, or three-dimensional thread blocks.

The relationship between a thread's index and its thread ID is as follows: for a one-dimensional block they are the same; for a two-dimensional block of size (Dx, Dy), the thread with index (x, y) has thread ID (x + y·Dx); for a three-dimensional block of size (Dx, Dy, Dz), the thread with index (x, y, z) has thread ID (x + y·Dx + z·Dx·Dy). For example, in a (4, 3) block the thread with index (2, 1) has ID 2 + 1·4 = 6.

For example, the following code adds two matrices A and B of size N×N and stores the result into matrix C:

// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    // Both x and y are used here because threadsPerBlock below is two-dimensional
    int i = threadIdx.x;
    int j = threadIdx.y;
    C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation with one block of N * N * 1 threads
    int numBlocks = 1;
    dim3 threadsPerBlock(N, N);
    MatAdd <<< numBlocks, threadsPerBlock >>> (A, B, C);
    ...
}

If the above is still unclear, here is a concrete example. Since threadIdx can be one-, two-, or three-dimensional, consider the case where both the blocks and the threads are two-dimensional: each grid has 6 blocks and each block has 8 threads. How do we work out each thread's coordinates and its global index?

To understand the global linear index formula, just plug in a concrete value. For the thread at coordinate (1, 1), idx = 1*8 + 1 = 9, i.e. the thread numbered 9 when threads are counted in row-major order; the 8 here is the number of threads per row, which in code is blockDim.x. This is the case of two-dimensional threads inside two-dimensional blocks; in the same way, gridDim.x, gridDim.y and gridDim.z give the number of blocks along the x, y and z dimensions of the grid.

To make this concrete, here is sample code covering every combination (reproduced from https://blog.csdn.net/Felaim/article/details/104119229/):

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <stdio.h>
#include <stdlib.h>
#include <iostream>

using namespace std;

//thread 1D
__global__ void testThread1(int *c, const int *a, const int *b)
{
    int i = threadIdx.x;
    c[i] = b[i] - a[i];
}

//thread 2D
__global__ void testThread2(int *c, const int *a, const int *b)
{
    int i = threadIdx.x + threadIdx.y*blockDim.x;
    c[i] = b[i] - a[i];
}

//thread 3D
__global__ void testThread3(int *c, const int *a, const int *b)
{
    int i = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    c[i] = b[i] - a[i];
}

//block 1D
__global__ void testBlock1(int *c, const int *a, const int *b)
{
    int i = blockIdx.x;
    c[i] = b[i] - a[i];
}

//block 2D
__global__ void testBlock2(int *c, const int *a, const int *b)
{
    int i = blockIdx.x + blockIdx.y*gridDim.x;
    c[i] = b[i] - a[i];
}

//block 3D
__global__ void testBlock3(int *c, const int *a, const int *b)
{
    int i = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    c[i] = b[i] - a[i];
}

//block-thread 1D-1D
__global__ void testBlockThread1(int *c, const int *a, const int *b)
{
    int i = threadIdx.x + blockDim.x*blockIdx.x;
    c[i] = b[i] - a[i];
}

//block-thread 1D-2D
__global__ void testBlockThread2(int *c, const int *a, const int *b)
{
    int threadId_2D = threadIdx.x + threadIdx.y*blockDim.x;
    int i = threadId_2D+ (blockDim.x*blockDim.y)*blockIdx.x;
    c[i] = b[i] - a[i];
}

//block-thread 1D-3D
__global__ void testBlockThread3(int *c, const int *a, const int *b)
{
    int threadId_3D = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    int i = threadId_3D + (blockDim.x*blockDim.y*blockDim.z)*blockIdx.x;
    c[i] = b[i] - a[i];
}

//block-thread 2D-1D
__global__ void testBlockThread4(int *c, const int *a, const int *b)
{
    int blockId_2D = blockIdx.x + blockIdx.y*gridDim.x;
    int i = threadIdx.x + blockDim.x*blockId_2D;
    c[i] = b[i] - a[i];
}

//block-thread 3D-1D
__global__ void testBlockThread5(int *c, const int *a, const int *b)
{
    int blockId_3D = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    int i = threadIdx.x + blockDim.x*blockId_3D;
    c[i] = b[i] - a[i];
}

//block-thread 2D-2D
__global__ void testBlockThread6(int *c, const int *a, const int *b)
{
    int threadId_2D = threadIdx.x + threadIdx.y*blockDim.x;
    int blockId_2D = blockIdx.x + blockIdx.y*gridDim.x;
    int i = threadId_2D + (blockDim.x*blockDim.y)*blockId_2D;
    c[i] = b[i] - a[i];
}

//block-thread 2D-3D
__global__ void testBlockThread7(int *c, const int *a, const int *b)
{
    int threadId_3D = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    int blockId_2D = blockIdx.x + blockIdx.y*gridDim.x;
    int i = threadId_3D + (blockDim.x*blockDim.y*blockDim.z)*blockId_2D;
    c[i] = b[i] - a[i];
}

//block-thread 3D-2D
__global__ void testBlockThread8(int *c, const int *a, const int *b)
{
    int threadId_2D = threadIdx.x + threadIdx.y*blockDim.x;
    int blockId_3D = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    int i = threadId_2D + (blockDim.x*blockDim.y)*blockId_3D;
    c[i] = b[i] - a[i];
}

//block-thread 3D-3D
__global__ void testBlockThread9(int *c, const int *a, const int *b)
{
    int threadId_3D = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    int blockId_3D = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    int i = threadId_3D + (blockDim.x*blockDim.y*blockDim.z)*blockId_3D;
    c[i] = b[i] - a[i];
}


void addWithCuda(int *c, const int *a, const int *b, unsigned int size)
{
    int *dev_a = 0;
    int *dev_b = 0;
    int *dev_c = 0;

    cudaSetDevice(0);

    cudaMalloc((void**)&dev_c, size * sizeof(int));
    cudaMalloc((void**)&dev_a, size * sizeof(int));
    cudaMalloc((void**)&dev_b, size * sizeof(int));

    cudaMemcpy(dev_a, a, size * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, size * sizeof(int), cudaMemcpyHostToDevice);

    //testThread1<<<1, size>>>(dev_c, dev_a, dev_b);

    //uint3 s;s.x = size/5;s.y = 5;s.z = 1;
    //testThread2 <<<1,s>>>(dev_c, dev_a, dev_b);

    //uint3 s; s.x = size / 10; s.y = 5; s.z = 2;
    //testThread3<<<1, s >>>(dev_c, dev_a, dev_b);

    //testBlock1<<<size,1 >>>(dev_c, dev_a, dev_b);

    //uint3 s; s.x = size / 5; s.y = 5; s.z = 1;
    //testBlock2<<<s, 1 >>>(dev_c, dev_a, dev_b);

    //uint3 s; s.x = size / 10; s.y = 5; s.z = 2;
    //testBlock3<<<s, 1 >>>(dev_c, dev_a, dev_b);

    //testBlockThread1<<<size/10, 10>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 1; s1.z = 1;
    //uint3 s2; s2.x = 10; s2.y = 10; s2.z = 1;
    //testBlockThread2 << <s1, s2 >> >(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 1; s1.z = 1;
    //uint3 s2; s2.x = 10; s2.y = 5; s2.z = 2;
    //testBlockThread3 << <s1, s2 >> >(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = 10; s1.y = 10; s1.z = 1;
    //uint3 s2; s2.x = size / 100; s2.y = 1; s2.z = 1;
    //testBlockThread4 << <s1, s2 >> >(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = 10; s1.y = 5; s1.z = 2;
    //uint3 s2; s2.x = size / 100; s2.y = 1; s2.z = 1;
    //testBlockThread5 << <s1, s2 >> >(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 10; s1.z = 1;
    //uint3 s2; s2.x = 5; s2.y = 2; s2.z = 1;
    //testBlockThread6 << <s1, s2 >> >(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 5; s1.z = 1;
    //uint3 s2; s2.x = 5; s2.y = 2; s2.z = 2;
    //testBlockThread7 << <s1, s2 >> >(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = 5; s1.y = 2; s1.z = 2;
    //uint3 s2; s2.x = size / 100; s2.y = 5; s2.z = 1;
    //testBlockThread8 <<<s1, s2 >>>(dev_c, dev_a, dev_b);

    uint3 s1; s1.x = 5; s1.y = 2; s1.z = 2;
    uint3 s2; s2.x = size / 200; s2.y = 5; s2.z = 2;
    testBlockThread9<<<s1, s2 >>>(dev_c, dev_a, dev_b);

    cudaMemcpy(c, dev_c, size*sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);

    cudaGetLastError();
}


int main()
{
    const int n = 1000;

    int *a = new int[n];
    int *b = new int[n];
    int *c = new int[n];
    int *cc = new int[n];

    for (int i = 0; i < n; i++)
    {
        a[i] = rand() % 100;
        b[i] = rand() % 100;
        c[i] = b[i] - a[i];
    }

    addWithCuda(cc, a, b, n);

    FILE *fp = fopen("out.txt", "w");
    for (int i = 0; i < n; i++)
        fprintf(fp, "%d %d\n", c[i], cc[i]);
    fclose(fp);

    bool flag = true;
    for (int i = 0; i < n; i++)
    {
        if (c[i] != cc[i])
        {
            flag = false;
            break;
        }
    }

    if (flag == false)
        printf("no pass");
    else
        printf("pass");

    cudaDeviceReset();

    delete[] a;
    delete[] b;
    delete[] c;
    delete[] cc;

    getchar();
    return 0;
}

There is a limit to the number of threads per block, because all threads of a block are expected to reside on the same SM and must share that SM's limited memory resources. The exact limit differs between GPUs; on current GPUs a thread block may contain at most 1024 threads.

However, a kernel can be executed by multiple thread blocks of the same shape, so the total number of threads equals the number of threads per block times the number of blocks.

The block size in each grid should be set to a multiple of 32, because the hardware runs threads in groups of 32 called warps: if a program asks for 50 threads in a block, the runtime still schedules 2*32 = 64 thread slots. The maximum grid and block sizes depend on the compute capability, which in turn depends on the GPU model; for example, the RTX 2080 Ti has compute capability 7.5, at most 128 grids can be resident on a device at once, and a grid can have up to 2^31 - 1 blocks along its x-dimension and up to 65535 along its y- and z-dimensions.
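
As a small sketch of the usual launch-configuration arithmetic (the names n, blockSize and gridSize are illustrative, not from any particular API): pick a block size that is a multiple of the 32-thread warp, then round the grid size up so that gridSize * blockSize covers all n elements.

#include <cstdio>

int main()
{
    const int n = 50;          // number of elements to process
    const int warpSize = 32;   // threads per warp on current GPUs

    int blockSize = 64;                              // a multiple of warpSize
    int gridSize  = (n + blockSize - 1) / blockSize; // ceiling division

    // Even if only 50 threads are requested in one block, the hardware
    // schedules whole warps: ceil(50 / 32) = 2 warps = 64 thread slots.
    int warps = (n + warpSize - 1) / warpSize;

    printf("gridSize=%d blockSize=%d warps=%d\n", gridSize, blockSize, warps);
    return 0;
}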

Lookup link for CUDA compute capabilities and the maximum grid/block sizes
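
Instead of the lookup table, the same limits can also be queried at run time; the sketch below assumes device 0 and only reads fields documented in cudaDeviceProp:

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("compute capability: %d.%d\n", prop.major, prop.minor);
    printf("warp size         : %d\n", prop.warpSize);
    printf("max threads/block : %d\n", prop.maxThreadsPerBlock);
    printf("max block dims    : %d x %d x %d\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    printf("max grid dims     : %d x %d x %d\n",
           prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
    return 0;
}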

The relationship between grid, block and thread is illustrated by the example below:

Example: add matrix A and matrix B element-wise and store the result into matrix C, with both the blocks and the threads two-dimensional:

// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}
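
The main() above elides the device-side setup; a minimal sketch is given below. Since the kernel takes statically sized 2D arrays, the device buffers are declared as pointers to float[N]; note that the grid size is rounded up here so N need not be a multiple of 16 (a slight variation on the plain division used above). Data initialization and error handling are omitted.

#include <cuda_runtime.h>

#define N 64

__global__ void MatAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    float (*d_A)[N], (*d_B)[N], (*d_C)[N];
    cudaMalloc((void**)&d_A, N * N * sizeof(float));
    cudaMalloc((void**)&d_B, N * N * sizeof(float));
    cudaMalloc((void**)&d_C, N * N * sizeof(float));

    // ... fill d_A and d_B with cudaMemcpy from host arrays ...

    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks((N + threadsPerBlock.x - 1) / threadsPerBlock.x,
                   (N + threadsPerBlock.y - 1) / threadsPerBlock.y);
    MatAdd<<<numBlocks, threadsPerBlock>>>(d_A, d_B, d_C);

    // ... copy d_C back to the host with cudaMemcpy ...

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    return 0;
}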

5. Heterogeneous Programming

As shown in the figure below, the CUDA programming model assumes that CUDA threads execute on a physically separate device; this is the case, for example, when the kernels run on a GPU while the rest of the C++ program runs on the CPU.

  • Execution steps of a CUDA program:
    1. Transfer the data from host (CPU) memory to device (GPU) memory
    2. The device (GPU) performs the computation and keeps the result in device (GPU) memory
    3. Copy the result from device (GPU) memory back to host (CPU) memory

The CUDA programming model also assumes that the host and the device each maintain their own separate memory space in DRAM, referred to as host memory and device memory. A program therefore manages the global, constant and texture memory spaces visible to kernels through calls to the CUDA runtime API, which includes allocating and freeing device memory as well as transferring data between host and device memory.

(Figure: Heterogeneous Programming.)

6. Streams

When calling CUDA kernel functions from C++ code you will often come across a stream parameter. A stream is a queue of GPU work: tasks within one stream execute in order, while different streams can execute concurrently. Streams also have the following two properties:

  • data copies and kernel computation can run at the same time
  • copies in the two directions, host-to-device and device-to-host, can run at the same time

Thanks to these properties, processing with streams achieves higher throughput than processing without them, as shown below (reference link):

(Figure: throughput comparison with and without streams)

The following code creates two streams and allocates a float buffer in page-locked memory; page-locked (pinned) memory speeds up data transfers between the CPU and the GPU:

cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
    cudaStreamCreate(&stream[i]);
float* hostPtr;
cudaMallocHost(&hostPtr, 2 * size);

The next snippet shows what happens inside each stream:

1. cudaMemcpyAsync copies a slice of the input array hostPtr from the host to inputDevPtr on the device
2. the kernel MyKernel is launched with 100 blocks of 512 threads each; the third launch parameter is the amount of dynamically allocated shared memory per block (0 by default), and stream[i] selects which stream the work is issued to
3. cudaMemcpyAsync copies the results from outputDevPtr on the device back to hostPtr on the host

for (int i = 0; i < 2; ++i) {
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel <<<100, 512, 0, stream[i]>>>
          (outputDevPtr + i * size, inputDevPtr + i * size, size);
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
}
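
To finish the example, the host usually waits for both streams and then releases them along with the pinned buffer; a short sketch using the same names as above:

for (int i = 0; i < 2; ++i) {
    cudaStreamSynchronize(stream[i]);  // block until all work in stream i is done
    cudaStreamDestroy(stream[i]);
}
cudaFreeHost(hostPtr);                 // pinned memory from cudaMallocHost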

Here is a fairly typical pair of programs that compares the two approaches; moving the work into a stream gives a clear performance improvement (reference link):

Processing directly, without streams


#include "cuda_runtime.h"  
#include <iostream>
#include <stdio.h>  
#include <math.h>  
 
#define N (1024*1024)  
#define FULL_DATA_SIZE N*20  
 
__global__ void kernel(int* a, int *b, int*c)
{
	int threadID = blockIdx.x * blockDim.x + threadIdx.x;
 
	if (threadID < N)
	{
		c[threadID] = (a[threadID] + b[threadID]) / 2;
	}
}
 
int main()
{
	// start the timer
	cudaEvent_t start, stop;
	float elapsedTime;
	cudaEventCreate(&start);
	cudaEventCreate(&stop);
	cudaEventRecord(start, 0);
 
	int *host_a, *host_b, *host_c;
	int *dev_a, *dev_b, *dev_c;
 
	// allocate device memory
	cudaMalloc((void**)&dev_a, FULL_DATA_SIZE * sizeof(int));
	cudaMalloc((void**)&dev_b, FULL_DATA_SIZE * sizeof(int));
	cudaMalloc((void**)&dev_c, FULL_DATA_SIZE * sizeof(int));
 
	// allocate pageable host memory
	host_a = (int*)malloc(FULL_DATA_SIZE * sizeof(int));
	host_b = (int*)malloc(FULL_DATA_SIZE * sizeof(int));
	host_c = (int*)malloc(FULL_DATA_SIZE * sizeof(int));
 
	// initialize host data
	for (int i = 0; i < FULL_DATA_SIZE; i++)
	{
		host_a[i] = i;
		host_b[i] = FULL_DATA_SIZE - i;
	}
 
	// copy data from host to device
	cudaMemcpy(dev_a, host_a, FULL_DATA_SIZE * sizeof(int), cudaMemcpyHostToDevice);
	cudaMemcpy(dev_b, host_b, FULL_DATA_SIZE * sizeof(int), cudaMemcpyHostToDevice);
 
	kernel << <FULL_DATA_SIZE / 1024, 1024 >> > (dev_a, dev_b, dev_c);
 
	// copy the result back to the host
	cudaMemcpy(host_c, dev_c, FULL_DATA_SIZE * sizeof(int), cudaMemcpyDeviceToHost);
 
	// stop the timer
	cudaEventRecord(stop, 0);
	cudaEventSynchronize(stop);
	cudaEventElapsedTime(&elapsedTime, start, stop);
 
	std::cout << "消耗时间: " << elapsedTime << std::endl;
 
	// print the first 10 results
	for (int i = 0; i < 10; i++)
	{
		std::cout << host_c[i] << std::endl;
	}
 
	getchar();
 
	// host buffers were allocated with malloc, so release them with free
	free(host_a);
	free(host_b);
	free(host_c);
 
	cudaFree(dev_a);
	cudaFree(dev_b);
	cudaFree(dev_c);
 
	return 0;
}

Processing with a stream

#include "cuda_runtime.h"  
#include <iostream>
#include <stdio.h>  
#include <math.h>  
 
#define N (1024*1024)  
#define FULL_DATA_SIZE N*20  
 
__global__ void kernel(int* a, int *b, int*c)
{
	int threadID = blockIdx.x * blockDim.x + threadIdx.x;
 
	if (threadID < N)
	{
		c[threadID] = (a[threadID] + b[threadID]) / 2;
	}
}
 
int main()
{
	// query the device properties
	cudaDeviceProp prop;
	int deviceID;
	cudaGetDevice(&deviceID);
	cudaGetDeviceProperties(&prop, deviceID);
 
	// check whether the device supports copy/compute overlap
	if (!prop.deviceOverlap)
	{
		printf("No device will handle overlaps. so no speed up from stream.\n");
		return 0;
	}
 
	// start the timer
	cudaEvent_t start, stop;
	float elapsedTime;
	cudaEventCreate(&start);
	cudaEventCreate(&stop);
	cudaEventRecord(start, 0);
 
	// create a CUDA stream
	cudaStream_t stream;
	cudaStreamCreate(&stream);
 
	int *host_a, *host_b, *host_c;
	int *dev_a, *dev_b, *dev_c;
 
	// allocate device memory
	cudaMalloc((void**)&dev_a, N * sizeof(int));
	cudaMalloc((void**)&dev_b, N * sizeof(int));
	cudaMalloc((void**)&dev_c, N * sizeof(int));
 
	// allocate page-locked (pinned) host memory
	cudaHostAlloc((void**)&host_a, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault);
	cudaHostAlloc((void**)&host_b, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault);
	cudaHostAlloc((void**)&host_c, FULL_DATA_SIZE * sizeof(int), cudaHostAllocDefault);
 
	// initialize host data
	for (int i = 0; i < FULL_DATA_SIZE; i++)
	{
		host_a[i] = i;
		host_b[i] = FULL_DATA_SIZE - i;
	}
 
	for (int i = 0; i < FULL_DATA_SIZE; i += N)
	{
		cudaMemcpyAsync(dev_a, host_a + i, N * sizeof(int), cudaMemcpyHostToDevice, stream);
		cudaMemcpyAsync(dev_b, host_b + i, N * sizeof(int), cudaMemcpyHostToDevice, stream);
 
		kernel << <N / 1024, 1024, 0, stream >> > (dev_a, dev_b, dev_c);
 
		cudaMemcpyAsync(host_c + i, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost, stream);
	}
 
	// wait until gpu execution finish  
	cudaStreamSynchronize(stream);
 
	cudaEventRecord(stop, 0);
	cudaEventSynchronize(stop);
	cudaEventElapsedTime(&elapsedTime, start, stop);
 
	std::cout << "消耗时间: " << elapsedTime << std::endl;
 
	// print the first 10 results
	for (int i = 0; i < 10; i++)
	{
		std::cout << host_c[i] << std::endl;
	}
 
	getchar();
 
	// free stream and mem  
	cudaFreeHost(host_a);
	cudaFreeHost(host_b);
	cudaFreeHost(host_c);
 
	cudaFree(dev_a);
	cudaFree(dev_b);
	cudaFree(dev_c);
 
	cudaStreamDestroy(stream);
	return 0;
}

7. Summary

This article covered the core ideas behind CUDA programming and the concepts and operations you will meet most often, such as streams and computing indices from threadIdx. In real projects these are combined with many application-specific techniques; for the full details see the official documentation (linked in the Introduction). The next article will put this into practice by using CUDA to implement the DCNv2 plugin for TensorRT, so stay tuned.
