Apple's OpenCL: Local Memory Revisited (repost)

http://blog.csdn.net/zenny_chen/article/details/6159746

In OpenCL, a variable qualified with __local (or local) is placed in the shared memory region of a compute unit (Compute Unit). On NVIDIA GPUs, a compute unit maps to a physical SM (Streaming Multiprocessor); on AMD/ATI GPUs it maps to a physical SIMD engine. Either way, each compute unit has an on-chip shared memory that is shared by all of the threads (work items, in OpenCL terms) running on that unit. Within one compute unit, therefore, this local shared memory can be used to synchronize all of the unit's work items.
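As a quick illustration of this point (a minimal sketch of my own, not part of the original post): each work group gets its own copy of any __local variable, and barrier() is the standard way to synchronize the work items of one group around it. The hypothetical kernel below assumes a work-group size of at most 32:

    // Hypothetical example: one __local scratch array per work group.
    // All work items of a group see the same array; other groups see
    // their own separate copies.
    __kernel void group_sum(__global const unsigned *in,
                            __global unsigned *out)
    {
        __local unsigned scratch[32];          // one copy per work group

        size_t lid = get_local_id(0);
        scratch[lid] = in[get_global_id(0)];   // each work item fills one slot
        barrier(CLK_LOCAL_MEM_FENCE);          // whole group sees the writes

        if (lid == 0)                          // one work item reduces
        {
            unsigned sum = 0;
            for (size_t i = 0; i < get_local_size(0); i++)
                sum += scratch[i];
            out[get_group_id(0)] = sum;        // one partial sum per group
        }
    }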

Note, however, that threads in different compute units can only communicate through global memory, because there is no shared memory between compute units.

 

Below I will demonstrate that in Apple's OpenCL implementation, when there are two work groups (each work group is handed to one compute unit for execution), the two groups are mapped onto two different compute units. I am using a Mac Mini whose GPU is a GeForce 9400M, so there are only two SMs.
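The SM count is easy to confirm from the host side. Here is a minimal sketch of my own (not in the original post; it reuses the device_id and err variables from the host code shown later), which on the GeForce 9400M should report 2:

    // Query how many compute units (SMs, on this GPU) the device exposes.
    cl_uint num_cu;
    err = clGetDeviceInfo(device_id, CL_DEVICE_MAX_COMPUTE_UNITS,
                          sizeof(num_cu), &num_cu, NULL);
    if (err == CL_SUCCESS)
        printf("Compute units: %u\n", num_cu);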

First, the kernel code:

    __kernel void solve_sum(
                        __global volatile unsigned buffer[512],
                        __global unsigned dest[512]
                        )
    {
        // NOTE: the OpenCL spec does not allow an initializer on a __local
        // variable; Apple's compiler accepted this at the time.
        __local volatile int flag = 0;

        size_t gid = get_global_id(0);

        if(0 <= gid && gid < 32)
        {
            while(flag != 1);    // wait until the other half raises the flag
            flag = 0;

            buffer[gid] = 0x1UL;
            //write_mem_fence(CLK_GLOBAL_MEM_FENCE);
        }
        else if(32 <= gid && gid < 64)
        {
            flag = 1;            // raise the flag for the first half

            while(flag != 0);    // wait until the first half clears it
            unsigned ret = buffer[31 + 32 - gid];
            dest[gid - 32] = ret;
        }
    }

 

This kernel is launched with two work groups of 32 work items each, so the two work groups end up on different SMs. When you run this code it deadlocks; after two or three seconds the program exits on its own, so there is no need to worry. The reason is that each of the two SMs has its own copy of the shared variable flag. Suppose work items 0 through 31 go to SM0: then all threads on SM0 share that SM's flag; work items 32 through 63 go to SM1, whose flag is shared by all threads on SM1. Trying to use this shared variable (really two of them) to communicate between the two SMs obviously cannot succeed. The code declares a single flag, but at run time there are two copies of it.

 

Now the host-side code:

    #import <Foundation/Foundation.h>
    #include <OpenCL/opencl.h>

    static unsigned __attribute__((aligned(16))) buffer[512] = { 0 };    // original data set given to device
    static unsigned __attribute__((aligned(16))) dest[512] = { 0 };

    int opencl_execution(void)
    {
        int err;                            // error code returned from api calls

        size_t local;                       // local domain size for our calculation

        cl_platform_id  platform_id;        // added by zenny_chen
        cl_device_id device_id;             // compute device id
        cl_context context;                 // compute context
        cl_command_queue commands;          // compute command queue
        cl_program program;                 // compute program
        cl_kernel kernel;                   // compute kernel

        cl_mem memOrg, memDst;              // device memory used for the input and output arrays

        // Get a platform
        err = clGetPlatformIDs(1, &platform_id, NULL);
        if (err != CL_SUCCESS)
        {
            printf("Error: Failed to get a platform!\n");
            return EXIT_FAILURE;
        }

        // Connect to a compute device
        //
        err = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device_id, NULL);
        if (err != CL_SUCCESS)
        {
            printf("Error: Failed to create a device group!\n");
            return EXIT_FAILURE;
        }

        // Create a compute context
        //
        context = clCreateContext((cl_context_properties[]){(cl_context_properties)CL_CONTEXT_PLATFORM, (cl_context_properties)platform_id, 0}, 1, &device_id, NULL, NULL, &err);
        if (!context)
        {
            printf("Error: Failed to create a compute context!\n");
            return EXIT_FAILURE;
        }

        // Create a command queue
        //
        commands = clCreateCommandQueue(context, device_id, 0, &err);
        if (!commands)
        {
            printf("Error: Failed to create a command queue!\n");
            return EXIT_FAILURE;
        }

        // Fetch the kernel source
        NSString *filepath = [[NSBundle mainBundle] pathForResource:@"kernel" ofType:@"cl"];
        if(filepath == NULL)
        {
            puts("Source not found!");
            return EXIT_FAILURE;
        }

        const char *KernelSource = (const char*)[[NSString stringWithContentsOfFile:filepath encoding:NSUTF8StringEncoding error:nil] UTF8String];

        // Create the compute program from the source buffer
        //
        program = clCreateProgramWithSource(context, 1, (const char **) &KernelSource, NULL, &err);
        if (!program)
        {
            printf("Error: Failed to create compute program!\n");
            return EXIT_FAILURE;
        }

        // Build the program executable
        //
        err = clBuildProgram(program, 0, NULL, NULL, NULL, NULL);
        if (err != CL_SUCCESS)
        {
            size_t len;
            char buffer[2048];

            printf("Error: Failed to build program executable!\n");
            clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG, sizeof(buffer), buffer, &len);
            printf("%s\n", buffer);
            exit(1);
        }

        // Create the compute kernel in the program we wish to run
        //
        kernel = clCreateKernel(program, "solve_sum", &err);
        if (!kernel || err != CL_SUCCESS)
        {
            printf("Error: Failed to create compute kernel!\n");
            exit(1);
        }

        // Create the input and output arrays in device memory for our calculation
        //
        memOrg = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(int) * 512, NULL, NULL);
        memDst = clCreateBuffer(context, CL_MEM_WRITE_ONLY, sizeof(int) * 512, NULL, NULL);

        if (memOrg == NULL || memDst == NULL)
        {
            printf("Error: Failed to allocate device memory!\n");
            exit(1);
        }

        // Write our data set into the input array in device memory
        //
        err = clEnqueueWriteBuffer(commands, memOrg, CL_TRUE, 0, sizeof(int) * 512, buffer, 0, NULL, NULL);
        if (err != CL_SUCCESS)
        {
            printf("Error: Failed to write to source array!\n");
            exit(1);
        }

        // Set the arguments to our compute kernel
        //
        err = 0;
        err = clSetKernelArg(kernel, 0, sizeof(cl_mem), &memOrg);
        err |= clSetKernelArg(kernel, 1, sizeof(cl_mem), &memDst);
        if (err != CL_SUCCESS)
        {
            printf("Error: Failed to set kernel arguments! %d\n", err);
            exit(1);
        }

        // Get the maximum work group size for executing the kernel on the device
        //
        err = clGetKernelWorkGroupInfo(kernel, device_id, CL_KERNEL_WORK_GROUP_SIZE, sizeof(local), &local, NULL);
        if (err != CL_SUCCESS)
        {
            printf("Error: Failed to retrieve kernel work group info! %d\n", err);
            exit(1);
        }
        else
            printf("The number of work items in a work group is: %lu\r\n", local);

        // Execute the kernel over the entire range of our 1d input data set:
        // 64 work items in total, 32 per work group, i.e. two work groups
        //
        err = clEnqueueNDRangeKernel(commands, kernel, 1, NULL, (size_t[]){ 64 }, (size_t[]){ 32 }, 0, NULL, NULL);
        if (err)
        {
            printf("Error: Failed to execute kernel!\n");
            return EXIT_FAILURE;
        }

        // Wait for the command queue to get serviced before reading back results
        //
        clFinish(commands);

        // Read back the results from the device to verify the output
        //
        err = clEnqueueReadBuffer(commands, memDst, CL_TRUE, 0, sizeof(int) * 512, dest, 0, NULL, NULL);
        if (err != CL_SUCCESS)
        {
            printf("Error: Failed to read output array! %d\n", err);
            exit(1);
        }

        // Validate our results
        //
        printf("The result is: 0x%.8X\n", dest[0]);

        // Shutdown and cleanup
        //
        clReleaseMemObject(memOrg);
        clReleaseMemObject(memDst);
        clReleaseProgram(program);
        clReleaseKernel(kernel);
        clReleaseCommandQueue(commands);
        clReleaseContext(context);

        return 0;
    }

    int main (int argc, const char * argv[])
    {
        NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
        opencl_execution();
        [pool drain];
        return 0;
    }

 

Look at the clEnqueueNDRangeKernel call near the end of the host code:

    err = clEnqueueNDRangeKernel(commands, kernel, 1, NULL, (size_t[]){ 64 }, (size_t[]){ 32 }, 0, NULL, NULL);

 

Here we set the total number of work items to 64 and the work-group size to 32, so the index range is naturally split into two work groups. If we change the 32 to 64, there is only one work group; communicating through a shared variable within a single SM then works fine, and the program terminates normally.
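For reference, the single-work-group variant only changes the local size argument (my sketch of the modified call; it assumes the CL_KERNEL_WORK_GROUP_SIZE value printed earlier is at least 64):

    // One work group of 64 work items: the shared flag now works,
    // because all 64 work items run on the same SM.
    err = clEnqueueNDRangeKernel(commands, kernel, 1, NULL,
                                 (size_t[]){ 64 }, (size_t[]){ 64 },
                                 0, NULL, NULL);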

Alternatively, if you want to keep the original two work groups, the communication has to go through a global variable (note that even this relies on both work groups actually being resident on the device at the same time; OpenCL itself does not guarantee that work groups execute concurrently):

    __kernel void solve_sum(
                        __global volatile unsigned buffer[512],
                        __global unsigned dest[512]
                        )
    {
        __local volatile int flag = 0;   // leftover from the previous version; unused here

        size_t gid = get_global_id(0);

        if(0 <= gid && gid < 32)
        {
            while(buffer[256] != 1);   // spin on the flag in global memory
            buffer[256] = 0;

            buffer[gid] = 0x1UL;
            //write_mem_fence(CLK_GLOBAL_MEM_FENCE);
        }
        else if(32 <= gid && gid < 64)
        {
            buffer[256] = 1;           // raise the flag for the other group

            while(buffer[256] != 0);   // wait until the other group clears it
            unsigned ret = buffer[31 + 32 - gid];
            dest[gid - 32] = ret;
        }
    }

 

One more point deserves attention here. Every variable used for communication must be declared volatile; otherwise the OpenCL kernel compiler may optimize the second and later accesses to the global variable into reads of a register copy, so a change made to that variable from outside the current thread would never become visible inside it.
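To make this concrete, here is an illustrative fragment of my own (not from the original post) contrasting a spin loop with and without volatile:

    // Without volatile, the compiler may load buffer[256] into a register
    // once and spin on the register copy, so a store made by another
    // work group is never observed.
    __kernel void spin_broken(__global unsigned *buffer)
    {
        while (buffer[256] != 1);   // may compile to: load once, loop forever
        buffer[0] = 1;              // may never be reached
    }

    // With volatile, every iteration performs a real global-memory read,
    // so the store from the other work group eventually becomes visible.
    __kernel void spin_ok(__global volatile unsigned *buffer)
    {
        while (buffer[256] != 1);
        buffer[0] = 1;
    }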

posted @ 2012-03-26 17:21 董雨