In the first post of this series, we had a basic, formal introduction to OpenCL: we discussed the need for parallelism in computation and built an intuition for OpenCL with a simple analogy. In the previous post we created an OpenCL runtime environment using Python. I am pretty sure these gave you a good idea of what OpenCL is and how it works! With the arena set, we shall now try to understand OpenCL in more detail.
Well, I am a huge fan of parallelism (I believe in the theory of parallel universes as well!), so let us commence our advanced discussion of OpenCL with its wonderful ability to exploit data-level parallelism.
Data Parallelism in OpenCL
Kernel programs are executed on the device with multiple threads, and one should understand how OpenCL manages this. As introduced earlier, the total number of work items that execute in parallel is represented by the N-D domain. In other words, kernels are executed across a global domain of work items, and these work items execute in parallel, unlike conventional sequential execution.
Choosing Dimensions
As you can see in the figure above (courtesy: AMD), the N-D range is represented in 3D. The user may specify the dimensions of the N-D computational domain, where N is 1, 2, or 3: 1 would represent a vector, 2 an image, and 3 a volume. By default (as in our previous example), it is 1-D. The choice is left to the designer, but it should be made for better mapping and speed.
```c
// 10 million elements in a vector: 1-D
size_t global_dim[3] = {10000000, 1, 1};
// An image: 2-D
size_t global_dim[3] = {1024, 1024, 1};
// A volume: 3-D
size_t global_dim[3] = {512, 512, 512};
```
For modularity, the user can divide the N-D range into separate work groups (with the same number of dimensions as the N-D range). If you are familiar with CUDA, these are analogous to grids and blocks. Say you want to process an image of size 1024*1024; you can have work groups that process blocks of size 128*128! Some noteworthy points about work groups:
- They have their own local dimensions.
- They have their own local memory.
- Work items can synchronize within their group using barriers, but work items in different groups can never synchronize.
OpenCL Memory Model
OpenCL being very data intensive, memory management is of utmost importance. The programmer must have a very clear picture of the memory model; otherwise the program will crash horribly!
(Courtesy: AMD)
The GPU (OpenCL device) never requests data from the CPU (host); it only responds to data requests from the host. The host sends buffers of data for computation to the device's main memory, called global or constant memory. This memory is accessible to every work item in the context, but please keep in mind that access to it is NOT synchronized!
Each work group has its own local memory, accessible only to the work items in that group. With explicit coding by the designer, the work items of a group can be kept in synch around it. Also, every work item has its own private memory, specific to that work item, for the execution of the kernel.
OpenCL Overview
With these fundamentals in place, let us have an overview of what really happens when we run an OpenCL program. Observe the figure below.
OpenCL framework (courtesy: AMD)
Once you write your code, it is compiled and built on the host. If your code is bug-free, the build succeeds and a kernel program is obtained. Note that kernel programs are specific to a context, and the context is controlled by the host. The host then creates memory objects, such as buffers or images, to manage the inputs and outputs.
After these steps, the OpenCL magic starts! A command queue is set up to queue instructions to the OpenCL device. For those familiar with processor architecture, this is analogous to scheduling. In-order execution is like static scheduling, where the instructions are simply executed in order. Out-of-order execution is like dynamic scheduling, where instructions are executed as their dependencies are satisfied, improving speed by a great extent. As one can easily see, this comes with its own trade-off in hardware requirements.
AMD boasts that its OpenCL implementation supports multicore AMD CPUs as OpenCL devices as well. This piece of code depicts command queue creation.
```c
cl_command_queue gpu_q, cpu_q;
// 'cntxt' is the created context; 'device_gpu' is the device id of the GPU
gpu_q = clCreateCommandQueue(cntxt, device_gpu, 0, &err);
// 'device_cpu' is the device id of the CPU
cpu_q = clCreateCommandQueue(cntxt, device_cpu, 0, &err);
```
The above framework gives a very clear overview of OpenCL execution. Now that you have a good idea of OpenCL, go ahead and start your projects with a single motto: "Think Parallel!". I will be back with the next post, on kernel execution and some more OpenCL programs. If you have any doubts, the comment box is right below!
CHEERS!