_Flame1_

OpenCL is very slow compared to the CPU.


Hello. I've created my first program with OpenCL.

__kernel void vector_add_gpu(__global const float* a, __global const float* b, __global float* c, int iNumElements)
{
    // get index into global data array
    int iGID = get_global_id(0);
    // bound check (equivalent to the limit on a 'for' loop for standard/serial C code)
    if (iGID < iNumElements)
    {
        // add the vector elements
        c[iGID] = a[iGID] + b[iGID];
    }
}
I have a quite big buffer of numbers (about 240 MB). OpenCL takes about 5 times longer than a plain CPU loop. Is that OK, or is something wrong? If I use a more complicated function (c[iGID] = a[iGID] + sqrt(b[iGID] * b[iGID]);) the difference is much bigger (about 150 times). :) Thank you.

 

P.S. Sorry, my previous case was wrong; I forgot to put the OpenCL file in the folder. :)

Two possible reasons:
  • Your OpenCL kernel actually runs on the CPU (you didn't say which implementation you are using)
  • Your OpenCL kernel runs on a GPU, but the runtime is absolutely dominated by PCIe transfer latency, not execution speed.
Note that adding two values together on a GPU is a ridiculously small amount of work compared to the cost of moving them over PCIe (or even reading them from GPU memory). It is therefore not surprising that any measurements you make turn out "kind of strange".

Also, launching a kernel and synchronizing for the result isn't completely "free" either.

Try again with a much more complicated kernel, and you'll likely see a much bigger (50-100 times) difference. Edited by samoth
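
To see how the time actually splits between the copies and the kernel itself, you can create the command queue with profiling enabled and read the event timestamps afterwards. A rough, untested sketch; the function and variable names are placeholders for whatever your existing setup already creates:

#include <stdio.h>
#include <CL/cl.h>

/* Minimal sketch: time one kernel launch on a queue created with profiling enabled.
   'context', 'device' and 'kernel' are assumed to come from the existing program. */
static void time_kernel(cl_context context, cl_device_id device, cl_kernel kernel)
{
    cl_int err;
    cl_command_queue queue = clCreateCommandQueue(context, device, CL_QUEUE_PROFILING_ENABLE, &err);

    size_t global_ws = 0x4000000;
    cl_event ev;
    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_ws, NULL, 0, NULL, &ev);
    clWaitForEvents(1, &ev);

    cl_ulong start = 0, end = 0;
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START, sizeof(start), &start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END, sizeof(end), &end, NULL);
    printf("kernel time: %.3f ms\n", (end - start) * 1e-6);  /* timestamps are in nanoseconds */

    clReleaseEvent(ev);
    clReleaseCommandQueue(queue);
}

Time the clEnqueueWriteBuffer/clEnqueueReadBuffer calls the same way (they also take an event argument) and compare the numbers; for a kernel this small the copies will dominate.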

Sorry guys, but OpenCL is extremely slow compared to the CPU in my case. It's not possible to explain that with memory bandwidth alone. The video card is a GF 6800 with PCI Express 3.0. And it doesn't matter how much data I calculate, the CPU is faster anyway. I don't think the kernel runs on the CPU, since I've chosen CL_DEVICE_TYPE_GPU. Anyway, CPU emulation can't be 5 times slower than the CPU itself. :) As I said before, a more complicated function only makes the difference bigger.

You can see the code here: http://pastebin.com/M3kjrLtM Edited by _Flame_

I'm not convinced you are using the GPU how you think you are.

While you are telling OpenCL to launch 0x4000000 threads, you are also telling it that each work group consists of a single thread. That wastes a vast amount of GPU resources: it will launch 0x4000000 separate warps or wave fronts, but only use one thread (out of 'preferred_work_group_size_multiple') in each of them.

If you run 'clinfo' in a cmd window it will tell you the preferred work group size multiple; set the local_ws value to that and you should end up using all the GPU threads instead of just one in each warp/wave front launched.
(For example, on my card this value is 64, so a work group of less than 64 is going to waste resources.) Edited by phantom
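
If you'd rather get that number from code than from clinfo, the runtime can report it per kernel (OpenCL 1.1 and later). A rough sketch, where 'kernel', 'device' and 'queue' stand for the objects the pastebin code already creates:

/* Query the preferred work-group size multiple for this kernel and use it as the local size. */
size_t preferred_multiple = 0;
clGetKernelWorkGroupInfo(kernel, device,
                         CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                         sizeof(preferred_multiple), &preferred_multiple, NULL);

size_t local_ws  = preferred_multiple;   /* e.g. 32 on NVIDIA, 64 on AMD */
size_t global_ws = 0x4000000;            /* total work items; already a multiple of local_ws here */
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_ws, &local_ws, 0, NULL, NULL);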


It was a great card in its time, but as of today it's almost 9 years old and it never supported any GPGPU stuff in hardware to begin with.

 

Maybe he meant a 680? That's the current top of the line.

 

However, it sounds like the OP probably isn't using the thing right -- you can't just spawn a billion threads and expect it to run faster.

 

OP, you first have to have a reasonable understanding that your problem is suitable for OpenCL or other similar libraries. In most cases, you also can't just throw a best-in-class serial algorithm at a GPU and expect it to speed up. To get real gains you very often have to tailor-make a new algorithm that's suitable for parallel execution, and which can take advantage of the other resources that a modern GPU shares across groups of threads, but not equally with all threads. GPU is an entirely different world of performance expectations and trade-offs.

 

It's possible that CPU algorithms can be faster for some problems, but the fact that the performance disparity gets worse and worse as the workload grows tends to indicate that there's something fundamentally wrong with your approach.


OP, you first have to have a reasonable understanding that your problem is suitable for OpenCL or other similar libraries. In most cases, you also can't just throw a best-in-class serial algorithm at a GPU and expect it to speed up. To get real gains you very often have to tailor-make a new algorithm that's suitable for parallel execution, and which can take advantage of the other resources that a modern GPU shares across groups of threads, but not equally with all threads. GPU is an entirely different world of performance expectations and trade-offs.

Excellent point. OpenCL is geared for repeated, heavy, parallel calculations. I am almost certain that you are bus-bound. Sure, you can add two numbers in parallel on the GPU, but you have to get each one there from the CPU first (kinda). So I would never expect any performance improvement from the trivial addition kernel you have, no matter how large your data is.

Try a more complicated calculation, e.g. for each x, calculate x! or x!!. You'll get thousands or billions of multiply operations, depending on the type (integers overflow, so huge factorials won't run indefinitely). This will outweigh your data transfer cost by a lot, and you'll definitely see performance gains.
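
For example, something along these lines (purely illustrative, not taken from the OP's pastebin) gives each work item a whole loop of multiplies instead of a single add:

// Illustrative kernel: computes x! per element, so each work item does O(x) multiplies.
__kernel void factorial_gpu(__global const int* x, __global float* out, int iNumElements)
{
    int iGID = get_global_id(0);
    if (iGID < iNumElements)
    {
        float f = 1.0f;
        for (int k = 2; k <= x[iGID]; ++k)
            f *= (float)k;      // overflows to +inf for large x, which doesn't matter for a benchmark
        out[iGID] = f;
    }
}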


Maybe he meant 680?

Yes. :)

While you are telling OpenCL to launch 0x4000000 threads, you are also telling it that each work group consists of a single thread. That wastes a vast amount of GPU resources: it will launch 0x4000000 separate warps or wave fronts, but only use one thread (out of 'preferred_work_group_size_multiple') in each of them.

It's a real problem. I don't understand what local and global work groups are or how I should choose the values for the best performance. I've tried setting the local group to different values, but I got an error. Edited by _Flame_


It's a real problem. I don't understand what local and global work groups are or how I should choose the values for the best performance. I've tried to set the local group to a value different from zero, but I got an error.

You should query your device for CL_DEVICE_MAX_WORK_GROUP_SIZE using clGetDeviceInfo, and use the returned value as the local work group size. Also make sure your global work group size is a multiple of the local group size (which will probably end up being some power of two between 64 and 1024). For the best performance you should let the OpenCL compiler work out the optimal work group size by analyzing the kernel, but this should be plenty good enough for now.
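
In host code that could look roughly like this (untested; the names are placeholders for whatever your program already has, and the 'if (iGID < iNumElements)' test in the kernel throws away the padded work items):

/* Pick the local size from the device limit and pad the global size up to a multiple of it. */
size_t max_wg = 0;
clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE, sizeof(max_wg), &max_wg, NULL);

size_t local_ws  = max_wg;                  /* typically 256, 512 or 1024 */
size_t num_items = 0x4000000;               /* one work item per array element */
size_t global_ws = ((num_items + local_ws - 1) / local_ws) * local_ws;   /* round up */

clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_ws, &local_ws, 0, NULL, NULL);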

 

But still, as said above, just adding two numbers in a kernel isn't enough work for the GPU to show any performance advantage. You'll spend most of your time transferring buffers across the PCI-e bus, and your GPU will even be constrained by its own memory bandwidth (which is saying something, as your GTX 680 has a ludicrous global memory bandwidth of about 200 GB/s).


Of course the problem is going from memory down the PCI-e bus, processing, and then back up the PCI-e bus into memory again.

 

One thing that I believe you're overlooking is that modern processors can also process 4 float operations at once, so that probably accounts for quite a bit of it as well. The CPU will be limited by memory in this scenario too.
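
For reference, the element-wise add can be written with SSE intrinsics on the CPU, four floats per instruction (a standalone sketch of mine, not the OP's code; n is assumed to be a multiple of 4 for brevity):

#include <xmmintrin.h>  /* SSE */

/* Adds four floats per instruction. */
static void vector_add_sse(const float* a, const float* b, float* c, int n)
{
    for (int i = 0; i < n; i += 4)
    {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
    }
}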

 

Between those two issues you could do it on the CPU before the data even finishes transferring down the PCI-e bus.

I agree. After I did this:

clGetDeviceInfo(device[0], CL_DEVICE_MAX_WORK_GROUP_SIZE, sizeof(size_t), &work_group, &ret_work_group);
local_ws = work_group;
global_ws = var_size / work_group;

the CPU was still faster.
But after I changed the function to:
c[iGID] = a[iGID] + sqrt(b[iGID] * b[iGID]);
the GPU was about 3.5 times faster. So it works. Thanks for your help, guys. But I'm still wondering about global and local groups. If I have it right, the local group is the number of GPU processors that run the kernel function simultaneously, and the global size is the number of such passes. Is that right? How should I calculate the global size properly? Edited by _Flame1_

