Compute Shader: ThreadGroupSize and Performance



#1 me_12   Members   -  Reputation: 196

Posted 21 December 2013 - 05:44 PM

Hi Guys,

 

I have a few questions regarding ThreadGroupSize and performance.

 

1. No matter how many threads are in a thread group, the group will always be executed by a single SIMD / SMX (split into wavefronts / warps)? So let's say I only need 1024 threads to process something and I launch them as a single group: have I wasted performance, since I could have split the work into smaller groups and had multiple SIMDs / SMXs working on it?

 

2. In case the above assumption is correct: if I dispatch only one thread group, are the other SIMDs / SMXs blocked? Or do they work on other stuff like pixel processing, vector operations, etc.? In other words: do all of them have to work on the same task, or will the hardware mix different workloads to keep them occupied?

 

3. Someone wrote this: 

 

Maintaining performance and correctness across devices becomes harder:
 
- Code hardwired to 32 threads per warp, when run on AMD hardware (64 threads), will waste execution resources
- Code hardwired to 64 threads per warp, when run on Nvidia hardware, can lead to races and affects the local memory budget

 

 

The first statement makes perfect sense. But the second... well, I don't get it. I assume "local memory" means the main memory on the graphics card, not the thread group shared memory? And could anyone explain what exactly happens to make these races occur?

 

Thanks already!

 




#2 MJP   Moderators   -  Reputation: 11786

Posted 21 December 2013 - 07:06 PM

1. While this is generally true for Nvidia and AMD hardware, I'm pretty sure it's an implementation detail and not something mandated by the API. The API just requires that threads within a thread group can be synchronized, and that they can share thread group shared memory. Consequently the implementation details may be very different on Intel hardware, or in software implementations (like WARP).

However if we're talking about AMD and Nvidia hardware, then it is true that you can achieve better performance by using smaller thread groups since it will allow the shader to execute on more hardware units. But this may not always hold true on all AMD/Nvidia hardware, since the available resources will vary depending on the exact GPU that you're running on. Typically you'll want to have at least 2 warps/wavefronts per thread group, since this will give the hardware another set of threads to switch to in order to hide latency from memory access. You'll also want to always make sure that your thread group size is a multiple of the warp or wavefront size, otherwise you'll have threads that execute but are masked out.
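The point about group sizes that aren't a multiple of the warp/wavefront size can be illustrated with some quick arithmetic (a Python sketch of the bookkeeping, not GPU code; 32 and 64 are the commonly cited Nvidia warp and AMD wavefront widths):

```python
import math

def masked_out_threads(group_size: int, warp_size: int) -> int:
    """Lanes that occupy execution slots but run masked out because the
    thread group size is not a multiple of the warp/wavefront size."""
    warps_needed = math.ceil(group_size / warp_size)
    return warps_needed * warp_size - group_size

# A 48-thread group still occupies two full 32-wide warps,
# so 16 lanes execute masked out; multiples of the warp size waste none.
print(masked_out_threads(48, 32))  # 16
print(masked_out_threads(64, 32))  # 0
print(masked_out_threads(96, 64))  # 32
```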

2. This is another implementation detail, but AMD and Nvidia hardware are capable of having multiple Dispatch/Draw calls in flight simultaneously. However if you have one Dispatch or Draw that needs to read the results of a previous Dispatch or Draw, the hardware will have to insert a sync point so that it can wait for the first Draw/Dispatch to completely finish before allowing the second Draw/Dispatch to execute.

 

3. I couldn't say for sure without more context, but it sounds like those statements are about writing code that makes assumptions about the number of threads in a warp or wavefront in order to optimize it. You'll see this happen pretty often in CUDA code, where the author assumes that the 32 neighboring threads in a warp will execute each instruction atomically with respect to each other, so they can avoid the use of expensive atomics and/or sync instructions. If you stick to using atomics and syncs when necessary, then you can safely use a multiple of 64 for your thread group size and you won't waste any resources on either Nvidia or AMD hardware.
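The "multiple of 64" rule at the end amounts to rounding the desired group size up to a multiple of both vendors' sizes (a hypothetical Python helper, not part of any API; lcm(32, 64) = 64):

```python
import math

def portable_group_size(desired: int, warp_a: int = 32, warp_b: int = 64) -> int:
    """Smallest group size >= desired that is a multiple of both the
    Nvidia warp size (32) and the AMD wavefront size (64)."""
    step = math.lcm(warp_a, warp_b)  # 64 for the default sizes
    return -(-desired // step) * step  # ceiling division

print(portable_group_size(50))   # 64
print(portable_group_size(100))  # 128
```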


Edited by MJP, 21 December 2013 - 07:07 PM.


#3 me_12   Members   -  Reputation: 196

Posted 21 December 2013 - 10:21 PM

Thank you for your answer!

 

3. I couldn't say for sure without more context...

 

This is where I got that quote: http://cvg.ethz.ch/teaching/2011spring/gpgpu/GPU-Optimization.pdf (page 15). There isn't much more info there, though.

 

With my current implementation of my particle system, I have noticed that performance drops if I increase the thread group size from 64 to 128 on my Nvidia card (with 1 million particles active -> 1 million threads). And I am not using shared memory. All I do is Consume() a particle from one buffer, process it, and Append() it to the other buffer. These should be atomic operations. So there must be another reason why larger thread group sizes might be bad...
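For the numbers in this post, the bookkeeping difference between the two configurations is just how many groups the dispatch produces and can spread across the multiprocessors (a sketch of the arithmetic; the 1 million particle count is from the post, the rest is illustrative):

```python
import math

def dispatch_group_count(total_threads: int, group_size: int) -> int:
    """Number of thread groups needed to cover total_threads."""
    return math.ceil(total_threads / group_size)

particles = 1_000_000
print(dispatch_group_count(particles, 64))   # 15625
print(dispatch_group_count(particles, 128))  # 7813
```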

 

Also, I would like to write a few words for my bachelor thesis about why it is critical to use the correct number of threads per thread group. For that I need some reason why it might be bad to have too many threads per thread group. Any theoretical reason would help. (I could not find anything on the web so far.)

 

While we are at it... there is this GTC presentation: http://www.nvidia.com/content/GTC/documents/1015_GTC09.pdf. Page 44 says something about thread group size heuristics.

 

 

# of thread groups > # of multiprocessors

 

I guess this is only true if you actually have enough work to do. So if you only need one thread group of size 512, you might want to lower the group size to 64 or even 32 and dispatch more groups. But it is not advisable to launch extra thread groups when you only need 32 threads and your group size is already 32, just to keep the other multiprocessors occupied. Am I correct? (Just asking because you have to be super precise when writing papers...)
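The heuristic from the slides can be written as a simple check (illustrative Python; the 8-multiprocessor GPU is a made-up example value, not from the slides):

```python
def enough_groups(total_threads: int, group_size: int, num_multiprocessors: int) -> bool:
    """The '# of thread groups > # of multiprocessors' heuristic from the
    GTC slides; only meaningful when there is enough work to split up."""
    groups = -(-total_threads // group_size)  # ceiling division
    return groups > num_multiprocessors

# 512 threads as one group of 512 leaves a hypothetical 8-SM GPU mostly idle;
# groups of 32 give 16 groups, more than the 8 multiprocessors.
print(enough_groups(512, 512, 8))  # False
print(enough_groups(512, 32, 8))   # True
```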

 

 

Amount of allocated shared memory per thread group should be at most half the total shared memory per multiprocessor

 

Why is that? So that the multiprocessor can already load data for the next thread group while it is still processing the current one?

 

 

Occupancy is:

- a metric to determine how effectively the hardware is kept busy
- the ratio of the number of active warps per multiprocessor to the maximum number of possible active warps
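The definition quoted above is just a ratio (a one-line Python sketch; 48 active warps out of 64 possible is an illustrative value, not from the slides):

```python
def occupancy(active_warps_per_sm: int, max_warps_per_sm: int) -> float:
    """Occupancy as quoted above: active warps per multiprocessor
    divided by the maximum number of possible active warps."""
    return active_warps_per_sm / max_warps_per_sm

print(occupancy(48, 64))  # 0.75
```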

 

This means having more warps queued and ready to execute while the processor is still working on other warps, so that in case of latency it can switch out a stalled warp and work on a queued warp instead?


Edited by me_12, 21 December 2013 - 10:28 PM.




