
Understanding and implementing "Immediate Stream Compaction"


Hello,

 

I'm having quite a few problems implementing immediate stream compaction in a compute shader, and I'm sure you can help me out.

 

First of all I would like to explain what I'm aiming for. I'm sorry if it is a long read.
 

Target:
I need immediate stream compaction to write only useful data to a buffer, because I use a linear bounding volume hierarchy (LBVH) which writes a null where no further processing is required. If I write directly into a compacted output, then I save memory and can spend the savings on a deeper hierarchy.
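As background for the whole approach, stream compaction boils down to scanning 0/1 validity labels to get scatter addresses, then scattering only the valid elements. A minimal CPU sketch in Python (a hypothetical reference model, not the shader code):

```python
def compact(nodes, is_valid):
    """Stream compaction: keep only elements flagged as valid.

    Scatter addresses come from an exclusive prefix sum (scan)
    over the 0/1 validity labels, the same trick used on the GPU.
    """
    labels = [1 if is_valid(n) else 0 for n in nodes]

    # Exclusive scan: addresses[i] = sum(labels[0:i]).
    addresses, running = [], 0
    for lab in labels:
        addresses.append(running)
        running += lab

    # Scatter every valid element to its compacted address.
    out = [None] * running
    for n, lab, addr in zip(nodes, labels, addresses):
        if lab:
            out[addr] = n
    return out
```

For example, `compact([3, None, 7, None, 1], lambda n: n is not None)` returns `[3, 7, 1]`, with the `None` entries (the LBVH nulls in this analogy) compacted away.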

 

 Steps I've already done:

 

- I label every thread that has useful data with a 1 and every thread that does not require further processing with a 0.

 

 

[Image: StreamCompaction_Labeling.jpg]

 

- I write all labels to a temporary RWStructuredBuffer and perform a scan in shared memory.

 

 

[Image: StreamCompaction_SharedMemoryScann.jpg]
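The two steps above — labeling, then scanning each block independently in shared memory — can be modelled on the CPU like this (a sketch, where `block_size` stands in for the thread-group size):

```python
def scan_blocks(labels, block_size):
    """Inclusive prefix sum of each block independently,
    mimicking one shared-memory (TGSM) scan per thread group.

    Blocks do not see each other: each block's scan restarts at 0,
    which is exactly why a later cross-block fix-up pass is needed.
    """
    result = []
    for start in range(0, len(labels), block_size):
        running = 0
        for lab in labels[start:start + block_size]:
            running += lab
            result.append(running)
    return result
```

With `block_size = 4`, `scan_blocks([1, 0, 1, 1, 0, 1, 1, 0], 4)` gives `[1, 1, 2, 3, 0, 1, 2, 2]` — note the second block restarts from 0.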

 

Steps I can't implement:

 

- I have scanned every block in TGSM, but now I need to, for example, take the total count of block 0 and add it to the whole of block 1,

so that each TGSM block has the total count of its left neighbour added to it.

 

[Image: StreamCompaction_Paddingby%20PreviousCou]
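The fix-up step described here — offsetting each block by the running total of the blocks to its left — can be sketched on the CPU as follows (assuming each block's total has already been collected into a `block_sums` array):

```python
def add_block_offsets(scanned, block_sums, block_size):
    """Add to every element of block i the total count of all blocks
    to its left, i.e. an exclusive scan of block_sums.

    scanned:    per-block inclusive scan results, concatenated.
    block_sums: total count of each block (its last scan value).
    """
    out, offset = [], 0
    for i, total in enumerate(block_sums):
        start = i * block_size
        out.extend(v + offset for v in scanned[start:start + block_size])
        offset += total
    return out
```

Continuing the earlier example, `add_block_offsets([1, 1, 2, 3, 0, 1, 2, 2], [3, 2], 4)` yields `[1, 1, 2, 3, 3, 4, 5, 5]` — the second block is shifted by the first block's total of 3, producing one global scan.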

 

 

- My problem is that I cannot access the total count of block 0 from block 1, because it lives in TGSM, which is not readable across thread groups.
 

 

I tried writing it to a temporary RWStructuredBuffer and accessing it from there; I also tried DeviceMemoryBarrier(), AllMemoryBarrier(), and the globallycoherent storage class to make the TGSM result written to the RWStructuredBuffer visible to every thread.


My code so far looks like this:
 

const int index = threadId;
uint Label = 0;

if (threadId == 0)
{
	_TestBuffer[0] = 0;
}

if (GIndex != 0)
{
	Label = NodeBuffer[index].Label == 0; // label every useful thread with 1
}

_TestBuffer[index] = Label;
shared_StreamCompaction[GIndex] = 0; // write with GIndex
GroupMemoryBarrierWithGroupSync();

// Naive inclusive scan within the block: each thread sums every
// label from the start of its thread group up to its own index.
for (uint i = 0; i <= GIndex; i++) // 0..block size (1024)
{
	shared_StreamCompaction[GIndex] += _TestBuffer[index - i];
}
GroupMemoryBarrierWithGroupSync();

// Write shared to global; I cannot access any blocks except the one
// shared within this thread group.
_TestBuffer[index] = shared_StreamCompaction[GIndex];

// Somehow access neighbouring blocks??

 

 

 

Thank you for your time! I hope you can help me solve this problem so I can build deeper hierarchies. :) Thanks!

Paper references:

- Improving SIMD Efficiency for Parallel Monte Carlo Light Transport on the GPU by the awesome Dietger van Antwerpen

- GPU Gems 3, Chapter 39: Parallel Prefix Sum (Scan) with CUDA

Edited by Gfx_Christopher_Schiefer


Comparing the last diagram and your code, I don't see the "auxiliary array" of "block sums", nor the two separate multithreaded processing steps that use it: one that writes it from the collected per-block data (no locking, every thread writes only to one block's specific location of the array) and one that reads it to update the blocks (no locking, it's read-only).
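The multi-pass structure described here — write the block sums with one writer per slot, scan them, then read them back — might look like this as a CPU sketch of the three GPU dispatches (a hypothetical model; names are illustrative, not from the thread's shader code):

```python
def global_scan(labels, block_size):
    """Global inclusive prefix sum via three passes, mirroring
    three compute dispatches.

    Pass 1: each block scans itself; its last thread writes the
            block total into one dedicated slot of block_sums
            (no locking: exactly one writer per slot).
    Pass 2: exclusive scan of the block_sums array itself.
    Pass 3: each block adds the summed total of all blocks to its
            left (read-only access, again no locking).
    """
    n_blocks = (len(labels) + block_size - 1) // block_size
    scanned = [0] * len(labels)
    block_sums = [0] * n_blocks

    # Pass 1: per-block inclusive scan plus block total.
    for b in range(n_blocks):
        running = 0
        for i in range(b * block_size, min((b + 1) * block_size, len(labels))):
            running += labels[i]
            scanned[i] = running
        block_sums[b] = running

    # Pass 2: exclusive scan of the block sums.
    offsets, running = [], 0
    for s in block_sums:
        offsets.append(running)
        running += s

    # Pass 3: offset every block by its left neighbours' total.
    for b in range(n_blocks):
        for i in range(b * block_size, min((b + 1) * block_size, len(labels))):
            scanned[i] += offsets[b]
    return scanned
```

For instance, `global_scan([1, 0, 1, 1, 0, 1, 1, 0], 4)` returns `[1, 1, 2, 3, 3, 4, 5, 5]`, the same result a single scan over the whole array would give.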


I had to read your short sentence multiple times, but I've got it now. And yes, you couldn't see it because this is the part I could not do.

 

 

 


(no locking, every thread writes only to one block's specific location of the array)

This helps a lot. I saw the array as something completely different initially.

I'll shout out when I've solved it. Thanks!

 

 

Edit: Thanks again, it works.



Here's the little test code that works. I can now read from every thread group and store the result somewhere.

 

 

Scanning the block sums and adding block sum i to block i+1 is now pretty straightforward. :)

It's pretty nasty manual coding to test it, but the point was to see if I can read data from other thread groups that is stored in TGSM.

Give this guy a cookie, except for the very short sentence. You gave me a bit of a headache. :D

This is what you meant, right, with "(no locking, every thread writes only to one block's specific location of the array)"?



if (index == 1)
{
	_TestBuffer[1] = shared_StreamCompaction[CompactBlockSize - 1]; // thread group total
}

if (index == 1025)
{
	_TestBuffer[2] = shared_StreamCompaction[CompactBlockSize - 1]; // thread group total
}

if (index == 2049)
{
	_TestBuffer[3] = shared_StreamCompaction[CompactBlockSize - 1]; // thread group total
}
