theagentd

SOLVED: Compute shader atomic shared variable problem


SOLUTION: Ugh. Don't do 

    minDepth = atomicMin(minDepth, depth);

Just do

    atomicMin(minDepth, depth);

The assignment breaks the atomicity. 
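For reference, here is a minimal sketch of the working per-tile pattern (names and layout are illustrative). Note that GLSL does not allow initializers on shared variables, so the bounds are set up by a single invocation instead:

```glsl
// Per-tile depth bounds shared across the work group.
shared uint minDepth;
shared uint maxDepth;

layout(local_size_x = 16, local_size_y = 16) in;

void main() {
    // Shared variables can't have initializers; let one invocation set them up.
    if (gl_LocalInvocationIndex == 0u) {
        minDepth = 0xFFFFFFFFu;
        maxDepth = 0u;
    }
    barrier();

    uint depth = ...; // this invocation's depth, packed into a uint

    // Discard the return values; writing them back would race.
    atomicMin(minDepth, depth);
    atomicMax(maxDepth, depth);
    barrier();

    // Every invocation now sees the tile's true min/max in minDepth/maxDepth.
}
```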

I was doing some experiments on tiled deferred shading and ended up with a very strange problem. I tried to compute the minimum and maximum depth of each tile using two shared uints.

shared uint minDepth = 0xFFFFFFFF;
shared uint maxDepth = 0;

The idea was to compute the maximum and minimum depth using atomicMin() and atomicMax() like this:

uint depth = ...;
minDepth = atomicMin(minDepth, depth); //DON'T DO THIS, SEE ABOVE
maxDepth = atomicMax(maxDepth, depth); //DON'T DO THIS, SEE ABOVE
barrier();

However, this is not working correctly. I seem to be getting synchronization problems: the result flickers a lot even though the camera isn't moving, so the depth buffer is identical from frame to frame. barrier() has no effect on the result at all.

 

EnYUpj0.png

 

I fail to see how this could possibly happen. The atomic min and max functions do not seem to work correctly, with some pixels being randomly ignored. For testing, I decided to output the following:

if(minDepth > d){
    result = vec3(5, 0, 0);
}else{
    result = vec3(0);
}

In other words, if the depth of the current pixel is less than the calculated minimum depth of the tile, make it bright red. Here's the horrifying flickering result:

cFMxZCB.png

 

 

What am I doing wrong? How can this possibly happen when I'm using barrier()? Why does neither barrier() nor memoryBarrierShared() work as it should?

 

SOLVED, see above.

Check the reference on atomicMin(): http://www.opengl.org/sdk/docs/man/

 

It states: performs an atomic comparison of data to the contents of mem, writes the minimum value into mem and returns the original contents of mem from before the comparison occurred.

 

The second part is the interesting one in your case. You are passing minDepth as the parameter 'mem'. So AFTER the function returns, minDepth already holds the minimum. But immediately after that, you assign minDepth the value it had BEFORE the function call. I guess this is what breaks the synchronized behaviour.
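To make that concrete, here is a hypothetical interleaving of two invocations showing how the write-back destroys the result:

```glsl
// Suppose minDepth starts at 100; invocation A has depth 50, B has depth 70.
//
//   A: atomicMin(minDepth, 50);   // minDepth becomes 50, returns old value 100
//   B: atomicMin(minDepth, 70);   // minDepth stays 50,   returns old value 50
//   B: minDepth = 50;             // B's write-back happens to be harmless
//   A: minDepth = 100;            // A's write-back clobbers the minimum!
//
// minDepth ends up as 100 even though a depth of 50 was recorded.
```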

Why are you assigning the value anyway?

 

My guess is: leave the assignment out and it will work just fine. (At least my implementation does, and it looks pretty much the same as the parts you showed here.)

 

Hope I could help you there.

 

Unfortunately I'm terrible at reading posts... at least I did some explaining of the error :)

Edited by Wh0p




 

I think the problem is that atomicMin() returns the original value of the variable.  If depth is less than minDepth, it sets minDepth to depth, but then returns the original value of minDepth.  The problem isn't the atomic operation itself but the assignment, which undoes the result of the minimum comparison.


Yes, I missed the part in the specification that says the variable you pass into the atomic functions (in my case minDepth and maxDepth) is inout, not just in. It all made sense once I got that. >_>
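In other words, the return value itself is fine to use, as long as it is captured in a separate variable rather than written back into the shared one. A small illustrative sketch:

```glsl
// minDepth is the inout 'mem' parameter: atomicMin updates it in place.
// Capturing the return value in a SEPARATE variable is safe:
uint oldMin = atomicMin(minDepth, depth); // oldMin = value before the operation

// Writing it back, i.e. minDepth = atomicMin(minDepth, depth),
// would overwrite the shared minimum and race with other invocations.
```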

Edited by theagentd

