
SOLVED: Compute shader atomic shared variable problem


#1 theagentd   Members   -  Reputation: 539

Posted 26 June 2014 - 09:05 AM

SOLUTION: Ugh. Don't do 

    minDepth = atomicMin(minDepth, depth);

Just do

    atomicMin(minDepth, depth);

The assignment breaks the atomicity. 
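
For completeness, here is a minimal corrected sketch of the full pattern. It assumes a 16x16 tile per workgroup and a hypothetical depth texture bound as depthTex; floatBitsToUint preserves the ordering of non-negative floats, so it works for a [0, 1] depth value:

    #version 430
    layout(local_size_x = 16, local_size_y = 16) in;

    layout(binding = 0) uniform sampler2D depthTex; // hypothetical depth input

    shared uint minDepth;
    shared uint maxDepth;

    void main(){
        // shared variables may not carry initializers, so let one invocation
        // set them up, then synchronize before any atomics run
        if(gl_LocalInvocationIndex == 0u){
            minDepth = 0xFFFFFFFFu;
            maxDepth = 0u;
        }
        barrier();

        float d = texelFetch(depthTex, ivec2(gl_GlobalInvocationID.xy), 0).r;
        uint depth = floatBitsToUint(d); // order-preserving for non-negative floats
        atomicMin(minDepth, depth);      // return values deliberately ignored
        atomicMax(maxDepth, depth);
        barrier();                       // all atomics complete before anyone reads the bounds

        // minDepth and maxDepth now hold the tile's depth bounds
    }
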
I was doing some experiments on tiled deferred shading and ended up with a very strange problem. I tried to compute the minimum and maximum depth of each tile using two shared uints.

    // note: strictly, the GLSL spec forbids initializers on shared variables;
    // they must be initialized from within the shader (see the sketch above)
    shared uint minDepth = 0xFFFFFFFF;
    shared uint maxDepth = 0;

The idea was to compute the maximum and minimum depth using atomicMin() and atomicMax() like this:

    uint depth = ...;
    minDepth = atomicMin(minDepth, depth); // DON'T DO THIS, SEE ABOVE
    maxDepth = atomicMax(maxDepth, depth); // DON'T DO THIS, SEE ABOVE
    barrier();

However, this is not working correctly. I seem to be getting synchronization problems: the result flickers a lot even though the camera is not moving, so the depth buffer is identical between frames. barrier() has no effect at all on the result.

 

[Screenshot: the per-tile min/max depth visualization flickering from frame to frame]

 

I fail to see how this could possibly happen. The atomic min and max functions do not seem to work correctly, with some pixels being randomly ignored. For testing, I decided to output the following:

    if(minDepth > d){ // d is this pixel's depth; the tile minimum should never exceed it
        result = vec3(5, 0, 0);
    }else{
        result = vec3(0);
    }

In other words, if the depth of the current pixel is less than the calculated minimum depth of the tile, make it bright red. Here's the horrifying flickering result:

[Screenshot: tiles flickering bright red where a pixel's depth falls below the computed tile minimum]

What am I doing wrong? How can this possibly happen when I'm using barrier()? Why aren't barrier() and memoryBarrierShared() working as they should?

 

SOLVED, see above.

#2 Wh0p   Members   -  Reputation: 292

Posted 26 June 2014 - 09:29 AM

Check the reference on atomicMin() at http://www.opengl.org/sdk/docs/man/

 

It states: "performs an atomic comparison of data to the contents of mem, writes the minimum value into mem and returns the original contents of mem from before the comparison occurred".

 

The second part is the interesting one in your case. You are passing minDepth as the parameter 'mem', so AFTER the function returns, minDepth already holds the minimum. But immediately afterwards you assign minDepth the value it had BEFORE the function call. I guess this is what breaks the synchronized behaviour.
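
To make that concrete (a hypothetical expansion, not what the compiler literally emits), the broken line behaves roughly like this:

    uint old = atomicMin(minDepth, depth); // atomic read-modify-write; returns the PREVIOUS contents
    minDepth = old;                        // plain non-atomic store, racing against every other invocation

Any invocation whose plain store lands late overwrites the true minimum with a stale value.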

Why are you assigning the value anyway?

 

My guess is that if you leave out the assignment, it will work just fine. (At least my implementation does, and it looks pretty much the same as the parts you showed here.)

 

Hope I could help you there.

 

Unfortunately I'm terrible at reading posts... at least I did some explaining of the error :)


Edited by Wh0p, 26 June 2014 - 09:31 AM.


#3 Glass_Knife   Moderators   -  Reputation: 3774

Posted 26 June 2014 - 09:36 AM


SOLUTION: Ugh. Don't do 
    minDepth = atomicMin(minDepth, depth);
Just do
    atomicMin(minDepth, depth);
The assignment breaks the atomicity. 

 

I think the problem is that atomicMin() returns the original value of the variable. If the depth variable is less than minDepth, it sets minDepth to depth, but then returns the original value of minDepth. The problem isn't the atomic operation; the assignment afterwards just undoes the minimum comparison.
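
A concrete interleaving (with made-up values) shows how the minimum gets clobbered:

    // shared minDepth starts at 0xFFFFFFFF; invocation A has depth 3, B has depth 5
    // A: atomicMin(minDepth, 3) -> minDepth = 3, returns 0xFFFFFFFF
    // B: atomicMin(minDepth, 5) -> minDepth stays 3, returns 3
    // B: minDepth = 3           -> harmless, stores the same value
    // A: minDepth = 0xFFFFFFFF  -> the plain store wipes out the real minimum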


I think, therefore I am. I think? - "George Carlin"
Indie Game Programming

#4 theagentd   Members   -  Reputation: 539

Posted 26 June 2014 - 06:44 PM

Yes, I missed the part of the specification saying that the variables you pass into the atomic functions (in my case minDepth and maxDepth) are inout, not just in. It all made sense once I got that. >_>
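
For reference, the built-in declarations in the GLSL spec make this explicit:

    uint atomicMin(inout uint mem, uint data); // 'mem' is updated in place; the return value is its OLD contents
    uint atomicMax(inout uint mem, uint data);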


Edited by theagentd, 26 June 2014 - 06:44 PM.