

Banding Woes of DOOOM - ssao and depth


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

9 replies to this topic

#1 bwhiting   Members   -  Reputation: 789


Posted 08 July 2012 - 05:17 AM

OK, this is bugging me massively and I'm sure it's an easy fix!!

First, some screenies:

SSAO 1:
banding1.jpg
SSAO 2 (problem highlighted):
banding2.jpg
SSAO 3:
banding3.jpg
Banding in the depth buffer:
banding_depth.jpg

Right, I am pretty sure all my issues come from the banding seen in the depth buffer. I understand there will be some banding when rendering it out to the screen, but not like this: the depth seems to have blips as it gets further away. That is, if I sample the colours in Photoshop, the value decreases, but at the edge of the bands it jumps back up!?

WHYYYYYYYYYYYYYYYYY?!!!!

I am encoding a very short depth range (1 to roughly 100, instead of 1000, for the far plane), and am storing it across all four channels for precision.

The depth buffer has been linearised, and I have checked the maths on paper and in pure ActionScript; the values are correct, i.e. a depth of 50 is 0.5 in the depth buffer, and once it has gone through encoding and decoding it comes back exactly the same. This applies to every value I tried, but something is clearly going wrong on the GPU side of things.

So I was wondering if someone has run into this before, or can recognise the issue from the pictures. I don't think it is a precision problem, as the depth range is so short and encoded across four channels, and in my tests that gives pretty high accuracy.

Any ideas?? If anyone wants I can post the code of the encoding and decoding process too.

Thanks

B


#2 Hodgman   Moderators   -  Reputation: 31130


Posted 08 July 2012 - 05:29 AM

if I sample the colours in photoshop the value decreases but at the edge of the bands it jumps up!?!?!
...
am storing it across all for channels for precision

You mean that you have a float depth value, then you split it into e.g. a float4 for writing into a texture (which you later re-assemble back into a float)?

This is actually a lot trickier than it seems, and if you mess it up, you end up with the funny banding issues you've noticed with Photoshop. Can you post the code for your float->float4 and float4->float packing/unpacking routines? When I was looking for these a while ago, most of the examples I found online were actually buggy (they looked right at a glance, but produced very subtle artefacts like in your image at the boundary conditions).
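For comparison, here is a CPU-side sketch (in Python rather than the original ActionScript/AGAL) of one widely circulated packing scheme — the frac-based encode with each lower channel's tail subtracted back out, similar to the EncodeFloatRGBA/DecodeFloatRGBA pair in Unity's shader includes — with the 8-bit texture write emulated by round-to-nearest quantisation (the rounding convention is an assumption about the hardware):

```python
import math

ENCODE_MUL = (1.0, 255.0, 255.0 ** 2, 255.0 ** 3)
DECODE_DOT = (1.0, 1.0 / 255.0, 1.0 / 255.0 ** 2, 1.0 / 255.0 ** 3)

def pack_rgba(v):
    """Pack v in [0, 1) into four channel values in [0, 1)."""
    enc = [x - math.floor(x) for x in (v * k for k in ENCODE_MUL)]  # frac()
    # Subtract each lower channel's tail out of the channel above it,
    # so nothing is double-counted once the channels are quantised.
    return [enc[0] - enc[1] / 255.0,
            enc[1] - enc[2] / 255.0,
            enc[2] - enc[3] / 255.0,
            enc[3]]

def quantise(enc):
    # Emulate the float4 being written to an 8-bit-per-channel texture
    # (round to nearest -- an assumption, see the discussion below).
    return [max(0, min(255, math.floor(c * 255.0 + 0.5))) / 255.0 for c in enc]

def unpack_rgba(enc):
    return sum(c * k for c, k in zip(enc, DECODE_DOT))
```

With the tail subtraction in place, the full pack → quantise → unpack round trip stays accurate to well below one band; without it, the quantised channels double-count the fractional tails and the error grows to the order of one 8-bit step.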

Edited by Hodgman, 08 July 2012 - 05:31 AM.


#3 bwhiting   Members   -  Reputation: 789


Posted 08 July 2012 - 06:02 AM

Thanks for the lightning-fast reply. Yeah, I found the same, but after a fair bit of fiddling I was able to get one that seemed to work.

Encoding method:

// constants = [(255/256)*1, (255/256)*255, (255/256)*(255*255), (255/256)*(255*255*255)];
mul(dest, constants, float);
frc(dest, dest);

Decoding method:

// constants = [(256/255)/1, (256/255)/255, (256/255)/(255*255), (256/255)/(255*255*255)];
dp4(dest, rgba, constants);

Simple, but it seems to produce very accurate results when I test it on the CPU; very hard to tell if the issue lies on the GPU side or if the problem is fundamental.

Just did a test with un-encoded depths (lower accuracy), but it still has the same damned issue! So frustrating, as it looks so close to how I want it to.

Any other ideas what it could be?
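A CPU-side sketch of the routines above (Python rather than AGAL) suggests the problem may be in the maths itself rather than the GPU: the first channel's dot-product term already reconstructs the depth on its own, so the remaining three terms are pure additive error, spiking by up to roughly 1/255 wherever a lower channel is about to wrap — which would look exactly like values that "jump up" at band edges:

```python
# Constants exactly as in the comments above.
ENC = [(255 / 256) * 1, (255 / 256) * 255,
       (255 / 256) * (255 * 255), (255 / 256) * (255 * 255 * 255)]
DEC = [(256 / 255) / 1, (256 / 255) / 255,
       (256 / 255) / (255 * 255), (256 / 255) / (255 * 255 * 255)]

def frac(x):
    return x - int(x)

def encode(d):          # mul + frc
    return [frac(d * k) for k in ENC]

def decode(rgba):       # dp4
    return sum(c * k for c, k in zip(rgba, DEC))

# Sweep the depth range: the round trip is not exact even in pure float
# maths, before any 8-bit quantisation. Since (256/255) * frac(d*255/256)
# equals d for d < 1, the other three dp4 terms only ever add error.
worst = max(abs(decode(encode(i / 1000.0)) - i / 1000.0) for i in range(1000))
```

On this sketch the worst-case round-trip error is a few thousandths — about one 8-bit step, and in the same ballpark as the 0.02 threshold mentioned below — rather than the ~1/255^4 a four-channel packing should give.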

#4 bwhiting   Members   -  Reputation: 789


Posted 08 July 2012 - 06:45 AM

Had a slight development: I was comparing the neighbouring pixels with a "difference > 0" check, but I replaced it with a "difference > threshold" check.

That reduced the banding massively, but highlighted it in the shadows. The interesting thing is that the threshold value has to be MUCH bigger than I thought it should be: around 0.02, when I should be able to set it to something like 0.000002.

Which leads me to believe the precision is being lost somehow after all.

updated screenshot with threshold:
banding4.jpg

#5 Hodgman   Moderators   -  Reputation: 31130


Posted 08 July 2012 - 08:51 AM

Simple, but it seems to produce very accurate results when I test it on the CPU; very hard to tell if the issue lies on the GPU side or if the problem is fundamental

N.B. at the end of the GPU version, after your pixel shader runs, the hardware's ROP stage takes your float4 output and quantises it down to a byte4 (assuming a 4-channel 8-bit texture format). If emulating your code on the CPU to test correctness, make sure that you perform this same float->byte->float quantisation to emulate the value being written to a texture and read back out again.

I'm not sure if the quantisation process is actually documented anywhere... I guess it would be byteValue = round(input*255), but it might also use floor/truncation... maybe test both approaches in your CPU test and see if it affects your results?
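As a sketch of what that CPU-side emulation might look like (Python; both rounding conventions shown, since the hardware behaviour is the open question here):

```python
def write_texture_round(c):
    # ROP emulation: float channel -> byte, round to nearest
    return min(max(int(c * 255.0 + 0.5), 0), 255)

def write_texture_floor(c):
    # ROP emulation: float channel -> byte, truncation
    return min(max(int(c * 255.0), 0), 255)

def read_texture(byte_value):
    # byte -> float, as the sampler would return it
    return byte_value / 255.0
```

A CPU test of the packing should push every channel through one of these — e.g. `read_texture(write_texture_round(channel))` — between encode and decode. The two conventions disagree on the half-step values (0.5 maps to 128 under rounding but 127 under truncation), which is enough to move band edges around.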

Edited by Hodgman, 08 July 2012 - 08:54 AM.


#6 bwhiting   Members   -  Reputation: 789


Posted 08 July 2012 - 12:26 PM

Hey Hodgman,

Yeah, in all my tests I made sure I took that into account; from what I could gather it was a floored (value*255).

I am beginning to wonder if it is a swizzling issue (not sure if that's the term), so I am going to have a quick play with that to see if it helps. Will report back if it does.

#7 MJP   Moderators   -  Reputation: 11590


Posted 08 July 2012 - 01:22 PM

I'm not sure if the quantisation process is actually documented anywhere... I guess it would be byteValue = round(input*255), but it might also use floor/truncation... maybe test both approaches in your CPU test and see if it affects your results?


The D3D10 spec specifies the following for FLOAT -> UNORM conversion, so I'd imagine that this is what most hardware will use:

Let c represent the starting value.
  • If c is NaN, the result is 0.
  • If c > 1.0f, including INF, it is clamped to 1.0f.
  • If c < 0.0f, including -INF, it is clamped to 0.0f.
  • Convert from float scale to integer scale: c = c * (2^n - 1).
  • Convert to integer:
      • c = c + 0.5f.
      • The decimal fraction is dropped, and the remaining floating-point (integral) value is converted directly to an integer.
This conversion is permitted a tolerance of D3D10_FLOAT32_TO_INTEGER_TOLERANCE_IN_ULP Unit-Last-Place (on the integer side). This means that after converting from float to integer scale, any value within D3D10_FLOAT32_TO_INTEGER_TOLERANCE_IN_ULP Unit-Last-Place of a representable target format value is permitted to map to that value. The additional Data Invertability requirement ensures that the conversion is nondecreasing across the range and all output values are attainable.
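Translated into code, those steps might look like this (Python sketch; n = 8 for a byte channel):

```python
import math

def float_to_unorm(c, n=8):
    """FLOAT -> UNORM conversion following the D3D10 rules quoted above."""
    if math.isnan(c):
        return 0
    c = min(max(c, 0.0), 1.0)       # clamp to [0, 1], covering +/-INF too
    c = c * (2 ** n - 1)            # float scale -> integer scale
    c = c + 0.5
    return int(math.floor(c))       # drop the decimal fraction
```

Note this is round-to-nearest, not a plain floor of value*255 — so a CPU emulation that truncates will disagree with the spec on every half-step value.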

#8 bwhiting   Members   -  Reputation: 789


Posted 08 July 2012 - 01:45 PM

Tried another encoding algorithm and boom, headshot: it's fixed. It's a shame the one I had doesn't work, as it uses fewer operations, but a success is a success, right?

thanks for the detailed info MJP!

If anyone wants to have a play with the method I posted, be my guest; just be sure to post back if you can figure out what is wrong with it :(

Cheers for the responses, guys. I'll be sure to post a link to the demo in here once I have improved it a little!

#9 phil_t   Crossbones+   -  Reputation: 3948


Posted 08 July 2012 - 04:28 PM

What kind of filtering are you using when sampling from your 4-component depth buffer? The only thing that will work properly when you split a single value across four components is point sampling, since linear interpolation between values will not produce the correct result.
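As a hypothetical illustration (Python sketch, using a simplified two-channel packing): if the filter averages the packed channels independently and the averaged result is rounded back to 8 bits — an assumption about fixed-function filtering behaviour, not something documented — the low channel's wrap between neighbouring texels corrupts the reconstructed depth far beyond the packed precision:

```python
def frac(x):
    return x - int(x)

def quantise(c):
    # emulate an 8-bit channel write (round to nearest)
    return max(0, min(255, int(c * 255.0 + 0.5)))

def pack2(v):
    # two-channel packing: coarse channel plus fractional tail
    lo = frac(v * 255.0)
    hi = frac(v) - lo / 255.0
    return (quantise(hi), quantise(lo))

def unpack2(hi, lo):
    return hi / 255.0 + lo / (255.0 * 255.0)

a, b = pack2(0.2505), pack2(0.2515)   # two nearby depths; lo channel wraps between them

# Decode each texel first, then filter the decoded depths (what point
# sampling plus manual interpolation would give you).
depth_filtered_late = (unpack2(*a) + unpack2(*b)) / 2.0

# Average each packed channel independently, then round back to 8 bits
# before decoding (the assumed linear-filtering behaviour).
hi_m = quantise(((a[0] + b[0]) / 2.0) / 255.0)
lo_m = quantise(((a[1] + b[1]) / 2.0) / 255.0)
depth_filtered_early = unpack2(hi_m, lo_m)
```

In this sketch the decode-then-filter path lands within a few millionths of the true midpoint (0.251), while the filter-then-decode path is off by about 0.002 — roughly one full 8-bit band, which is exactly the kind of error point sampling avoids.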

#10 bwhiting   Members   -  Reputation: 789


Posted 09 July 2012 - 01:06 AM

Hi Phil, I made sure to use nearest-neighbour when sampling, but I tried linear just in case it helped. No joy.
With the current encoder it seems to work fine either way; although I am sure there is some loss, it's nowhere near as perceptible as the banding I had before.



