
Vertex color interpolation weirdness (or is my hardware trolling me ?)



Hi all !

 

I’m facing some weird graphics behavior and I don’t know how it’s possible, or whether I’m hitting some hardware limitation I’m not aware of. So before going crazy, I’m asking for some advice / hints / clues, or confirmation that there’s nothing we can do about it…

 

Here is the main issue I’m addressing: color banding. In our project, the sky atmosphere layer is rendered as a big dome, and it shows color banding. In the sky shader, we don’t use a texture for coloring but rather a blend between three colors. The weights for this blend are determined by the vertex color attribute set in the dome mesh. It seems that interpolating this vertex color from the VS to the PS causes the banding.
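
To give an idea of what the shader does, the blend looks roughly like this (a simplified sketch with made-up names, not our actual code):

// Three sky colors blended with weights baked into the dome's vertex colors.
// horizonColor / midColor / zenithColor stand in for whatever constants we feed the shader.
float3 ShadeSkyDome(float3 horizonColor, float3 midColor, float3 zenithColor, float3 vertexColor)
{
    // vertexColor comes from the mesh's COLOR attribute, interpolated from VS to PS
    return horizonColor * vertexColor.r
         + midColor     * vertexColor.g
         + zenithColor  * vertexColor.b;
}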

 

Here are the experiments I made while trying to understand what’s going on – note that our render target format is R11G11B10_FLOAT:

  1. Outputting the interpolated vertex color: at the end of the vertex shader and the pixel shader, I overwrite any computed color values to display only the vertex color given as input by the mesh:
v2f VS(MeshVertex IN)    // MeshVertex: placeholder name for our vertex input struct
{
    [……]
    OUT.Color = IN.Color;    // forward the mesh vertex color unchanged
    return OUT;
}

PSOUT PS(v2f IN)
{
    […..]
    PSout.Color = IN.Color;  // output only the interpolated vertex color
    return PSout;
}

Here is what I get from RenderDoc (when looking at the top of the dome):

[attachment=31897:vc_interpolation.jpg]

 

This image shows only the blue channel, with a scaled range so the banding is easier to see. In green is one triangle of the dome mesh; we can see that the banding clearly follows the shape of the dome.

Top vertex color for blue is 0.98431, and 0.94118 for bottom vertices (VS output and input are exactly the same).

Band values from top to bottom: 0.96875, 0.95313, and 0.9375. The difference between bands is ~0.0156, and the difference between the vertex colors and the closest band is also ~0.0156 (~= 4/255).

 

So this first result raises a few questions:

  • How can interpolation result in only 3 different values over 200+ pixels? Isn’t interpolation supposed to be performed in fp32?
  • Why such big steps in values even though we output into an HDR format?
  • Why don’t the first and last bands match the vertex color input?

 

2. Forcing a fixed value at the end of the vertex shader:

v2f VS(MeshVertex IN)    // MeshVertex: placeholder name for our vertex input struct
{
    [……]
    OUT.Color = IN.Color;
    OUT.Color.b = 1.0f;      // force the blue channel to a constant
    return OUT;
}

PSOUT PS(v2f IN)
{
    […..]
    PSout.Color = IN.Color;  // output the interpolated value unchanged
    return PSout;
}

And here it is (again, only the blue channel, scaled range):

[attachment=31898:1.0f_interpolation.jpg]

 

Interesting here: we only get two different values, 1.0 or 0.99219 (a difference of ~2/255), in a noisy way, with no distinguishable pattern except the mesh shape…

So:

  • How can interpolating a value of 1.0 between two vertices give anything other than 1.0? I know about fp precision issues, but a ~2/255 difference is suspicious.
  • WTF ?? O__O

I added the following code in the PS to make sure this doesn’t come from quantization errors when writing to the target:

float4 fragout;
if (IN.Color.b == 1.0f)      // blue is the channel forced to 1.0f in the VS
{
    fragout = float4(0.0f, 1.0f, 0.0f, 1.0f);   // green: the value survived interpolation exactly
}
else
{
    fragout = float4(1.0f, 0.0f, 0.0f, 1.0f);   // red: the value differs from 1.0f
}
PSout.Color = fragout;

And it results in the same noisy image in red and green…

 

3. Outputting to a 64-bit float format:

This just makes more bands with smaller steps, but they’re still visible… And the same issue occurs when trying to output a constant value.

 

I also played with other keywords like precise, different vertex output semantics, and interpolation modifiers, but no success so far :(.
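
For the record, this is roughly the kind of thing I tried on the interpolant (the struct and semantic names are just how ours approximately look); none of it changed the result:

struct v2f
{
    float4 Position : SV_Position;
    noperspective float4 Color : COLOR0;   // also tried centroid, sample and plain linear, plus precise on the VS output
};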

 

So again, if anyone here has some insights on this, or other relevant tests to try... I’m kinda open to anything at this point…

 

Thanks !

 

 

 

 


The banding you are seeing is expected with the FP10 format. The blue channel of R11G11B10_FLOAT has a 5-bit mantissa and a 5-bit exponent, so between 0.5 and 1.0 the representable values are spaced 1/64 apart (32 unique codes in that range). 1 / 0.0156 == 64, which matches the step you measured.

 

The banding is quantisation happening when the value is stored to the FP10 blue channel.
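
To make the numbers concrete, here is a tiny sketch of what the FP10 blue channel can represent near 1.0 (my own illustration, assuming the usual round-to-nearest conversion when writing to the target):

// With a 5-bit mantissa, the FP10-representable values in [0.5, 1.0) are k/64,
// so a smooth 0.94118..0.98431 gradient can only land on steps 1/64 ~= 0.0156
// apart -- exactly the band spacing measured above.
float QuantizeToFP10Blue(float v)   // only valid for 0.5 <= v < 1.0
{
    return round(v * 64.0f) / 64.0f;
}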

 

A solution is to add noise or dither at export time to mask the banding artifacts.

Here is a great presentation on this.
http://loopit.dk/banding_in_games.pdf
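
A minimal sketch of the idea in the shader (function names and constants are mine, not from the presentation): add a small amount of per-pixel noise before the value gets quantised by the render target, so the 1/64 steps are broken up instead of forming clean bands.

// Cheap screen-space noise in [0, 1) (interleaved gradient noise).
float InterleavedGradientNoise(float2 pixelPos)
{
    return frac(52.9829189 * frac(dot(pixelPos, float2(0.06711056, 0.00583715))));
}

// Offset the color by up to one quantisation step of the blue channel (1/64 near 1.0)
// just before writing it out; the red/green channels quantise more finely, so this
// slightly over-dithers them.
float3 DitherForFP10(float3 color, float2 svPosition)
{
    float noise = InterleavedGradientNoise(svPosition) - 0.5;
    return color + noise * (1.0 / 64.0);
}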


Thanks for the explanation, it's really helpful :).

However, it doesn't answer all my questions; I still don't understand the result I get with my green/red test. That test is supposed to remove any dependency on the RT format, and it still shows something odd with the VC interpolation...


Alright !

Here is the answer:

 

  • Indeed, the banding is due to my RT format: as AliasBlinman explained, with R11G11B10_FLOAT we lose a lot of precision on values close to 1. Using R8G8B8A8_UNORM actually shows no banding.
  • It looks like an interpolated value can carry some fp error on the order of 10e-6, so testing an interpolated value with == is a bad idea. Using abs(X – ref) < epsilon for my red/green test still shows noise for epsilon = 10e-7 (see the snippet after this list).
  • RenderDoc shows you displayable values, and it looks like its strategy is to floor them… So my 1.0f-ish interpolated value was snapped down to the nearest value representable in R11G11B10_FLOAT below it, which is 63/64 in the case of the blue channel.
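
For reference, a sketch of the comparison that behaves (the helper name is just for illustration; the epsilon values are the ones from my tests):

// Exact == on an interpolated value fails because of the tiny interpolation error;
// comparing against a small tolerance makes the red/green test stable.
bool ApproxEqual(float x, float reference, float epsilon)
{
    return abs(x - reference) < epsilon;
}

// With epsilon = 10e-7 the noise is still there (the error is around 10e-6),
// so the tolerance has to be at least on that order:
// if (ApproxEqual(IN.Color.b, 1.0f, epsilon)) { /* green */ } else { /* red */ }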

 

Now I’ll stop chasing windmills and start implementing proper dithering.

 

Thanks all for your help ! :D
