

Hello all, I recently wrote a shader whose only purpose is to fade out an all-white texture passed to it. The result is saved to a render target and then passed back into the shader. This works unless I set the fade value greater than .95f. When it's higher than .95f, the color doesn't fade to black but stops at .1921f. I initially tried the following pixel shader:

float4 PixelShaderFunction(float4 cin : COLOR0, float2 coords : TEXCOORD0) : COLOR0
{
    cin = tex2D(inputsampler, coords) * .99;
    return cin;
}

With this the color does indeed fade, but not completely. The same thing happens when I use:

float4 PixelShaderFunction(float4 cin : COLOR0, float2 coords : TEXCOORD0) : COLOR0
{
    cin = lerp(float4(0, 0, 0, 1), tex2D(inputsampler, coords), float4(.99f, .99f, .99f, .99f));
    return cin;
}

HOWEVER! When I use the following function, the color does fade completely as expected:

float4 PixelShaderFunction(float4 cin : COLOR0, float2 coords : TEXCOORD0) : COLOR0
{
    cin = floor((tex2D(inputsampler, coords) * .99f) * 255.0f);
    return cin / 255.0f;
}

Why aren't the first two functions working as expected?! I can't see it being floating-point precision, so what gives? Just in case it's something in my drawing function, here it is:

protected override void Draw(GameTime gameTime)
{
    // Pass 1: run the Diffusion (fade) shader over the feedback texture into rtA.
    graphics.GraphicsDevice.SetRenderTarget(rtA);
    graphics.GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin(SpriteSortMode.Texture, BlendState.Opaque, null, null, null, Diffusion);
    spriteBatch.Draw(Feedback, Vector2.Zero, Color.Black);
    spriteBatch.End();

    // Pass 2: copy rtA into rtB so it can be fed back in next frame.
    graphics.GraphicsDevice.SetRenderTarget(rtB);
    graphics.GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    spriteBatch.Draw(rtA, Vector2.Zero, Color.White);
    spriteBatch.End();
    Feedback = rtB;

    // Final pass: draw the result to the back buffer.
    graphics.GraphicsDevice.SetRenderTarget(null);
    graphics.GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    spriteBatch.Draw(rtB, Vector2.Zero, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}

In the first example, you multiply by a double-precision value. You could try adding an f at the end in case the compiler doesn't convert it.

I suspect the inputs to the lerp function are in the wrong order in the second example, since you pass a constant as the s parameter.
http://msdn.microsof...v=vs.85%29.aspx

I don't see why you multiply by 0.99.

In the first example, adding f to the end does not help; this really doesn't seem to be a precision issue from what I can tell.

In the second example, I'm lerping by a constant because the sampled color is the result of the previous lerp operation.

I use .99 in all these examples because I want to remove 1% of the color value each pass. It's an extreme case, but it's odd that it doesn't work while a value 4% lower (.95) does.

What color depth are your render targets using?

Hello Dawoodoz. You pretty much hit the nail on the head from what I found out last night: if I switch the surface format to Vector4, it behaves as expected in all cases. Before that I was using Color, which provides only 8 bits per channel. When the color reaches .1921f (49/255) and gets passed to the shader, the shader multiplies it by .99, giving .1902 (48.51 in 8-bit terms). From what I can gather, although I can't find an official answer, something is consistently rounding that value upward: when it's converted back to the Color format, 48.51 gets rounded back up to 49, so the value never drops below that point.
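The stalling fixed point described above is easy to reproduce off the GPU. Here's a minimal sketch (in Python rather than HLSL, since the point is the integer quantization, not the shading): repeatedly scale an 8-bit channel value by .99 and re-quantize on each store, once rounding to nearest (which is what the Color render target appears to do) and once truncating (which is what the floor() workaround shader forces).

```python
import math

def fade(v, quantize, factor=0.99, passes=2000):
    """Scale an 8-bit channel value each pass, re-quantizing on store,
    the way an 8-bit-per-channel (Color) render target does."""
    for _ in range(passes):
        v = quantize(v * factor)
    return v

round_nearest = lambda x: int(x + 0.5)   # round-to-nearest on store
truncate = math.floor                    # what the floor() workaround forces

# 49 is a fixed point under round-to-nearest: 49 * .99 = 48.51 -> back to 49.
assert round_nearest(49 * 0.99) == 49
# Under truncation the same step keeps descending: 48.51 -> 48.
assert truncate(49 * 0.99) == 48

print(fade(255, round_nearest))  # stalls at a small positive value
print(fade(255, truncate))       # reaches 0: truncation always sheds the remainder
```

This matches the observed behavior: with round-to-nearest the fade gets stuck near .19, while the floor-based shader fades all the way to black.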

If one runs the following code in a pixel shader:

float4 PixelShaderFunction(float4 cin : COLOR0, float2 coords : TEXCOORD0) : COLOR0
{
    cin = float4(.190235f, .190235f, .190235f, 1);
    return cin;
}

you can see this rounding-up behavior: 49.0f / 255.0f = .192157, * .99f = .190235, * 255.0f = 48.51. From previous programming experience I'm used to the default behavior being rounding down (truncation). In this case the default seems to be rounding to nearest instead, which carries 48.51 back up to 49. Is this behavior documented anywhere?
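For what it's worth, the Direct3D 10+ functional spec's data-conversion rules do specify round-to-nearest-even for FLOAT-to-UNORM conversion (XNA sits on Direct3D 9, where the rule is looser, but the observed behavior matches). As a hedged sketch of the store step the shader output goes through with a Color render target:

```python
def float_to_unorm8(f):
    """Sketch of the FLOAT -> UNORM8 conversion an 8-bit render target
    performs on shader output, assuming round-to-nearest (per the
    Direct3D data-conversion rules for newer API versions)."""
    f = min(max(f, 0.0), 1.0)      # clamp to [0, 1]
    return int(f * 255.0 + 0.5)    # scale and round to nearest

print(float_to_unorm8(0.190235))  # 49: 0.190235 * 255 = 48.51, rounds up
```

Under truncation the same input would store 48, which is exactly the difference between the stuck fade and the working floor() version.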