Why is this 128-bit color format being converted to 32-bit? (HLSL / SlimDX 9)

Note: Sorry, I know I have been posting tons of newbie questions on here lately, but I really don't understand this problem.

I am trying to write an HLSL pixel shader for a project I am working on. Basically, what I want to do is this: if a texture has a pixel with a float value of 0.52 (132.6 on a 0-255 scale), I want to output 133 60% of the time and 132 40% of the time. Right now I am just trying to output the fractional remainder of the RGB value (i.e. the chance to bump the pixel up), however I always get a value of zero. I think this is because the colors are getting quantized to a 0-255 scale before they reach the shader, but I don't know why that would happen, because I am using the A32B32G32R32F format, which should be able to store plenty of information about the colors. Here is my very simple shader code. I am using SlimDX (DX9) if that matters.

[source lang="cpp"]sampler2D Tex0 : register(s0);
float4 DitherByChance(float2 coords : TEXCOORD0) : COLOR
{
float4 newColor = float4(0, 0, 0, 0); // The pixel color to return
double4 oldColor = tex2D(Tex0, coords);
double scale = 0.0039215686274509803921568627451; // 1 / 255

double rPercentChance = frac(oldColor.r / scale); //Chance to round red channel up
double gPercentChance = frac(oldColor.g / scale); //Chance to round green channel up
double bPercentChance = frac(oldColor.b / scale); //Chance to round blue channel up

newColor.r = rPercentChance;
newColor.g = gPercentChance;
newColor.b = bPercentChance;

newColor.a = 1;

return newColor;
}

technique DitherViaChance
{
pass Pass1
{
PixelShader = compile ps_2_0 DitherByChance();
}
}[/source]
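(For reference, the end goal described above, rounding each channel up with a probability equal to its fractional remainder, might eventually look something like the hedged sketch below. It needs a per-pixel random source; the noise texture, its sampler register, and the DitherComplete name are illustrative assumptions, not part of the posted code.)

[source lang="cpp"]sampler2D Tex0 : register(s0);      // source image (A32B32G32R32F)
sampler2D NoiseTex : register(s1);  // assumed: texture of uniform random values in [0, 1)

float4 DitherComplete(float2 coords : TEXCOORD0) : COLOR
{
    float4 oldColor = tex2D(Tex0, coords);
    float3 scaled = oldColor.rgb * 255.0;      // e.g. 0.52 -> 132.6
    float3 chance = frac(scaled);              // 0.6 = 60% chance to round up
    float3 rand = tex2D(NoiseTex, coords).rgb; // per-pixel random values

    // step(rand, chance) is 1 when rand <= chance, so each channel rounds up
    // with the desired probability; the result then maps back to the 0-1 range.
    float3 dithered = (floor(scaled) + step(rand, chance)) / 255.0;
    return float4(dithered, 1);
}[/source]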

Here is the relevant VB code
[source lang="vb"]
Public Overrides Sub RenderScene()
'ditherBC.Technique = ditherBC.GetTechnique("DitherViaChance")

Me.device.BeginScene()
spriteRender.Begin(SpriteFlags.None)
ditherBC.Begin()
ditherBC.BeginPass(0)
'NOTE tex IS CREATED WITH createCharacterTexture()'
spriteRender.Draw(tex, New Rectangle(0, 0, 50, 50), New Color4(1, 1, 1))
spriteRender.End()
ditherBC.EndPass()
ditherBC.End()
Me.device.EndScene()
End Sub

Public Function createCharacterTexture() As Texture
Dim RtsHelper As RenderToSurface = New RenderToSurface(device, 100, 100, Format.A32B32G32R32F)
Dim texture As Texture = New Texture(device, 100, 100, 1, Usage.RenderTarget, Format.A32B32G32R32F, Pool.Default)

Dim sloanFont As Font = New Font(device, 98, 98, FontWeight.Normal, 1, False, CharacterSet.Default, Precision.Default,
FontQuality.ClearTypeNatural, PitchAndFamily.Default, "Sloan")


RtsHelper.BeginScene(texture.GetSurfaceLevel(0), New Viewport(0, 0, 100, 100))



sloanFont.DrawString(Nothing, "A", 1, 1, New SlimDX.Color4())
RtsHelper.EndScene(Filter.None)

Font.Dispose()
RtsHelper.Dispose()

Return texture
End Function


[/source]
Do I understand correctly that you are rendering the sloanFont.DrawString output to the 128-bit render target?

Maybe your source material is only 8-bit, i.e. the font texture and the color?

Otherwise, can you confirm that your shader is outputting values correctly? I mean, I don't understand exactly the syntax you use for rendering to a render target, since I'm not a VB pro.

Cheers!
Yes, that is correct. I create a RenderToSurface object with a 128-bit format so that I can render some text directly to a surface. I then create a 128-bit texture with one surface to act as the render target. I am not sure if there is such a thing as different bit formats for the Color class. I think if I create a color of 0.52, 0.52, 0.52 it does not get converted until it gets rendered onto the texture, but I don't know. "Otherwise, can you confirm that your shader is outputting values correctly?" Yes. If I manually put in 0.52 instead of oldColor.r (which should be 0.52), I get the expected result.
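(In other words, the manual check mentioned above amounts to feeding a known constant through the same maths instead of the sampled value; a minimal sketch, with an illustrative function name. If this shows roughly 0.6 grey while sampling the texture shows black, the values are being quantized somewhere before the shader ever sees them.)

[source lang="cpp"]float4 DitherByChanceConstant(float2 coords : TEXCOORD0) : COLOR
{
    // Bypass the texture read entirely and use a known high-precision value.
    float r = frac(0.52 * 255.0); // frac(132.6) = 0.6
    return float4(r, r, r, 1);
}[/source]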
The way DrawString() works is that it uses a texture to hold an image of each character in the font. It's that texture which is almost certainly 8-bit and causing you problems.

You want to draw something other than text - a triangle with a different colour at each vertex should do the trick.
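(Another way to get non-text content into the 128-bit target, assuming the render-to-surface pass can use its own pixel shader, would be to write a smooth gradient straight from HLSL, which guarantees values that fall between 8-bit steps. A hedged sketch, not tested:)

[source lang="cpp"]// Hypothetical fill shader for the A32B32G32R32F render target.
// coords.x varies continuously across the quad, so most pixels land between
// 8-bit steps and DitherByChance should then return non-zero fractions.
float4 GradientFill(float2 coords : TEXCOORD0) : COLOR
{
    return float4(coords.x, coords.x, coords.x, 1);
}[/source]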
Firstly, don’t use magic numbers such as “0.0039215686274509803921568627451”.
Use:
float scale = 1.0 / 255.0;


Secondly, are you aware that DitherByChance() scales its values up into the 0-255 range before taking the fractional part?
Reading from the texture in the shader gives you a value between 0 and 1.
Then you multiply that by 255.
For example, if the texture gave you 0.52 for red as per your example, you then do the following to it:
double rPercentChance = frac(oldColor.r / scale);
Which evaluates to the following:
double rPercentChance = frac(0.52 / 0.0039215686274509803921568627451);
Which evaluates to:
double rPercentChance = frac(132.59999999999999999999999999993);

Is this what you were expecting?
If so, why would you not just do:
double rPercentChance = frac(oldColor.r * 255.0);
instead? Multiplication is faster than division.


L. Spiro



From L. Spiro:
double rPercentChance = frac(132.59999999999999999999999999993);
[/quote]

As far as I know, this is what he is expecting to get, but for some reason he doesn't get that result.

Cheers!
After re-reading his original post I think that is what he expects too, but that is a fairly crazy way to get it.
Just multiply by 255.0. Divisions are slow.
It should be:
sampler2D Tex0 : register(s0);

float4 DitherByChance(float2 coords : TEXCOORD0) : COLOR
{
    float4 newColor = float4(0, 0, 0, 1); // The pixel color to return
    double4 oldColor = tex2D(Tex0, coords);
    double scale = 255.0; // Scale up to the 0-255 range

    newColor.r = frac(oldColor.r * scale); // Chance to round red channel up
    newColor.g = frac(oldColor.g * scale); // Chance to round green channel up
    newColor.b = frac(oldColor.b * scale); // Chance to round blue channel up

    //newColor.rgb = frac( oldColor.rgb * scale ); // This would be a lot faster…

    return newColor;
}


Why not return the unaltered texture values?
return oldColor;
See what happens.


L. Spiro


From L. Spiro:
After re-reading his original post I think that is what he expects too, but that is a fairly crazy way to get it.
Just multiply by 255.0. Divisions are slow. [/quote]

Yeah, that's what I was originally doing, but in my quest to try and figure out what was going on I changed it to see if it made any difference. (Not sure why I thought it would.)

From L. Spiro:
Why not return the unaltered texture values?
return oldColor; See what happens.
[/quote]

Yes, I have done that, and it seems to work, but it would not give me any more information, because when rasterized to the screen it would definitely be quantized to values of 0-255. Thanks for the tip to combine the three lines into one; that is good to know.

From Adam_42:
The way DrawString() works is that it uses a texture to hold an image of each character in the font. It's that texture which is almost certainly 8-bit and causing you problems.
You want to draw something other than text - a triangle with a different colour at each vertex should do the trick.
[/quote]

This does not appear to be the problem... at least not the only problem. If I create a texture like this:
[source lang="vb"]Dim tex As Texture = New Texture(device, 100, 100, 1, Usage.RenderTarget, Format.A32B32G32R32F, Pool.Default)[/source]
and then do a device.ColorFill() like so:
[source lang="vb"]device.ColorFill(tex.GetSurfaceLevel(0), New Color4(1, 0.53, 0.52, 0.52))[/source]
I still get an all-black output, when you would expect an all-0.6 gray output. I have tried locking the surface and filling all the data with random bytes using Random.NextBytes(), which seems to work fine, because when I pass it through the effect I get random colored pixels. (Note: not the same random pixels that get displayed when I don't pass it through the effect.)

Is there some kind of format setting for the device object itself? Do I need to (can I even?) set the backbuffer to a 128-bit surface? I am really at a loss here.
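(One way to narrow this down from the shader side, as a hedged sketch with an illustrative name: output white wherever the sampled value carries any precision beyond 8 bits, and black where it sits exactly on an 8-bit step. With a ColorFill of 0.53/0.52 the quad should come out white if the 128-bit contents survived; an all-black result means the data was quantized upstream of the shader.)

[source lang="cpp"]float4 PrecisionProbe(float2 coords : TEXCOORD0) : COLOR
{
    float r = tex2D(Tex0, coords).r;
    float residue = frac(r * 255.0);                            // 0 only on exact 8-bit steps
    float hasExtra = step(0.001, min(residue, 1.0 - residue));  // tolerate float rounding
    return float4(hasExtra, hasExtra, hasExtra, 1);
}[/source]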
Hi,

Could it be that device.ColorFill does the conversion from the floating-point color to an 8-bit color? At least the native D3D9 ColorFill accepts only a DWORD color (8 bits per channel) as a parameter, so the 0.52/0.53 values would already be snapped to 8-bit steps before they reach the surface. Of course, it doesn't explain the other parts of the problem.

Cheers!
Could you elaborate? I am using D3D9 via the SlimDX wrapper. Maybe SlimDX is taking a float color but converting it to 0-255 before passing it to DirectX?

