Help DownSampling with HLSL

This seems to downsize an image to 1/4 the size of the input image by reading every fourth texel across and down:

return tex2D(inputTex, Uv.xy * float2(4.0, 4.0));
However, I want to downsize AND downsample for HDR luminance, so I am trying for an output at 1/4 size by writing every 4th texel while adding in the colors of the surrounding texels that would otherwise get left out. But I am getting a weird result instead! Here is what I am doing:

float4 DownsamplePS( float2 Tex : TEXCOORD0 ) : COLOR0 
{
   // start with an empty color
   float4 color = float4(0.0f, 0.0f, 0.0f, 0.0f); 
       
   // accumulate the colors of all texels in the 4x4 square
   color += tex2D( orgRT, Tex + float2( 0.0 , 0.0 )); // get (0,0) color
   color += tex2D( orgRT, Tex + float2( 1.0 , 0.0 )); // add (1,0)'s color to it
   color += tex2D( orgRT, Tex + float2( 2.0 , 0.0 )); // add (2,0)'s color to it
   color += tex2D( orgRT, Tex + float2( 3.0 , 0.0 )); // add (3,0)'s color to it
   color += tex2D( orgRT, Tex + float2( 0.0 , 1.0 )); // add (0,1)'s color to it
   color += tex2D( orgRT, Tex + float2( 1.0 , 1.0 )); // add (1,1)'s color to it
   color += tex2D( orgRT, Tex + float2( 2.0 , 1.0 )); // add (2,1)'s color to it
   color += tex2D( orgRT, Tex + float2( 3.0 , 1.0 )); // add (3,1)'s color to it
   color += tex2D( orgRT, Tex + float2( 0.0 , 2.0 )); // add (0,2)'s color to it
   color += tex2D( orgRT, Tex + float2( 1.0 , 2.0 )); // add (1,2)'s color to it
   color += tex2D( orgRT, Tex + float2( 2.0 , 2.0 )); // add (2,2)'s color to it
   color += tex2D( orgRT, Tex + float2( 3.0 , 2.0 )); // add (3,2)'s color to it
   color += tex2D( orgRT, Tex + float2( 0.0 , 3.0 )); // add (0,3)'s color to it
   color += tex2D( orgRT, Tex + float2( 1.0 , 3.0 )); // add (1,3)'s color to it
   color += tex2D( orgRT, Tex + float2( 2.0 , 3.0 )); // add (2,3)'s color to it
   color += tex2D( orgRT, Tex + float2( 3.0 , 3.0 )); // add (3,3)'s color to it
 
   // Adjust the accumulated amount by lum factor
   float lum = dot(color, luminance_factor); 

   // get average of the 16 samples
   lum *= 0.0625;

   return lum;
}
Any advice is appreciated, Thanks. [Edited by - josiah7 on June 21, 2008 8:13:03 PM]
I notice you're using unnormalized coordinates. Should they perhaps be normalized coordinates instead?
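For example, something along these lines (just a sketch, reusing the orgRT sampler from the code above; sourceWidth and sourceHeight are constants I'm assuming you'd pass in from the application):

float sourceWidth;   // e.g. 1280, set by the application
float sourceHeight;  // e.g. 720

float4 NormalizedOffsetPS( float2 Tex : TEXCOORD0 ) : COLOR0
{
   // one texel step in normalized texture coordinates
   float2 texelSize = float2( 1.0f / sourceWidth, 1.0f / sourceHeight );

   // offsets are now fractions of the texture, not whole numbers
   float4 color = tex2D( orgRT, Tex );
   color += tex2D( orgRT, Tex + float2( texelSize.x, 0.0f ));
   color += tex2D( orgRT, Tex + float2( 0.0f, texelSize.y ));
   color += tex2D( orgRT, Tex + texelSize );

   return color * 0.25f;
}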
Thanks! I really appreciate the help!

That helped a lot; I have simplified the shader so it only downsizes by half. Here is the new code:

float4 GreyScaleDownSample( float2 UV : TEXCOORD0 ) : COLOR0
{
   // create normalized coordinate offsets from the input texture size
   float offset1 = 1.0f / 1280.0f;
   float offset2 = 1.0f / 720.0f;

   float4 color = 0.0f;
   float3 weight = float3( 0.299f, 0.587f, 0.114f );

   UV.xy = UV * float2( 2.0f, 2.0f ); // offset each read times 2 in xy to halve the output

   color += tex2D( sourceRT, UV );                               // get the color of the target texel
   color += tex2D( sourceRT, UV + float2( -offset1, 0.0f ));     // add the color of the texel to the left
   color += tex2D( sourceRT, UV + float2( -offset1, -offset2 )); // add the color of the texel to the left and above
   color += tex2D( sourceRT, UV + float2( 0.0f, -offset2 ));     // add the color of the texel above

   // Adjust the accumulated amount by the luminance weight
   float lum = dot( color.rgb, weight );

   // get the average of the 4 samples
   lum *= 0.25f;

   return lum;
}


I calculate the offsets in the shader for clarity of the method. Once I finish it, I will pass these to the shader instead.

Does this seem correct? It seems to be working, but it doesn't look like similar code I've examined. Especially
" UV.xy = UV * float2(2.0, 2.0);"
But that's the only way I have found to reduce the output size.
Any advice please?

Thanks.

[Edited by - josiah7 on June 22, 2008 2:10:00 PM]
In my above code, should I be multiplying the offsets by "0.5" to get the texel centers? The result looks ok, but I am not sure my method is right.
One thing that often goes wrong is that you have to adjust the filter kernel size to the sizes of the source and target render targets. You can't downsample 640x480 to 64x64 with a 16-tap filter kernel.
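To illustrate (this is only a sketch, not your shader, and the dynamic loop needs a shader model that supports it): the number of taps has to follow the ratio between the two render targets, so a straight box filter would look roughly like this, with sourceSize and destSize supplied by the application:

float2 sourceSize; // e.g. (256, 256)
float2 destSize;   // e.g. (64, 64) -> a 4x4 kernel

float4 BoxDownsamplePS( float2 UV : TEXCOORD0 ) : COLOR0
{
   float2 texelSize = 1.0f / sourceSize;
   int2 taps = int2( sourceSize / destSize ); // source texels per destination texel

   float4 sum = 0.0f;
   for ( int y = 0; y < taps.y; ++y )
   {
      for ( int x = 0; x < taps.x; ++x )
      {
         sum += tex2D( sourceRT, UV + float2( x, y ) * texelSize );
      }
   }
   return sum / ( taps.x * taps.y );
}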
It still doesn't seem to be working correctly :(

It should be averaging the pixels more with each pass, but it doesn't seem to be doing that. I am trying to get the average luminance by recursively downsampling by 4x each time. I currently have a lot of flicker in the final 1x1 sample. Here is an image of the samples created by the downsampling filter.

And here is the code I am using for downsampling 256 -> 64:
float4 DownSample( float2 UV : TEXCOORD0 ) : COLOR0
{
   float TexelSize = 1.0f / 256.0f; // for a 256x256 source image
   float3 vSample = 0.0f;

   UV.xy = UV * float2( 4.0f, 4.0f ); // offset each read x4 to scale the output image down to 1/4 size

   // add row 0 colors
   vSample += tex2D( sourceRT, UV + float2( 0.0f * TexelSize, 0.0f * TexelSize )).rgb; // get the color of the texel at row 0, column 0
   vSample += tex2D( sourceRT, UV + float2( 1.0f * TexelSize, 0.0f * TexelSize )).rgb; // add the color of the texel at row 0, column 1
   vSample += tex2D( sourceRT, UV + float2( 2.0f * TexelSize, 0.0f * TexelSize )).rgb; // add the color of the texel at row 0, column 2
   vSample += tex2D( sourceRT, UV + float2( 3.0f * TexelSize, 0.0f * TexelSize )).rgb; // add the color of the texel at row 0, column 3

   // add row 1 colors
   vSample += tex2D( sourceRT, UV + float2( 0.0f * TexelSize, 1.0f * TexelSize )).rgb; // add the color of the texel at row 1, column 0
   vSample += tex2D( sourceRT, UV + float2( 1.0f * TexelSize, 1.0f * TexelSize )).rgb; // add the color of the texel at row 1, column 1
   vSample += tex2D( sourceRT, UV + float2( 2.0f * TexelSize, 1.0f * TexelSize )).rgb; // add the color of the texel at row 1, column 2
   vSample += tex2D( sourceRT, UV + float2( 3.0f * TexelSize, 1.0f * TexelSize )).rgb; // add the color of the texel at row 1, column 3

   // add row 2 colors
   vSample += tex2D( sourceRT, UV + float2( 0.0f * TexelSize, 2.0f * TexelSize )).rgb; // add the color of the texel at row 2, column 0
   vSample += tex2D( sourceRT, UV + float2( 1.0f * TexelSize, 2.0f * TexelSize )).rgb; // add the color of the texel at row 2, column 1
   vSample += tex2D( sourceRT, UV + float2( 2.0f * TexelSize, 2.0f * TexelSize )).rgb; // add the color of the texel at row 2, column 2
   vSample += tex2D( sourceRT, UV + float2( 3.0f * TexelSize, 2.0f * TexelSize )).rgb; // add the color of the texel at row 2, column 3

   // add row 3 colors
   vSample += tex2D( sourceRT, UV + float2( 0.0f * TexelSize, 3.0f * TexelSize )).rgb; // add the color of the texel at row 3, column 0
   vSample += tex2D( sourceRT, UV + float2( 1.0f * TexelSize, 3.0f * TexelSize )).rgb; // add the color of the texel at row 3, column 1
   vSample += tex2D( sourceRT, UV + float2( 2.0f * TexelSize, 3.0f * TexelSize )).rgb; // add the color of the texel at row 3, column 2
   vSample += tex2D( sourceRT, UV + float2( 3.0f * TexelSize, 3.0f * TexelSize )).rgb; // add the color of the texel at row 3, column 3

   vSample *= 0.0625f; // divide by 16 to get the average of the 16 samples

   return float4( vSample, 1.0f );
}

Can anyone please help out here? What the heck am I doing wrong?
Big Thanks for any help!

BTW, I am not using DirectX.
Why are you using a hard-coded texel size? It changes for every pass.
Thanks for looking, MJP.

I put 1/256 in the code I posted so anyone could see how I was calculating it for a 256x256 texture. I actually use (1 / texture.width) and pass it to the shader.
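Roughly like this (a simplified sketch, not the full shader), with the constant updated before each pass since the source texture shrinks every time:

float TexelSize; // set to 1.0f / sourceWidth by the application before each pass

float4 DownSamplePass( float2 UV : TEXCOORD0 ) : COLOR0
{
   float4 color = tex2D( sourceRT, UV );
   color += tex2D( sourceRT, UV + float2( TexelSize, 0.0f ));
   color += tex2D( sourceRT, UV + float2( 0.0f, TexelSize ));
   color += tex2D( sourceRT, UV + float2( TexelSize, TexelSize ));
   return color * 0.25f;
}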

Shouldn't each pass become a more average gray? It doesn't look like it's moving towards an average at all. This is causing a lot of flickering with the 1x1 sample.

Thanks again.
If you have to multiply your texture coordinates by the scale factor, the quad you're using for rendering is too big for the render target. Usually the quad is set up so that it exactly covers the entire area of the RT, which lets you use UVs that go from 0 to 1 with no problem. Hopefully this answers your question from the previous post...

As for the correctness of your results...they seem to at least roughly match the result I get by bilinearly downscaling your images in photoshop.
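For reference, a minimal fullscreen-quad vertex shader along those lines (just a sketch; the vertex inputs are assumed to be a quad spanning clip space with 0..1 UVs):

struct VS_OUTPUT
{
   float4 Pos : POSITION;
   float2 UV  : TEXCOORD0;
};

VS_OUTPUT FullscreenQuadVS( float4 pos : POSITION, float2 uv : TEXCOORD0 )
{
   VS_OUTPUT o;
   o.Pos = pos; // quad already spans (-1,-1)..(1,1), so no transform is needed
   o.UV  = uv;  // 0..1 across the whole render target
   return o;
}

With the quad set up this way, the pixel shader can sample with the interpolated UV directly instead of multiplying it by 2 or 4.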
Thanks MJP, appreciate it greatly.

Filtering in image software and comparing results, great idea!
I don't understand why the 1x1 changes so much (flickering) so quickly. Perhaps I need to start with a larger image than 256x256? Or maybe try time-weighted luminance like in the DX HDRLighting example.
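If I understand the idea correctly, the time-weighted version would look something like this sketch (the sampler names and the adaptation constant are my guesses, not the sample's exact code): keep last frame's 1x1 adapted luminance around and blend it towards the newly measured value each frame, so a single bright or dark frame can't make the exposure jump.

float ElapsedTime; // seconds since the last frame, set by the application
float AdaptRate;   // how quickly to adapt, e.g. around 1.0

sampler CurrentLumSampler; // this frame's measured 1x1 average luminance
sampler AdaptedLumSampler; // last frame's 1x1 adapted luminance

float4 AdaptLuminancePS( float2 UV : TEXCOORD0 ) : COLOR0
{
   float currentLum = tex2D( CurrentLumSampler, float2( 0.5f, 0.5f )).r;
   float adaptedLum = tex2D( AdaptedLumSampler, float2( 0.5f, 0.5f )).r;

   // move part of the way towards the new value each frame
   float newAdapted = adaptedLum + ( currentLum - adaptedLum )
                      * ( 1.0f - exp( -ElapsedTime * AdaptRate ));

   return float4( newAdapted, newAdapted, newAdapted, 1.0f );
}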

Thanks.

