Down/Up scaling struggles

Hi All. I am struggling to get my downscaling/upscaling working correctly. I have significant banding happening on my bloom source. I believe this to be related to my 4x4 downsampling.

I am using XNA and have corrected the D3D9 half-pixel offset issue by adjusting my position coordinates with the following code:



// Shift the clip-space position half a pixel left and up so texel
// centers align with pixel centers (D3D9 rasterization rules).
position.x = position.x - g_ClipSpaceHalfPixel.x;
position.y = position.y + g_ClipSpaceHalfPixel.y;
position.z = position.z;
position.w = 1.0f;


g_ClipSpaceHalfPixel is calculated by dividing 1.0f by the target render target's width and height. (Clip space spans two units across the viewport, so a full pixel is 2/width wide and half a pixel is 1/width.)
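For completeness, the full vertex shader is essentially the following (a minimal sketch; the struct and function names are illustrative, not my exact code):


float2 g_ClipSpaceHalfPixel; // (1.0f / rtWidth, 1.0f / rtHeight)

struct VSOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

VSOutput ScreenQuadVS( float4 position : POSITION0, float2 texCoord : TEXCOORD0 )
{
    VSOutput output;
    // Shift half a pixel left and up in clip space so texel centers
    // line up with pixel centers under D3D9 rasterization rules.
    output.Position.x = position.x - g_ClipSpaceHalfPixel.x;
    output.Position.y = position.y + g_ClipSpaceHalfPixel.y;
    output.Position.z = position.z;
    output.Position.w = 1.0f;
    output.TexCoord = texCoord;
    return output;
}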

My downscaling pixel shader looks like so:


float4 SWScalingPS( in float2 texCoord : TEXCOORD0 ) : COLOR0
{
    float4 fSampleSum = 0.0f;
    for(int iSample = 0; iSample < 16; iSample++)
    {
        fSampleSum += tex2D(PointSampler, texCoord + g_SampleOffsets[iSample]);
    }
    // Divide the sum to complete the average
    return (fSampleSum / 16);
}


My offset generation code is shown below; I pass in the source render target's dimensions as the parameters.


private Vector2[] Get4x4Offsets( int iWidth, int iHeight )
{
    // Use the multiplicative inverse instead of dividing each
    // offset by the width and height.
    float fU = 1.0f / iWidth;
    float fV = 1.0f / iHeight;

    Vector2[] offsets = new Vector2[16];
    int index = 0;
    for( int y = 0; y < 4; ++y )
    {
        for( int x = 0; x < 4; ++x )
        {
            offsets[index].X = ( x - 1.5f ) * fU;
            offsets[index].Y = ( y - 1.5f ) * fV;
            ++index;
        }
    }
    return offsets;
}



Because my texels and pixels are aligned, I still need to apply a 0.5 offset to land on texel centers, right? I figured that was the case, which is why I use (x - 1.5f) with point sampling rather than linear sampling. However, I have also tried (x - 1.0f) and it did not seem to make any difference at all.

Any help would be much appreciated. Thanks in advance.
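For comparison, I know the 16 point taps above could also be collapsed to four linear taps, since a bilinear tap placed exactly on a texel corner averages a 2x2 group for free. A sketch of that variant (names are illustrative, and it assumes the hardware can linearly filter the source surface format):


float2 g_TexelSize; // (1.0f / sourceWidth, 1.0f / sourceHeight)
sampler LinearSampler; // min/mag filters set to linear

float4 DownScale4x4LinearPS( in float2 texCoord : TEXCOORD0 ) : COLOR0
{
    // With the half-pixel correction, texCoord sits at the center of the
    // 4x4 block, which is a texel corner; each one-texel offset lands on
    // another corner, so the four taps cover all 16 texels exactly once.
    float4 sum = 0.0f;
    sum += tex2D(LinearSampler, texCoord + float2(-1.0f, -1.0f) * g_TexelSize);
    sum += tex2D(LinearSampler, texCoord + float2( 1.0f, -1.0f) * g_TexelSize);
    sum += tex2D(LinearSampler, texCoord + float2(-1.0f,  1.0f) * g_TexelSize);
    sum += tex2D(LinearSampler, texCoord + float2( 1.0f,  1.0f) * g_TexelSize);
    return sum * 0.25f;
}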
Actually, it turns out it is not the downsampling. It is something in how I calculate my exposed color. I've narrowed it down to this code:



float3 CalcExposedColor(float3 color, float avgLuminance, float threshold)
{
    // Use geometric mean to dynamically determine the middle gray value.
    avgLuminance = max(avgLuminance, 0.001f);
    float middleGray = 1.03f - (2.0f / (2.0f + log10(avgLuminance + 1.0f)));
    float linearExposure = (middleGray / avgLuminance);
    float exposure = log2(max(linearExposure, 0.0001f));
    exposure -= threshold;
    return max(exp2(exposure) * color, 0.0f);
}
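For reference, the avgLuminance passed in here is the geometric mean of the scene's luminance, produced by a log-luminance reduction along these lines (a sketch; the sampler name, luminance weights, and delta are illustrative rather than my exact code):


sampler SceneSampler; // the HDR scene texture

float4 LogLuminancePS( in float2 texCoord : TEXCOORD0 ) : COLOR0
{
    float3 color = tex2D(SceneSampler, texCoord).rgb;
    float lum = dot(color, float3(0.299f, 0.587f, 0.114f));
    // The small delta keeps log() finite on pure black pixels.
    float logLum = log(lum + 0.0001f);
    return float4(logLum, logLum, logLum, 1.0f);
}


The log values are then downsampled to a 1x1 target, and exp() of that single value recovers the geometric mean that feeds the adaptation.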


This is using some suggestions that MJP has recommended on this post: http://www.gamedev.net/topic/588388-glarebloom-effect-what-you-use/

I call that function and pass the result to a filmic tone mapping function.


float3 ToneMapFilmic(float3 color)
{
    color = max(0, color - 0.004f);
    color = (color * (6.2f * color + 0.5f)) / (color * (6.2f * color + 1.7f) + 0.06f);
    // The result above is in gamma space since that is baked into the
    // curve, so put it back in linear space.
    return pow(color, 2.2f);
}


The bright pass is done as follows:


float4 BrightPassPS( in float2 vTexCoord : TEXCOORD0 ) : COLOR0
{
    float4 vSample = tex2D( PointSampler, vTexCoord );
    float fAdaptedLum = tex2D( AdaptedPointSampler, float2(0.5f, 0.5f) ).r;

    float3 vColor = CalcExposedColor(vSample.rgb, fAdaptedLum, g_BrightPassThreshold);
    vColor = ToneMapFilmic(vColor);

    return float4( vColor, 1.0f );
}


I'll keep digging but any suggestions are welcome.
Got some more time to look at this issue. It turns out it has nothing to do with my post-processing framework; the issue was just exacerbated by the exposure adjustments. The problem is in my lighting buffer: simply outputting N dot L produces faint banding. Even more annoying, the banding animates with the adapted luminance. So frustrating; I still don't know what is causing this.
Are you using an 8-bit per component render target? If so, this may well be expected behaviour owing to the lower colour precision.


I am using HdrBlendable for my lighting buffer, which on Windows is 16 bits per channel. I am also using just a single white light (255, 255, 255) for this test scenario, so in reality there shouldn't be any banding at all. I'd post a screenshot if I could figure out how to attach an image; I thought GameDev supported uploading images to your profile's gallery, but I guess not.
Here is an image of the banding issue. The noise you see as N dot L approaches 0 is caused by best-fit normals. I am not sure what I am doing wrong there, but I tried a different normal storage technique and it did not affect the banding at all, so best-fit normals are not the cause.

[attached image: banding.jpg]
Highly possible you're hitting byte precision (i.e. colors in adjacent bands are values like 155, 156, 157, even though your pixel shader is outputting values like 155.00000001, 155.00000232, etc.).

Disable the best-fit normals, take the image into Photoshop, and check the color values to confirm.
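If it helps, you can also confirm it in-engine with a quick debug shader that exaggerates the 8-bit steps (just a sketch; the sampler name is illustrative, not from your code):


sampler SceneSampler; // the buffer being inspected

float4 ShowBandingPS( in float2 texCoord : TEXCOORD0 ) : COLOR0
{
    float3 color = tex2D(SceneSampler, texCoord).rgb;
    // Sawtooth that repeats every four 8-bit steps: values quantized to
    // n/255 turn into hard, high-contrast stripes.
    float3 bands = frac(color * (255.0f / 4.0f));
    return float4(bands, 1.0f);
}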

