• Advertisement
Sign in to follow this  

Deferred lighting problem

This topic is 2770 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

So I've been working on a deferred lighting renderer (or light-pre-pass) and I have managed to build the GBuffer, composite lights into a lighting buffer, and finally rendering the model again looking up into the light buffer. This stage caused me some problems but I've finally got it mostly correct. The problem I am left with is illustrated by the image. There is a very thin border around the whole object. I have read about the half-texel issue with d3d and implemented that - it did fix a bigger artifact I was getting, but as you can see the border is all around the object so I don't think it's alignment.

The image in the top left is the lighting buffer - I have cleared the background to a funky colour to highlight the problem - the border is that colour as you'd expect.

duck.jpg (21 KB)

Does any one have any suggestions as to what might be causing this?

I am wondering if it's because my render targets are 512x512 and my back buffer is 800x600? I tried setting my window to 512x512 but that caused massive texture corruption for some reason.

I've just thought as I've been typing this, is it maybe the lighting pass looking up the gbuffer wrong? I don't think I applied the half texel correction during that pass.

[Edited by - Noggs on August 21, 2010 7:07:00 PM]

Share this post

Link to post
Share on other sites
Nope multisampling is off. As a test I cleared the lightbuffer to white then rendered a fullscreen quad of black with the stencil test enabled. This also illustrated the same thin white border around the object so I don't think it's the lighting pass, it's got to be something wrong with my lookup. Here's the code for my forward render pass - it's pretty simple:

Vertex shader:

Out.Position = mul(In.Position, WorldViewProj); //apply vertex transformation

// transform x/y to range 0-1
Out.LookupUV.x = (((Out.Position.x / Out.Position.w) + 1.0f) / 2.0f);
Out.LookupUV.y = (((-Out.Position.y / Out.Position.w) + 1.0f) / 2.0f);
// apply half texel offset
Out.LookupUV += float2(0.5f / GBufferSize.x, 0.5f / GBufferSize.y);

Out.Texture = In.Texture; //copy original texcoords
return Out; //return output vertex

Pixel shader:

float2 uv = float2(In.LookupUV.xy);

// grab value from the lighting buffer
float4 lighting = tex2D( LightBufferSampler, uv );

Out.Color = float4( lighting.xyz, 1.0f );
return Out; //return output pixel

Share this post

Link to post
Share on other sites
I think I must be missing something in my understanding. I have boiled it down to a very simple test:

1. Setup matrices for perspective view with 800x600 aspect ratio
2. Clear 512x512 rendertarget to white
3. Render model with vsh transforming position by ViewProj matrix, psh returning pure black
4. Clear 800x600 backbuffer to blue
5. Render model using vsh/psh listed above
6. Observe black model with a thin white outline!!!
7. Hit the booze (not yet reached this stage)

Is there something I need to do if using rendertargets of a different size to my backbuffer? Do I need to do offset the pixels when rendering in step 3? Do I need to have specific samplerstate set to read the texture correctly?

Share this post

Link to post
Share on other sites
Sign in to follow this  

  • Advertisement