[XNA] Stencil buffer & Render Targets

I was thinking about using stencil masking for two things in my deferred renderer:
1) Avoid light calculations on pixels that have no geometry.
2) Draw a skybox after drawing the scene with the deferred renderer.

I quickly noticed that this wasn't working (the stencil was always 0). After some googling I found out that XNA clears the stencil buffer every time you switch render targets, which is a bit of a problem since you switch render targets quite a lot when doing deferred shading. Is there a way to avoid this? If not, what is the recommended way of doing this? The only thing I can think of right now is to add a mask to the G-Buffer and reconstruct the stencil buffer after switching render targets. That would be such a waste though, and I'm hoping to avoid it.

Thanks,
Hyu
What platforms are you targeting? On the PC you can just specify RenderTargetUsage.PreserveContents when creating render targets and everything will work like you'd expect it to. On the 360 you're kinda screwed...you can have the runtime automatically copy the color and depth information back into eDRAM (which costs some performance), but it will never copy the stencil information back in.
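
For reference, here's a minimal sketch of what that looks like with the XNA 3.1 API used later in this thread (the five-argument RenderTarget2D constructor plus the back buffer's presentation parameters); the variable names are just illustrative:

// Ask the back buffer to preserve its contents across render target switches.
device.PresentationParameters.RenderTargetUsage = RenderTargetUsage.PreserveContents;

// Create an off-screen render target with the same usage flag.
RenderTarget2D colorRT = new RenderTarget2D(device, width, height, 1,
    SurfaceFormat.Color, RenderTargetUsage.PreserveContents);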
Ah fantastic :)
I'm indeed only targeting the Windows platform, so this should solve my issues.

Many thanks,
Hyu
I tried this, but it seems I'm still doing something wrong.
Probably a stupid mistake since I'm working with stencil buffers for the first time.

This is what I'm doing atm:

manager.PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8;
device.PresentationParameters.RenderTargetUsage = RenderTargetUsage.PreserveContents;


device.RenderState.DepthBufferEnable = true;
device.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
device.RenderState.AlphaBlendEnable = false;
device.RenderState.StencilEnable = true;
device.RenderState.StencilFunction = CompareFunction.Always;
device.RenderState.StencilPass = StencilOperation.Replace;
device.RenderState.ReferenceStencil = 1;


Then I render my geometry to the G-Buffer, which should set the stencil for all pixels that "contain" geometry to 1, right?

Then, before drawing my lights I set:

device.RenderState.DepthBufferEnable = false;
device.SetRenderTarget(0, lightRT);
device.RenderState.StencilEnable = true;
device.RenderState.StencilFunction = CompareFunction.Equal;
device.RenderState.ReferenceStencil = 1;
device.Clear(Color.TransparentBlack);
device.RenderState.AlphaBlendEnable = true;
device.RenderState.AlphaBlendOperation = BlendFunction.Add;
device.RenderState.SourceBlend = Blend.One;
device.RenderState.DestinationBlend = Blend.One;
device.RenderState.SeparateAlphaBlendEnabled = false;


Then I draw my lights.
However, the stencil test rejects all pixels at this point.
If I set
device.RenderState.ReferenceStencil = 0;

then it outputs all pixels, so for some reason the stencil is set to 0 for all pixels, even though pixels "containing" geometry should be set to 1.
At first I thought it was because I was clearing the device (which shouldn't be necessary at this point, I think), but even if I don't clear the device, the problem still occurs.

What am I doing wrong? :)

Thanks,
Hyu
I've been trying to get this to work for quite some time now, but it just doesn't want to.
Not sure if I'm really doing something wrong, or if there's some kind of problem preventing RenderTargetUsage.PreserveContents from working.

Is it ok if I use a channel from my G-Buffer instead?

For example, if I clear my color texture to (0,0,0,1) and set the alpha channel to 0 when I render geometry, I could perform a stencil test while rendering lights like this:

if (tex2D(colorSampler, texCoord).a == 1.0f)
{
    discard;
}
else
{
    // do light rendering here
}


Doing this is quite a waste though, because I'm not only losing an 8-bit channel, it also causes additional G-Buffer lookups. I'm not really sure if I would gain any speed by doing this; it feels like using a proper stencil buffer should be faster.

How well are the Bgra1010102 or Rgba1010102 formats supported?
10 bits per channel could be enough to store normals, which would leave me with a spare 10-bit channel and a 2-bit channel to use for stencil masking.
Although this is slightly better than the above method, it doesn't really fix my problem, and it is only acceptable as long as such formats are supported on all modern graphics cards.

Any thoughts on this?
Thanks,
Hyu
Are you specifying PreserveContents when creating your G-Buffer RenderTargets?

If you want to know which GPUs support certain surface formats and certain caps, there's a sample in the SDK called "ConfigSystem" that has a spreadsheet containing the caps for a huge range of GPUs.
Yes, this is my rendertarget setup:

colorRT = new RenderTarget2D(device, width, height, 1, SurfaceFormat.Color, RenderTargetUsage.PreserveContents);
normalRT = new RenderTarget2D(device, width, height, 1, SurfaceFormat.HalfVector4, RenderTargetUsage.PreserveContents);
depthRT = new RenderTarget2D(device, width, height, 1, SurfaceFormat.Single, RenderTargetUsage.PreserveContents);
lightRT = new RenderTarget2D(device, width, height, 1, SurfaceFormat.HalfVector4, RenderTargetUsage.PreserveContents);
finalRT = new RenderTarget2D(device, width, height, 1, SurfaceFormat.HalfVector4, RenderTargetUsage.PreserveContents);


I think I'm doing something wrong, or misunderstanding something.
Why is it actually important to set RenderTargetUsage for the other render targets?
Isn't the stencil buffer global?
I always thought there was only one stencil buffer, and that you'd access the same one regardless of which render target you're rendering to...

Actually, reading my last statement makes me realize that this makes no sense, because you can create render targets of a different size than your screen... oh dear...

So, does only the "screen render target" have a stencil buffer? When rendering my geometry, do I actually have to render to the "screen" too, so that the stencil for my pixels gets set to 1, and it wouldn't get set if I only render to the color, normal and depth RTs?

Or does every render target have its own stencil buffer?
... kinda doubt that, feels kinda weird.

Anyways, as you can see I'm properly confused about this -_-
Any help making me understand how stencil buffers really work in relation to render targets would be greatly appreciated.

Thanks,
Hyu
Okay, this is actually a bit confusing because of the way XNA tries to make things work across both the PC and the Xbox 360. I'll try to explain how things work at a low level on both platforms, and that will hopefully make it clear what XNA is trying to do.

On the PC in D3D9, a DepthStencilBuffer is just another section of memory, like a render target. You specify which render target(s) you're rendering to, and you also specify which depth-stencil buffer you want to use for depth and stencil writes/testing. There's no concept of "preserving" render target or depth-stencil contents, since those both live in video memory and aren't cleared unless you explicitly request them to be cleared.
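
To make that concrete, here's a minimal sketch in the XNA 3.1 API of the depth-stencil buffer being its own resource that you attach to the device alongside whatever render target you're drawing into; the variable names and pass structure are just illustrative:

// The depth-stencil buffer is a separate resource from any render target.
DepthStencilBuffer sceneDepth = new DepthStencilBuffer(device, width, height, DepthFormat.Depth24Stencil8);

// G-Buffer pass: stencil writes land in sceneDepth.
device.SetRenderTarget(0, colorRT);
device.DepthStencilBuffer = sceneDepth;
// ... draw geometry with StencilFunction = Always, StencilPass = Replace, ReferenceStencil = 1 ...

// Light pass: the same depth-stencil buffer stays attached. Note that XNA will
// still emulate the 360's discard with a clear here unless lightRT was created
// with RenderTargetUsage.PreserveContents.
device.SetRenderTarget(0, lightRT);
device.DepthStencilBuffer = sceneDepth;
// ... draw lights with StencilFunction = Equal, ReferenceStencil = 1 ...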

On the Xbox 360, the GPU has a special bit of memory known as eDRAM. The GPU always renders color and depth-stencil data to eDRAM, so this is essentially where RenderTargets and DepthStencilBuffers "live". While you're rendering to them they stay in eDRAM, and when you're done the render target data gets copied out into main memory so that you can sample it as a texture. This is why the default behavior for render targets is to discard everything when you set it...the color and depth data would have to be manually copied back into eDRAM for it to be there. On the PC this behavior is emulated with a clear, since it doesn't happen by default. This is also why the DepthStencilBuffer "clear" behavior is tied to the RenderTargetUsage for a particular RenderTarget: the depth information has to be manually copied back in when you set that particular render target.

So yeah, there's definitely no "global" stencil buffer. It's very much tied to the DepthStencilBuffer you have set on the device. And because of XNA's cross-platform issues, it's also tied to the usage settings you specify for your RenderTargets.

Oh, and there's more bad news...this is changing quite a bit in XNA 4.0. In XNA 4.0 there aren't even DepthStencilBuffers anymore...the depth-stencil surface is an implicit part of your RenderTarget. There's more info here.
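
For comparison, a hedged sketch of what that ends up looking like with the XNA 4.0 RenderTarget2D constructor, where the depth-stencil format and usage are chosen when you create the target (variable names are illustrative):

// XNA 4.0: the depth-stencil surface is created together with the render target.
RenderTarget2D colorRT = new RenderTarget2D(
    device, width, height,
    false,                               // no mip chain
    SurfaceFormat.Color,                 // color format
    DepthFormat.Depth24Stencil8,         // per-target depth-stencil with an 8-bit stencil
    0,                                   // no multisampling
    RenderTargetUsage.PreserveContents); // keep contents across target switches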
Thank you for taking time to explain this.

I understand why XNA has an option to preserve contents now, and why default behaviour is to discard them.

Not entirely sure what exactly I'm doing wrong yet.
I guess I'll make a new project to test stencil behaviour step by step, to see exactly which part I'm messing up on.
