RobMaddison

Transparency and the Z buffer



With DX9, if I render with transparency, do the fully transparent areas still write to the z buffer?

I'm asking because I'm changing my terrain layer rendering slightly. I'm aiming to render each individual terrain layer in a terrain patch (33x33) using alpha transparency (patches are rendered front to back, but layers within a patch may be in any order). But if I render a detail layer (which will have faded-out edges where the adjoining layer merges in), and then render another layer 'behind' that layer (imagine a sloping terrain tile), will the second layer's pixel shader still draw through areas of transparency in the first layer? If the areas of transparency still write to the z-buffer, and pixels still get discarded when they fail the z-buffer test, then I don't have an issue.

Hope that explained it well enough...


Hello 

I'm afraid the z-buffer on its own is not suited to transparent rendering...

 

To render transparent primitives correctly against opaque primitives, use the depth buffer read-only: that way, transparent areas don't write to the z-buffer.

To render transparent primitives correctly against other transparent primitives, you can't rely on the z-buffer: you have to depth-sort your geometry or use a depth-independent rendering technique.
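For the depth-sort case, a minimal CPU-side sketch might look like the following. The `Primitive` struct and the centre-point distance key are my own simplification, not anything from the thread; real engines often sort per-mesh or per-triangle with more care:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical transparent primitive: just a world-space centre, used
// here purely as a sort key.
struct Primitive {
    float x, y, z;
};

// Squared distance from the primitive's centre to the camera.
float distSq(const Primitive& p, float cx, float cy, float cz) {
    float dx = p.x - cx, dy = p.y - cy, dz = p.z - cz;
    return dx * dx + dy * dy + dz * dz;
}

// Sort transparent primitives back to front (farthest first) so that
// alpha blending composites them in the correct order.
void sortBackToFront(std::vector<Primitive>& prims,
                     float camX, float camY, float camZ) {
    std::sort(prims.begin(), prims.end(),
              [&](const Primitive& a, const Primitive& b) {
                  return distSq(a, camX, camY, camZ) >
                         distSq(b, camX, camY, camZ);
              });
}
```

Note that a centre-point sort can still be wrong for large or interpenetrating primitives, which is why depth-independent techniques exist at all.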

 

Hope I understood your point correctly.

Edited by Tournicoti

I wondered whether I might not have explained myself properly. I understand the issues with transparency when you have opaque objects, e.g. if you have a person behind a window, you'll need to render the person (or opaque object) first. But my situation is ever so slightly different...

The objective is to end up with no transparency at all, the only reason I use alpha blending for this is to blend between the edges of my terrain layers.

My current method does it like this:

Render the terrain patches in full (i.e. 32x32 tiles) using the 'standard' detail texture, leaving any areas that should have different detail textures black but faded out at the edges. This stage is done with alpha blending disabled.

Then with each successive terrain detail layer (you can have as many detail layers as you like per terrain chunk using this method), draw the terrain layer (which will be a pre-determined indexed set of triangles using the standard 32x32 patch) with alpha blending enabled. This then allows me to blend the new layer with what's already been rendered, rinse and repeat for all terrain layers.

The way I blend at the edges is I pass in a texture the same size as the entire terrain containing the terrain material at each pixel. Using this and a 32x32 pixel blending 'template' texture, I fade out the edges where the material is not the same as the one I'm currently rendering.
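As a CPU-side sketch of that edge-fade rule (all names here are hypothetical; in the real pixel shader these values would come from sampling the terrain-sized material map and the 32x32 blending template):

```cpp
#include <cstdint>

// materialAt    - material ID read from the terrain-sized material map
// currentLayer  - material ID of the layer currently being rendered
// templateFade  - fade value (0..1) from the 32x32 blending template
//
// Where the map's material matches the layer being drawn, output full
// opacity; where it differs, fall back to the template's fade-out value
// so the edge blends into the neighbouring layer.
float edgeAlpha(uint8_t materialAt, uint8_t currentLayer, float templateFade) {
    return (materialAt == currentLayer) ? 1.0f : templateFade;
}
```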

This works great and looks fantastic but because my first opaque render with the base detail texture still renders the entire terrain patch (albeit some of it ends up being black), I'm effectively getting one entire patch overdraw per patch.

What I'd like to do is change it so that it draws the initial layer (the standard terrain detail layer) as a layer too instead of the entire patch. That way, I wouldn't have any overdraw apart from the edges between the layers.

If I look down at a small area of the terrain from above that is completely covered by a layer, I'm getting a full screen of overdraw which I think is hurting my performance.

My terrain layers (with transparency) would be the first thing rendered on screen, before anything else. My issue is that if I draw the very first layer over a large area of, say, rolling bumps, the translucent edge nearest the viewer may end up with bits of the inner (opaque) part of the layer showing through it. That wouldn't work well when I come to blend in the adjoining layer, as I'd be blending against a messy edge.

If, when I render that first layer, the transparent edges don't show what's potentially directly behind them because of the z-buffer, I should be ok.

It's v hard to explain...


The z-buffer doesn't care whether you have transparent geometry or not: if a pixel is rasterized, its depth is written to the z-buffer (assuming the z-buffer and depth writes are enabled). You can avoid drawing those pixels entirely using alpha testing. Alpha testing lets you set a minimum alpha threshold, and any pixels with alpha below that threshold are not rasterized. This gives a hard cutout edge where the alpha drops off, but it allows you to use depth testing.
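A small CPU model of that alpha-test behaviour; the D3D9 render states in the comment are the usual way to enable it in the fixed-function pipeline, shown as a sketch rather than verified device code:

```cpp
#include <cstdint>

// In D3D9, alpha testing would be configured roughly like this
// (sketch, not run against a real device):
//   device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
//   device->SetRenderState(D3DRS_ALPHAREF, 0x80);             // threshold
//   device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);
//
// A fragment that fails the test is discarded entirely: no colour
// write and no depth write, so it cannot occlude anything behind it.
bool passesAlphaTest(uint8_t alpha, uint8_t alphaRef) {
    return alpha >= alphaRef; // models D3DCMP_GREATEREQUAL
}
```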

Thanks for the replies so far. So forgetting about the terrain context for a minute, let's assume I'm rendering a lamp as a whole object.

The lamp has a translucent lampshade and a solid lamp stand, both coming from its texture map. If I render it from above, at a camera position where the lampshade would normally obscure the lampstand, would the lampstand (which has full opacity) show through the lampshade (which has 50% opacity)? I would assume so, but doesn't that mean that, if the pixels are being written to the depth buffer, the lampstand pixels SHOULD be ignored, because the lampshade pixels are closer and would therefore be closest in the depth map?

Edit: probably important to mention the lamp is drawn in a single draw call.


Alpha blending is dependent on the colour already in the colour buffer. So if you draw the lamp stand first, with z-buffer write/test on, and then draw the lampshade next with z-buffer write/test on, the lampshade will render fine, and the lampshade's pixels will replace the lamp stand's pixels in the z-buffer. The lamp stand's pixels won't be rejected retroactively; they already passed the z test when they were drawn.

Now if you reverse that order, the lamp stand would not be drawn anywhere the lampshade and lamp stand overlap, because the lampshade's pixels are already in the z-buffer, and the lamp stand's pixels are behind them, thus failing the z test.

Z testing happens before the pixel shader is called. If the pixel shader is called, the result will (generally) be written to the colour buffer. The hardware will not reject pixels that were already written just because you draw something nearer afterwards; it will simply shade and write that pixel again (aka overdraw).
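To make the ordering concrete, here's a minimal single-pixel model of that pipeline order: z-test first, then (if it passes) blend into the colour buffer and write depth. This is a simplification of real hardware; smaller depth means nearer, and the colour buffer is a single grey channel:

```cpp
// One pixel's worth of framebuffer state.
struct Pixel {
    float depth = 1.0f;  // initialised to the far plane
    float color = 0.0f;  // single grey channel for simplicity
};

// Process one fragment; returns true if it survived the depth test.
bool drawFragment(Pixel& p, float depth, float color, float alpha) {
    if (depth > p.depth)  // z-test runs before shading/blending
        return false;     // fragment rejected: no colour or depth write
    p.color = alpha * color + (1.0f - alpha) * p.color; // alpha blend
    p.depth = depth;      // depth write enabled
    return true;
}
```

Drawing the opaque stand first and the 50%-alpha shade second blends them; reversing the order makes the stand fail the z test wherever the shade already landed, exactly as described above.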

Thanks for that. I've realised that my new method isn't going to work, I'll have to think of another way to do it.

There was one thing I thought of, but it involves almost double the number of draw calls; it would, however, result in barely any overdraw.

Is overdraw particularly expensive? Or is it just treated as another draw call?


Overdraw can be very expensive, but not because it's just another draw call.

Think of it this way. At 1080p the pixel shader gets hit over two million times (1920x1080 = 2,073,600 pixels). That means the most efficient you can be is exactly one pixel shader call for each pixel. Overdraw happens when you have to draw to the same pixel more than once, and alpha blending requires this.

So the short answer is: it depends on how expensive your pixel shader is.

Alpha blending can be expensive, texture fetches can be expensive; a whole mess of things can make overdraw expensive.
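The arithmetic behind that 1080p figure, as a trivial sketch (the overdraw factor here just means how many times, on average, each pixel is shaded):

```cpp
// Back-of-the-envelope cost of overdraw: at 1920x1080 there are
// 2,073,600 pixels, so every full-screen layer of overdraw adds
// roughly another 2.07 million pixel-shader invocations.
long long shaderInvocations(int width, int height, int overdrawFactor) {
    return static_cast<long long>(width) * height * overdrawFactor;
}
```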
