
Getting the z-buffer into the alpha buffer


Hi guys, I was wondering if anyone knew of a technique for getting the contents of the z-buffer into either a standard RGBA buffer (a texture), or for getting its high-order byte into the alpha component of a buffer (a texture or a frame buffer)? Basically, there are some easy depth-of-field (and other) effects you can achieve by using z-buffer values as alpha components. I know this could probably be done much more easily with a pixel or vertex shader, but I need something that'll work on various hardware (PC, Xbox, GameCube, PS2), so it needs to be a multi-texturing trick of some kind (I'm guessing it'll take that route). Thanks for any suggestions, theories, or facts!
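
For concreteness, here is the core idea in a few lines of plain C: a sketch assuming an integer 24-bit z-buffer (actual depth formats vary by chip), showing how the high-order byte of a depth value becomes an 8-bit alpha value.

/* Sketch: map the high-order byte of a 24-bit depth value to alpha.
   Assumes an integer 24-bit z-buffer; real formats vary by hardware. */
unsigned char depth_to_alpha(unsigned int z24)
{
    return (unsigned char)((z24 >> 16) & 0xFF); /* top 8 of 24 bits */
}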

You are going about this completely wrong. See NVIDIA's site on depth-of-field effects (look at the Effects Browser).

Do some research on depth-of-field effects. You don't use the z-buffer as an alpha channel to modify textures; you actually draw the vertices that are farther back multiple times with varying offsets to achieve the "blur" effect.
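
For reference, one common variant of that idea jitters the whole camera about the focal plane and averages the passes in the accumulation buffer. A minimal fixed-function OpenGL sketch, where render_scene is a hypothetical function that clears the buffers and draws the frame with the camera sheared by (dx, dy), and the pass count and jitter radius are purely illustrative:

#include <GL/gl.h>
#include <math.h>

void render_scene(float dx, float dy);  /* hypothetical: clears and draws the frame */

void render_dof(int passes)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < passes; ++i) {
        float angle = 6.2831853f * (float)i / (float)passes;
        float dx = 0.01f * cosf(angle);       /* jitter radius is tunable */
        float dy = 0.01f * sinf(angle);
        render_scene(dx, dy);
        glAccum(GL_ACCUM, 1.0f / (float)passes);  /* average this pass in */
    }
    glAccum(GL_RETURN, 1.0f);  /* write the blended result to the color buffer */
}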

EDIT: BTW, dealing with a hardware VRAM z-buffer directly is slow (especially reading) and API-dependent. In fact, some hardware uses compression techniques, so you can't readily access the z-buffer without screwing things up or slowing things down immensely.

[edited by - a person on May 30, 2002 3:52:55 PM]

I don't really see exactly what you want to achieve with that function, but if it helps (and if you use OpenGL), have a look at this extension; it seems to do exactly what you want (and since it's a card-local operation, it's presumably very fast): NV_copy_depth_to_color
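
Usage looks roughly like this (a sketch based on the extension spec: glCopyPixels gains a new type token that copies the packed 24-bit depth / 8-bit stencil of each pixel into RGBA; the #define is only needed if your headers are old, and the raster position setup is assumed):

#include <GL/gl.h>

#ifndef GL_DEPTH_STENCIL_TO_RGBA_NV
#define GL_DEPTH_STENCIL_TO_RGBA_NV 0x886E  /* from NV_copy_depth_to_color */
#endif

void copy_depth_to_color(int width, int height)
{
    /* Destination is the current raster position; assumes an ortho
       projection so (0,0) is the lower-left corner of the window. */
    glRasterPos2i(0, 0);
    glCopyPixels(0, 0, width, height, GL_DEPTH_STENCIL_TO_RGBA_NV);
}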

I have never used this extension myself, so I really don't know how well it performs. And it's NVIDIA only. And (AFAIK) OpenGL only (no Xbox). Anyway, if all you are trying to do is depth of field, then I would highly recommend a different approach.

/ Yann

Actually, I wouldn't say I'm going about things completely wrong.

I believe the NVidia effect you're talking about makes use of a vertex shader (with a pixel shader for smoothing) to achieve the effect. And yes, that is a good technique (and closer to the actual physical process) if the hardware supports it. As I mentioned in my original post, though, I'm looking for a technique that works across different hardware architectures, which means something that relies on multi-texturing as opposed to hardware-specific features such as vertex/pixel shaders.

I actually have done research on depth-of-field effects, and that's how I arrived at this technique. If this is a novel technique that you haven't seen elsewhere, then perhaps you should look deeper.

Since I posted my question, I've done additional research (and consulted with my colleagues across the pond) and have come up with the following info:

On the PS2 and GameCube, there's actually hardware functionality for doing exactly the blit I was looking for (z-buffer to texture). Even better, on the PS2 the blit is done as an 8-bit texture, so you can basically set up the CLUT to represent whatever focal planes you need. It's really easy on the PS2 because you can address whatever portion of VRAM you want, in whatever format you want.

On the Xbox, there are also some techniques for getting this result, mainly because the pixel formats are all published and fixed (and accessible through Direct3D). Unfortunately, this does not apply to the PC, where the z-buffer is notoriously tucked away from view and direct access.

Of course, in any situation where the z-buffer is inaccessible, you can always reconstruct it in a renderable surface by rendering your geometry with fogging.
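
A sketch of that fallback in fixed-function OpenGL (fog distances are illustrative, and draw_scene_geometry is a placeholder): render the geometry unlit in black with white linear fog, so the color buffer ends up holding a grayscale depth ramp.

#include <GL/gl.h>

void draw_scene_geometry(void);  /* placeholder for your draw calls */

void render_depth_via_fog(float near_dist, float far_dist)
{
    static const GLfloat white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };

    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_LINEAR);
    glFogf(GL_FOG_START, near_dist);   /* illustrative distances */
    glFogf(GL_FOG_END, far_dist);
    glFogfv(GL_FOG_COLOR, white);

    glDisable(GL_LIGHTING);
    glColor3f(0.0f, 0.0f, 0.0f);       /* black geometry + white fog = depth ramp */

    draw_scene_geometry();

    glDisable(GL_FOG);
}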

Additionally, the per-vertex routine mentioned above runs in time linear in the number of vertices, while the post-processing technique I've described runs in constant time, in no way dependent on the complexity of the rendered scene (a huge advantage in my mind).

Of course, the technique I've described is limited to 8 bits of depth precision, but that's not a problem if you provide enough samples.

That NVidia extension is very similar to what the PS2 and GameCube do in hardware (GCN actually provides blits for the high 8 bits, the low 8 bits, the high 4 bits, the middle 16 bits, the lower 16 bits, etc.).

The PS2 version isn't necessarily a hardware trick; it mainly takes advantage of the fact that any part of VRAM is accessible and can be manipulated in whatever format you choose (assuming you meet certain alignment conditions).

So, that's two people who say this is not a good approach for depth of field, yet I've not seen any suggestions for how to do it otherwise (without using shaders). Also, the approach I've described has actually been used in video games sitting on shelves today... hmm... maybe this is a trick that needs to be written up, because people around here apparently haven't been exposed to it? Of course, that may be entirely because it's *generally* a pain in the arse to do on the PC...

quote:

So, that's two people who say this is not a good approach for depth of field, yet I've not seen any suggestions for how to do it otherwise (without using shaders). Also, the approach I've described has actually been used in video games sitting on shelves today... hmm... maybe this is a trick that needs to be written up, because people around here apparently haven't been exposed to it? Of course, that may be entirely because it's *generally* a pain in the arse to do on the PC...


I have heard of that trick, and it's not necessarily bad. But it's a trick, and it doesn't look as good as shader/multisample-based DOF (not by far, judging from the shots I saw). And since it's very hardware- and vendor-specific (a different implementation on every target platform), why not use shaders directly? Another problem is that it consumes quite a bit of fill rate, especially if you use lots of layers.

/ Yann

[edited by - Yann L on May 30, 2002 5:31:55 PM]

Of course, I would prefer a shader/multisample approach, but that's not possible on the PS2 and GCN (it would be my preferred approach on the Xbox and PC).

Even though the technique is implemented differently on each platform, it's the same technique, so it produces similar results (which is what I'm looking for: consistency across hardware).

Fill-rate is not so much an issue if you render in strips (particularly on the PS2). Of course, my focus is achieving a good post-processing effect.

Another advantage of this technique is how it combines with image-based rendering and pre-rendered mattes (another area of interest for me).

Yes, of course, it could let you reuse far-away background geometry across a few consecutive frames without needing to re-render it; that can actually be a speedup.

Hmm, but I don't really know... Aren't the transitions between DoF layers visible (different strengths of blur)? Or do you interpolate between them? Don't you get artifacts when moving slowly?

Oh, BTW: there could be another way to turn depth values into alpha values. AFAIK, the new ARB_depth_texture extension can treat depth textures as luminance textures. Using a simple combiner setup, the luminance value can be routed into the alpha channel. And if you use the render-to-texture feature, you could render directly into the depth texture, saving you the memory copy associated with the aforementioned NV extension.
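
A sketch of that idea based on the ARB_depth_texture spec (using the copy-from-framebuffer path rather than true render-to-texture; the #defines are only needed with old headers):

#include <GL/gl.h>

#ifndef GL_DEPTH_COMPONENT24_ARB
#define GL_DEPTH_COMPONENT24_ARB 0x81A6
#endif
#ifndef GL_DEPTH_TEXTURE_MODE_ARB
#define GL_DEPTH_TEXTURE_MODE_ARB 0x884B
#endif

void grab_depth_texture(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Snapshot the current depth buffer into a depth texture. */
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB,
                     0, 0, width, height, 0);

    /* Sample the depth texture as luminance; a combiner stage can then
       route that value into the alpha channel. */
    glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE);
}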

/ Yann

I'm not doing the DoF-layers technique (you can find references to it on Gamasutra). That technique involved using mipmap levels and DoF layers to do a fast, fake depth of field. Unfortunately, it doesn't have the quality look I'm after.

The technique I''m using is the following:

a) Render the scene as normal to either your back buffer or a texture.

b) Copy the highest-order byte of the z-buffer into the alpha component of your frame buffer. If you can do the z-to-alpha copy as a texture, you can make the z-texture an 8-bit texture with the CLUT representing your focal distances. Otherwise, you have to do a slightly more muddled render to get variable focal planes.

c) Copy the frame buffer into the front buffer (or wherever your final rendering destination is) using the z-alpha component (with out-of-focus pixels having a low/transparent value and focused pixels having a high/opaque value). Jitter the copies using any of various techniques, making sure one of them is unjittered so that some pixels line up where they are expected to be (see the sketch after these steps).
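
Here is a minimal sketch of those three steps on the PC in fixed-function OpenGL: the slow path, using a CPU read-back to splice depth into alpha (exactly the kind of access the consoles provide natively). The jitter table, the blend setup, and draw_fullscreen_quad are illustrative placeholders.

#include <GL/gl.h>
#include <stdlib.h>

/* hypothetical helper: draws a screen-size textured quad offset by (dx, dy) pixels */
void draw_fullscreen_quad(float dx, float dy, GLuint tex);

void depth_of_field_composite(int w, int h, GLuint tex)
{
    /* a) the scene has already been rendered to the back buffer */

    /* b) read back color and depth; a GL_UNSIGNED_BYTE depth read
       already yields the high-order 8 bits of the z value */
    unsigned char *rgba  = malloc((size_t)w * h * 4);
    unsigned char *depth = malloc((size_t)w * h);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, depth);
    for (int i = 0; i < w * h; ++i)
        rgba[i * 4 + 3] = depth[i];  /* remap through a focus CLUT here */

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    /* c) blend jittered copies weighted by the depth-derived alpha;
       the last, unjittered copy keeps focused (opaque) pixels sharp */
    static const float jitter[5][2] = {
        { -2.0f, 0.0f }, { 2.0f, 0.0f }, { 0.0f, -2.0f },
        { 0.0f, 2.0f }, { 0.0f, 0.0f }   /* final pass is unjittered */
    };
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    for (int i = 0; i < 5; ++i)
        draw_fullscreen_quad(jitter[i][0], jitter[i][1], tex);
    glDisable(GL_BLEND);

    free(rgba);
    free(depth);
}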

This produces a decent depth-of-field effect (I wish I could remember which titles use this so I could point to some screenshots). It's also nice because it makes no assumptions about how the data got into the frame buffer and depth buffer (pre-rendered art as in Final Fantasy VII, high-order implicit surfaces, video playback, or whatever).

This technique gives you 256 distinct levels of focus and can also produce some other interesting effects by further manipulating the alpha/depth channel before step c above. For example, you could have the focus soften around direct, bright light sources, or you could do some interesting dissolves.

So you answered your own question: you know of various methods you can use on the different hardware you are targeting. I said that each API will handle this differently, and there is no single way of doing it. The "vertex shader" approach will work on ANY system where you can tell the hardware which vertices you are rendering (i.e. render things multiple times and such); you don't need vertex shader support per se, since shaders can be emulated in software without much speed loss.

Your algorithm intrigues me, and I think I will see if there is any fast way of doing this using D3D8 without vertex shaders (though someone already mentioned how in OpenGL).

Thank you for explaining it further; I hadn't even seen that method of doing things.

Yep, interesting algorithm. I'll have to try that out some day. Do you have any screenshots of what your implementation looks like?

/ Yann

Not right now, though I'll try to post some later. I've got it running on the PS2. The artwork, though, doesn't exactly show off the effect very well, so I'll try to stick some good DoF artwork in there and generate a screenshot for you guys.

Does this board support inlining images, or should I just put it on a website?

You can inline them directly, if you want. Just use plain HTML:

< img src="...your URL..." >

(remove the spaces)

[edited by - Yann L on May 31, 2002 11:36:50 AM]

1. For NGC, take a look at the tg-shadow3 sample in the Dolphin SDK; it has code which copies the Z buffer into a normal texture.

2. Though I'm not a PS2 coder, I'm sure I've heard of someone doing something clever to essentially read Z; it may have had to do with using a 32-bit texture which had the same address as a PSMZ32 Z buffer, or something like that.

3. Generally, being able to read Z isn't something you can ever rely on for the PC. Some chips, such as NVIDIA's newer ones, will let you do true, fast Z-buffer-to-texture copies; many others won't. The many different ways IHVs store their Z buffers is another hindrance (what do the 8 bits in your alpha mean with respect to the Z format used by the chip? The most significant byte of an integer Z? The least significant? A float exponent? The low 8 bits of a mantissa? A fraction of a fixed-point value?).
Finally, consider chips such as the PowerVR/Kyro series, which don't actually have a Z buffer in memory at any time!

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com
