GPU optimisation: laydown z

15 comments, last by edwinnie 19 years, 10 months ago
quote:Original post by c t o a n
This refers to rendering the scene with color writes disabled, so only the z-values of the scene's polygons get written into the z-buffer.

Once the z-buffer holds the depths of all visible polygons, you re-render your scene with full detail; anything that is invisible in the final result fails the z-test and no more work is spent on it.

This is only really helpful if your engine is fill-bound; the method really sucks if you're transform-bound (because you have to transform all your polygons twice).

Chris Pergrossi
My Realm | "Good Morning, Dave"
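To make the quoted laydown-z pass concrete, here is a minimal OpenGL sketch; drawSceneGeometry() is a hypothetical helper standing in for whatever issues your opaque draw calls:

    // Pass 1: z only. Disable color writes and fill the depth buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawSceneGeometry();     // hypothetical helper: issues all opaque draw calls

    // Pass 2: full shading. The depth buffer already holds the nearest depths,
    // so GL_EQUAL rejects every hidden fragment before the expensive pixel work.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);   // no need to write depth again
    glDepthFunc(GL_EQUAL);
    drawSceneGeometry();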


Also, if you cannot afford to send all that data through the rendering pipe twice, you can achieve roughly the same effect by sorting your batches front to back. Depending on how big your batches are, this can either help or be pointless.
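As a rough C++ sketch of that sort (the Batch struct and its viewSpaceDepth field are assumptions for illustration, not anyone's actual engine code):

    #include <algorithm>
    #include <vector>

    struct Batch {
        float viewSpaceDepth;   // assumed: distance of the batch's center from the camera
        // ... vertex buffers, material, etc.
    };

    // Sort opaque batches front to back so near geometry fills the z-buffer
    // first and occluded batches behind it fail the depth test early.
    void sortFrontToBack(std::vector<Batch>& batches)
    {
        std::sort(batches.begin(), batches.end(),
                  [](const Batch& a, const Batch& b)
                  { return a.viewSpaceDepth < b.viewSpaceDepth; });
    }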

Don't forget about occlusion queries either. If a surface has multiple passes, issue an occlusion query on the first pass; then, a bit later in the rendering pipe, test the query, and if no pixels were rendered, skip the second pass.
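A minimal sketch of that idea with OpenGL 1.5 occlusion queries; drawFirstPass() and drawSecondPass() are hypothetical per-surface helpers:

    GLuint query;
    glGenQueries(1, &query);

    // First pass: render the surface normally, counting how many samples
    // actually pass the depth test.
    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawFirstPass();                         // hypothetical helper
    glEndQuery(GL_SAMPLES_PASSED);

    // ... render other surfaces here so the query result has time to arrive ...

    // Later in the frame: skip the second pass if nothing was visible.
    GLuint samplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
    if (samplesPassed > 0)
        drawSecondPass();                    // hypothetical helper

    glDeleteQueries(1, &query);

Reading the result immediately after the first pass would stall the pipeline, which is why the post suggests testing it "a bit later".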

There are tons of ways to improve fillrate performance.
-- Jon Olick
Btw, not all fillrate-limited scenes can benefit from this optimization. The scene has to be fillrate-limited *and* have a lot of overdraw. It doesn't help with, for example, strategy games (viewed from above, without any good occluders).
quote:Original post by Yann L
You could use a low-poly approximated mesh instead of rendering the scene geometry twice. Prefilling the z-buffer with an occlusion skin (as in occlusion culling) would probably work pretty well - early z rejection is a kind of degenerate (per-pixel) occlusion culling method, after all.


The idea is nice, but in practice you need to ensure that the low-res mesh has a "smaller" projection than the high-res one. Otherwise you might reject some pixels at the edges of objects that should have been rendered.
quote:Original post by vprat
The idea is nice, but in practice you need to ensure that the low-res mesh has a "smaller" projection than the high-res one. Otherwise you might reject some pixels at the edges of objects that should have been rendered.

That's what he meant by "occlusion skin".



Michael K.,
Co-designer and Graphics Programmer of "The Keepers"


We come in peace... surrender or die!
Michael K.
quote:That's what he meant by "occlusion skin".


Yes, I had understood. However, in practice I don't see how this would help. Calculating an exact outline seems too expensive to be profitable (the outline has to be recomputed every time the point of view changes relative to the original mesh).

Maybe you can fake it and render the low-res version from a point of view that is a bit further away than the actual one. This could be what Yann meant.

Regards,

Vincent
I think he rather meant having a low-res version of your geometry that is enclosed by the high-res version, like the occluders used for occlusion culling...

for instance, see his last post here: http://www.gamedev.net/community/forums/topic.asp?topic_id=226960
Actually, "fill rate limited" is a broad term that covers multiple cases.

Doing a Z-only pass doesn't necessarily help your fill rate either. Remember that you not only pay the cost in transform (which can be limited with the occlusion skin), you also pay the price of the extra Z pass in fill rate (which can be accelerated on some modern hardware).

What you gain is:
- if you have hardware that can reject pixels early based on their depth (which it may or may not be able to do, depending on what you drew before and how you drew it), then heavy pixel shaders are cut off early on occluded pixels (think shaders heavy enough that those 10% extra rendered pixels would cost you 5-10% extra time between two frames).

Generally you arrange your rendering by opaque/transparent materials and render targets, so there is little liberty left for spatial ordering. The early pass allows you not to think about spatial ordering, at the cost of extra vertex transform and a little extra fill rate (Z only, sometimes ambient color). All those costs need to be compared against the cost of texturing and shading heavy pixels that won't be present on the screen.

For example, FarCry wouldn't necessarily benefit in its outdoor scenes. Most of the pixels on screen in a jungle scene are alpha-tested/blended plant billboards, which on some hardware do not benefit from early rejection. (If a structure is entirely occluded by alpha-blended objects, the cost of drawing/transforming/texturing/shading that structure is lost anyway.) Alpha testing instead of alpha blending doesn't necessarily help, because alpha-tested polygons are bad occluders (don't include them in your Z-only pass).

Stencil rendering adds yet another dimension to the problem.

Everything has to be weighed and of course tested, because not all hardware reacts the same way.

