nefthy

Polygon reduction


I read about Doom 3 rendering the scene in multiple passes. It fills the z-buffer in a first pass by drawing only the geometry of the scene, so it can use the early-out feature modern graphics cards have. Now I made a quick benchmark, but it seems to reduce performance rather than increase it. (Yeah, my driver is crappy, but until ATI fixes their stuff I can't change that.) Anyway, I was wondering if using a reduced model for the first pass would do any better. Is there any polygon reduction algorithm that guarantees the reduced model can always fit inside the original (in other words, never makes it bigger)?

This method is only going to give you a speedup if the cost of drawing individual fragments is high (as in Doom 3). For less fragment-intensive, non-multi-pass scenes it will almost certainly decrease performance.

Consider this: in Doom 3, depending on your graphics card, you may potentially end up doing 3-4 passes on the entire scene every frame PER LIGHT! So if you've got 4 lights covering 90% of your screen, you're doing 12-16 passes for a single frame! That's expensive. By drawing the entire scene to the depth buffer first (no color/texture/lighting info) they ensure that those expensive fragments are ONLY generated for visible pixels, and in this case it's worth it.
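The arithmetic above can be sketched as a back-of-the-envelope cost model (Python here just for convenience; the pass counts, overdraw factor, and pixel counts are made-up illustration values, not measurements):

```python
# Rough cost model of the numbers above (all values hypothetical).

def passes_per_frame(num_lights, passes_per_light):
    """Full-scene passes per frame when every light redraws the geometry."""
    return num_lights * passes_per_light

low  = passes_per_frame(4, 3)   # 12 passes on a capable card
high = passes_per_frame(4, 4)   # 16 passes on the fallback path

def shaded_fragments(visible_pixels, overdraw, passes, z_prepass):
    """Expensive fragments shaded per frame under a crude overdraw model."""
    if z_prepass:
        # The depth-only pass is cheap; the expensive passes then shade
        # only one fragment per covered pixel thanks to early-z rejection.
        return visible_pixels * passes
    return visible_pixels * overdraw * passes
```

With, say, 1000 covered pixels, average overdraw of 3, and 12 lighting passes, the model gives 12,000 expensive fragments with a z pre-pass versus 36,000 without, which is why the pre-pass pays off once per-fragment cost dominates.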

On your standard OpenGL app, though, with no fragment shaders and no multipass operations, if you do this "early Z out" method you're really doing nothing more than drawing your scene twice, which is obviously going to be slower than just ripping through it in a single pass. Even for scenes that DO include shaders, if the cost per fragment is relatively low then you're probably still going to take a small performance hit from that initial pass.

A good compromise that has been popular lately is to use that first pass as an ambient pass. In a scene with per-pixel lighting it can be beneficial to first draw the entire scene with nothing but your ambient light, which is cheap to calculate, adds to the scene visually, and gives you that early-out buffer "for free." That way, even if you're not getting a speed boost from that pass, it's still adding to the overall effect, and as the number of lights (and consequently passes) increases you see more and more benefit from the early out.

EDIT: Heh, I guess I should read the WHOLE post before answering. In answer to your question - no, it probably wouldn't help. One of the key benefits of the Z-pass method is that after the initial pass you simply turn off Z writes, because the correct depths are already in there. If you used a reduced-poly model you would have to write to the depth buffer on every pass, which is likely in itself to nullify the benefit. Also, you still end up overdrawing a lot of fragments around the edges of your meshes, which again adds to the cost. Finally, it would require you to store a separate mesh, which is going to take up memory, even if only a small amount. All in all, it probably wouldn't be worth the effort.
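The silhouette-overdraw point can be shown with a toy 1-D "scanline" simulation (purely illustrative: it only tests fragments against the pre-pass depth buffer and ignores depth writes during the shading passes, so the real penalty would differ):

```python
# Toy 1-D model: a proxy mesh that shrinks the occluder's silhouette
# leaves edge pixels uncovered, so hidden fragments get shaded anyway.

FAR = 1.0  # depth-buffer clear value ("nothing here")

def shaded_fragments(prepass, draws):
    """Count fragments across all draws that survive early-z
    (fragment depth <= stored pre-pass depth)."""
    return sum(1 for depths in draws
                 for d, stored in zip(depths, prepass) if d <= stored)

N = 10
occluder = [FAR] * 2 + [0.3] * 6 + [FAR] * 2   # covers pixels 2..7
wall     = [0.8] * N                           # covers the whole line

# Exact pre-pass with the full mesh: the wall is shaded only where it
# is actually visible (4 pixels) plus the 6 visible occluder fragments.
true_depth = [min(a, b) for a, b in zip(occluder, wall)]
full = shaded_fragments(true_depth, [occluder, wall])       # 10 fragments

# Pre-pass with a reduced proxy whose silhouette is one pixel narrower
# on each side: the wall is also shaded behind the uncovered edge pixels.
proxy_occluder = [FAR] * 3 + [0.3] * 4 + [FAR] * 3
proxy_depth = [min(a, b) for a, b in zip(proxy_occluder, wall)]
reduced = shaded_fragments(proxy_depth, [occluder, wall])   # 12 fragments
```

Even this tiny example shades 12 expensive fragments with the proxy versus 10 with the exact pre-pass, and the gap grows with silhouette length and overdraw.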

[Edited by - Toji on May 27, 2005 11:39:48 AM]

Ok, I understand what you are saying, but I would still like to experiment a bit. I was considering rather large models (read: ~17k tris). If I use an approximation of the model at 2k tris for the first pass, cutting the "expensive" work down substantially, I think there could be a benefit.

To test this, I would need an algorithm that produces reduced models that don't intersect the original model. And that's what I'm looking for.
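For that guarantee, the published techniques to search for are "simplification envelopes" and "progressive hulls", both of which constrain the simplified surface to stay on one side of the original. For the special case where the original mesh is convex, the containment test becomes trivial, and can be sketched like this (hypothetical helpers, not from any particular library):

```python
# If the ORIGINAL mesh is convex, "never pokes outside" reduces to
# testing every vertex of the reduced mesh against the original's face
# planes: a triangle is a convex combination of its vertices, so vertex
# containment implies the whole triangle is contained.

def inside_convex(planes, point, eps=1e-9):
    """planes: (normal, d) pairs with outward-facing normals; the
    interior satisfies dot(normal, p) <= d for every plane."""
    return all(sum(n_i * p_i for n_i, p_i in zip(n, point)) <= d + eps
               for n, d in planes)

def reduced_fits_inside(planes, reduced_vertices):
    """Accept the reduced mesh only if no vertex pokes outside the
    original convex volume."""
    return all(inside_convex(planes, v) for v in reduced_vertices)

# Example: the unit cube [0,1]^3 as six half-spaces.
unit_cube = [(( 1, 0, 0), 1.0), ((-1, 0, 0), 0.0),
             (( 0, 1, 0), 1.0), (( 0,-1, 0), 0.0),
             (( 0, 0, 1), 1.0), (( 0, 0,-1), 0.0)]

ok = reduced_fits_inside(unit_cube, [(0.2, 0.2, 0.2), (0.8, 0.8, 0.8)])
```

For non-convex originals this vertex test is no longer sufficient, which is exactly the hard part the envelope/hull methods deal with (they run the usual edge-collapse simplification, but reject any collapse whose new geometry would leave the permitted region).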

