an average of 60-70% of objects culled away, but that's because I do quite a lot of approximations, and the culling sometimes (often :D) gets a little too conservative. I rasterize the occluders without approximations, but to test for occlusion I only rasterize the screenspace bounding rectangle of each object's AABB. Also, the way I choose occluders isn't really good: it doesn't take the occluders' screenspace surface into account, so it sometimes leads to bad results. I simply pick the 10 nearest/biggest occluding objects and render their occluders into the base occlusion map (level 0). With better occluder selection, going down to something around 5 would probably give similar or better results...
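just to illustrate the "nearest biggest" idea, the selection could look something like this (everything here, the Candidate struct and the radius/distance score, is a hypothetical sketch, not my actual engine code):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical occluder candidate: a bounding-sphere approximation of the object.
struct Candidate {
    int   id;
    float radius;   // bounding-sphere radius
    float distance; // distance from the camera to the object's center
};

// "Nearest biggest" score: the apparent size of the bounding sphere grows with
// radius and shrinks with distance, so radius/distance ranks candidates the way
// "10 nearest biggest occluding objects" intends.
static float score(const Candidate& c) {
    return c.radius / std::max(c.distance, 1e-6f);
}

// Pick the maxOccluders best-scoring candidates and return their ids.
std::vector<int> selectOccluders(std::vector<Candidate> cands, std::size_t maxOccluders) {
    std::size_t n = std::min(maxOccluders, cands.size());
    std::partial_sort(cands.begin(), cands.begin() + n, cands.end(),
                      [](const Candidate& a, const Candidate& b) {
                          return score(a) > score(b);
                      });
    std::vector<int> ids;
    for (std::size_t i = 0; i < n; ++i) ids.push_back(cands[i].id);
    return ids;
}
```

a better heuristic would fold in an estimate of the occluder's actual screenspace surface, which is exactly what this one ignores.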
about pure performance: as my pipeline is in a full redesign process, I haven't taken advantage of full CPU/GPU parallelisation yet, so I don't get the occlusion maps for "free", but rendering the occluders + generating the HOMs from the base HOM level eats about 1.3-2 ms per frame, and there still are optimisations to make in the renderer. testing for occlusion is really, really fast if you use the screenspace bounding rectangle of the AABB and jump to the next HOM level as soon as you find a visible pixel. I don't have the exact timings in mind, but if you test the volumes of your space partitioning tree's nodes for occlusion, you get some kind of dynamic PVS and can avoid testing a huge amount of objects, so in the end there aren't that many tests anyway.
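roughly, the hierarchical test looks like this (a toy sketch of the idea, not my actual code; the HomLevel layout, the min-reduction in buildHom, and the "255 = fully opaque" convention are all my assumptions):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One mip level of the hierarchical occlusion map (HOM).
// A texel value of 255 means every underlying screen pixel is fully covered.
struct HomLevel {
    int w, h;
    std::vector<std::uint8_t> texel;
    std::uint8_t at(int x, int y) const { return texel[y * w + x]; }
};

// Build coarser levels by min-reduction: a parent texel is fully opaque
// only when all four of its children are.
std::vector<HomLevel> buildHom(HomLevel base, int levels) {
    std::vector<HomLevel> hom;
    hom.push_back(std::move(base));
    for (int l = 1; l < levels; ++l) {
        const HomLevel& f = hom.back();
        HomLevel c{f.w / 2, f.h / 2, {}};
        c.texel.resize(static_cast<std::size_t>(c.w) * c.h);
        for (int y = 0; y < c.h; ++y)
            for (int x = 0; x < c.w; ++x) {
                std::uint8_t m = 255;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx)
                        m = std::min(m, f.at(2 * x + dx, 2 * y + dy));
                c.texel[y * c.w + x] = m;
            }
        hom.push_back(std::move(c));
    }
    return hom;
}

// Conservative test of a screenspace rect (level-0 pixel coords, inclusive):
// the rect is occluded only if every texel it touches is fully opaque; on a
// partially covered texel we jump down one level and re-test just that region.
// A non-opaque pixel at level 0 means "visible", and we bail out immediately.
bool rectOccluded(const std::vector<HomLevel>& hom, int level,
                  int x0, int y0, int x1, int y1)
{
    const HomLevel& lv = hom[level];
    int shift = level; // each level halves the resolution
    for (int ty = y0 >> shift; ty <= (y1 >> shift); ++ty)
        for (int tx = x0 >> shift; tx <= (x1 >> shift); ++tx) {
            if (lv.at(tx, ty) == 255) continue; // fully covered here
            if (level == 0) return false;       // visible pixel found
            int px0 = std::max(x0, tx << shift), px1 = std::min(x1, ((tx + 1) << shift) - 1);
            int py0 = std::max(y0, ty << shift), py1 = std::min(y1, ((ty + 1) << shift) - 1);
            if (!rectOccluded(hom, level - 1, px0, py0, px1, py1))
                return false;
        }
    return true;
}
```

you'd start the test at the coarsest level whose texels are about the size of the rect, so fully occluded nodes are usually accepted after touching only a handful of texels.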
and it wasn't much trouble to implement at all: only about two days' work to get everything running... the most difficult part is what you are trying to do: automatically generating occluders, and finding a good way to select them.
v71>
quote: In an effort of mine at that time, I pregenerated virtual occluders as follows: I grouped adjacent triangles and fused them until they became a polygon with a fixed number of vertices; those polygons need to be coplanar. At rendering time I rasterized them into a memory buffer for testing occlusion.
Generating this kind of coplanar surface isn't very complicated: it is based on finding every triangle that shares an edge with the triangle you are currently iterating over.
mmh.. doesn't simply grouping coplanar polygons leave you with too many polys to render for complex geometry?
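for reference, the edge-sharing grouping you describe could be sketched as a flood fill over shared edges (a minimal sketch under my own assumptions: indexed triangles, consistent winding, and a dot-product epsilon as the coplanarity test; your fusion into fixed-vertex-count polygons would come after this step):

```cpp
#include <array>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
using Tri = std::array<int, 3>; // indices into a shared vertex array

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Flood-fill grouping: two triangles land in the same group when they share an
// edge and their (unit) normals agree within epsilon. Returns a group id per
// triangle; each group is a connected coplanar patch.
std::vector<int> groupCoplanar(const std::vector<Vec3>& verts,
                               const std::vector<Tri>& tris,
                               float epsilon = 1e-4f)
{
    // Map each undirected edge (sorted index pair) to the triangles using it.
    std::map<std::pair<int, int>, std::vector<int>> edgeToTris;
    for (int t = 0; t < static_cast<int>(tris.size()); ++t)
        for (int e = 0; e < 3; ++e) {
            int a = tris[t][e], b = tris[t][(e + 1) % 3];
            if (a > b) std::swap(a, b);
            edgeToTris[{a, b}].push_back(t);
        }

    auto normal = [&](int t) {
        const Tri& tr = tris[t];
        return normalize(cross(sub(verts[tr[1]], verts[tr[0]]),
                               sub(verts[tr[2]], verts[tr[0]])));
    };

    std::vector<int> group(tris.size(), -1);
    int next = 0;
    for (int seed = 0; seed < static_cast<int>(tris.size()); ++seed) {
        if (group[seed] != -1) continue;
        group[seed] = next;
        std::vector<int> stack{seed};
        while (!stack.empty()) {
            int t = stack.back();
            stack.pop_back();
            for (int e = 0; e < 3; ++e) {
                int a = tris[t][e], b = tris[t][(e + 1) % 3];
                if (a > b) std::swap(a, b);
                for (int n : edgeToTris[{a, b}]) {
                    if (group[n] == -1 && dot(normal(t), normal(n)) > 1.0f - epsilon) {
                        group[n] = next;
                        stack.push_back(n);
                    }
                }
            }
        }
        ++next;
    }
    return group;
}
```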
duhroach>
quote: During load times, we calculated an object's area (based on its generated AABB) and set a % value (against the size of the map) to determine if it is an occluder or not.
I also wanted to generate that % value this way, but simply considering the AABB won't work very well: if you've got something like a scaffolding, it will typically be quite big but full of holes, so it has a large AABB but a very poor occlusion factor.
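one cheap load-time fix for the scaffolding case could be to scale the AABB-based score by a "solidity" factor (this is just an idea I'm sketching, not something either of us actually ships: total triangle area over AABB surface area, so a solid box scores near 1 and a hole-filled lattice much lower):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float len(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Hypothetical "solidity" factor: sum of triangle areas divided by the AABB's
// surface area, clamped to 1. A scaffolding's sparse triangles barely cover
// its big AABB, so multiplying the AABB area by this demotes it as an occluder.
float solidity(const std::vector<std::array<Vec3, 3>>& tris,
               Vec3 aabbMin, Vec3 aabbMax)
{
    float triArea = 0.0f;
    for (const auto& t : tris)
        triArea += 0.5f * len(cross(sub(t[1], t[0]), sub(t[2], t[0])));
    Vec3 d = sub(aabbMax, aabbMin);
    float boxArea = 2.0f * (d.x * d.y + d.y * d.z + d.z * d.x);
    return std::min(1.0f, triArea / std::max(boxArea, 1e-6f));
}
```

it's still only a proxy (a closed sphere and a sieve with the same total triangle area score the same), but it's a lot better than the raw AABB alone.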
duhroach>
quote: This system allows for some interesting properties. Firstly, if an object is going to be an occluder all the time, then it needs to stay one regardless, so a pure virtual solution wouldn't suffice. At the same time, occludees can become occluders at close distances, so we allow virtual occluders to be recognized too.
mmh... I wonder if that's a good way of doing things. even an object that's going to be an occluder all the time can become an occludee at some stage.
what I currently do is test for occlusion every object/node that has not been chosen as an occluder this frame. I don't really see in what case treating some objects as permanent occluders would be useful?
quote: I'm using a pretty low grade LOD engine (in which the LODs are created by the artist and actually stored in the map), so I actually send the lowest LOD version of an occluder to the software rasterizer.
doesn't this imply your LOD mesh must never stick out of the original mesh? otherwise objects could be reported as occluded when they are actually visible.
and I loved Yann's example of the columns that could be approximated as two crossed quads: nothing to do with the original geometry, but really good occluders...
EDIT: typos
[edited by - sBibi on February 9, 2004 11:04:57 AM]