Generating occluders from polysoup

I'm starting to implement occlusion culling in my engine. I decided on a version of HOM, and for it I need occluders. I can't just send a few 2k-poly meshes to the software renderer just to create the occlusion map; I need a way to reduce them to something simple. All the examples of this OC method either have occluders selected by hand or provided with the scene, without any explanation of how to generate them. I've been searching for a while now for something about this and came up empty-handed. My best idea for doing it myself is something like an "inverted" convex hull: a hull generated "inside" the model. That wouldn't be so difficult if the models were closed meshes, but I just can't figure out how to do it with polysoup. So if anyone has any bright ideas on generating good low-poly occluders, please reply!

You should never let your fears become the boundaries of your dreams.

quote:
Original post by _DarkWIng_


There was a paper about generating occluders from polysoup. The idea is simple: build an octree from the soup until each node contains fewer than N primitives or is small enough. Then, for every node, check whether its faces are suited as occluders (completely inside the mesh, which you can get by BSPing the poly soup), or something like that, I don't remember exactly.
I'll try to find the paper.
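
For what it's worth, the subdivision step I picture looks roughly like the sketch below. This is only an illustration under my own assumptions (made-up types, a centroid-based distribution of triangles into children), not the paper's actual algorithm; the per-node occluder test would then run on the faces of each resulting leaf.

code:
// Sketch: recursively subdivide a triangle soup until a node holds fewer
// than maxTris triangles or becomes smaller than minSize. Types and the
// centroid-based distribution policy are illustrative assumptions.
#include <array>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v[3]; };
struct AABB { Vec3 min, max; };

struct OctreeNode {
    AABB bounds;
    std::vector<Triangle> tris;
    std::array<std::unique_ptr<OctreeNode>, 8> children;
};

static Vec3 center(const AABB& b) {
    return { (b.min.x + b.max.x) * 0.5f,
             (b.min.y + b.max.y) * 0.5f,
             (b.min.z + b.max.z) * 0.5f };
}

void build(OctreeNode& node, std::size_t maxTris, float minSize) {
    float size = node.bounds.max.x - node.bounds.min.x;
    if (node.tris.size() < maxTris || size < minSize)
        return;                          // leaf: few enough primitives or small enough

    Vec3 c = center(node.bounds);
    for (int i = 0; i < 8; ++i) {
        AABB b = node.bounds;
        // shrink the box to the i-th octant
        ((i & 1) ? b.min.x : b.max.x) = c.x;
        ((i & 2) ? b.min.y : b.max.y) = c.y;
        ((i & 4) ? b.min.z : b.max.z) = c.z;
        node.children[i] = std::make_unique<OctreeNode>();
        node.children[i]->bounds = b;
    }

    // hand each triangle to the octant containing its centroid
    // (clipping triangles against the node planes is the more exact alternative)
    for (const Triangle& t : node.tris) {
        Vec3 g = { (t.v[0].x + t.v[1].x + t.v[2].x) / 3.0f,
                   (t.v[0].y + t.v[1].y + t.v[2].y) / 3.0f,
                   (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f };
        int idx = (g.x > c.x ? 1 : 0) | (g.y > c.y ? 2 : 0) | (g.z > c.z ? 4 : 0);
        node.children[idx]->tris.push_back(t);
    }
    node.tris.clear();

    for (auto& child : node.children)
        build(*child, maxTris, minSize);
}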

Hi, I've worked on occlusion culling, but things didn't go as I expected. I had a heuristic to start the occlusion process given by the area of the fused occluders: I sent every surface to a software renderer, and when the occluded area went over a certain threshold, the querying process began. The bad thing is that to use such a method you need a reduced set of your geometry, often called virtual occluders; basically you have a simpler version of your map and use that for occlusion culling. Unfortunately the process is not trivial, since you have to preserve every door and window to allow line of sight. I think the most effective way is to use a simpler model: send a certain amount of polygons, using a function that can rasterize complex polygons with cracks inside, and when the threshold is reached, start querying every other surface you send for the occlusion test.
You should also try the OpenGL occlusion query extension, but from what I have read it has many problems with the rendering pipeline (bubbling).
Sorry for my English.

@Zemedelec: I already have an octree built (or rather an ABT), but getting the occluders out of it is another matter. Please try to find that paper.

@v71: I know all that. I just need an algorithm to generate virtual occluders from polysoup. And HW occlusion queries are out of the question; they're just too slow for now.

You should never let your fears become the boundaries of your dreams.

You have two possible ways to do it: screen space (1) and 3D space (2). (1) is more accurate but costs more run-time CPU. (2) is maybe more efficient, since it caches geometric information, but it's also certainly much more complex to handle... automatically.

I don't think you'll find a comfortable "inverse" convex hull algorithm in general. It might be possible for closed, non-transparent objects. I would think of an inverse adaptive-mesh algorithm: build parents recursively, verifying that when you merge two or more triangles you stay inside the original volume. However, when you remove a vertex you'll have to bend separate edges into a straight one, which may be difficult in most cases. Once you have removed most faces it should be easier to modify the mesh and then split it into connected convex hulls. The guideline is: since in the objects you target two convex hulls are necessarily separated by a convex facet, you have to test every cycle of edges (a kind of A* holding the same "direction" and rough planarity), then test rough convexity, displace vertices so that the cycle becomes actually convex, check that the mesh still fits inside the original volume (ray cast?), then split the mesh until you only have convex parts.

I suspect the ideal algorithm is very hard to imagine, let alone implement, and the processing complexity will certainly be very high. AI, genetic algorithms? I'm not even talking about non-closed or partially transparent models.

Since you are using a HOM, it seems easier to work in screen space. You can use outlines as in stencil shadows or other shadow volume techniques.

My viewpoint: I would consider a production flow that lets artists add hand-made CSG data (merged convex shapes). If a model is 8 hours of work, this would only add marginally to the artist's workload, probably 30 minutes or an hour. It may also be useful to shape the initial model with convex parts and then refine it, in which case it would cost nearly nothing. This data would help for collision detection and, incidentally, for occlusion culling. I know it's not your initial question, but it's surely something I plan to apply one day.

I also implemented a software HOM, inspired by YannL's descriptions of his HOM.
I searched for a solution for automatic occluder generation for a few days, but all the solutions I could come up with either weren't good enough or didn't work in all cases.
ATM the occluders are created by the artists at the same time as the models, an occlusion shader is bound to them, and the engine extracts them at load time. It's almost nothing to do for the artists compared to the actual modelling.

One idea I had was to split the whole model into convex volumes, and then try to connect all these convex volumes by faces that stay inside the volumes (you would remove volumes under a certain size to throw away small details, and later on the occlusion faces that are too small). But that was only the rough process, not how to implement it, and implementing "split the whole model into convex volumes, and then try to connect all these convex volumes by faces that stay inside the volumes" is anything but trivial. It might not be a good solution either, so I don't really know if it's worth the effort...

Thanks for the suggestions, sBibi & Charles B. I thought about using some kind of algorithm similar to progressive mesh generation or a mesh optimizer, but something like that requires either a convex mesh to start with, or somehow making sure the new version stays inside the bounds of the original mesh. The second option seems quite difficult to realize. I'm going to try first to find something on generating convex volumes from a mesh.

@sBibi: Was implementing HOM worth the trouble? What were the performance gains?

You should never let your fears become the boundaries of your dreams.

I suggest you take Zhang's approach and pregenerate convex hulls for the meshes; this way your meshes are reduced to simple planes. Then you can easily project the planes to device coordinates, process them from back to front in eye space, and finally calculate the visible surface areas. You can actually do this front to back as well; either way, you have to calculate the first and last entry twice.
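
Purely to illustrate the projection and area step (this is not Zhang's code; the matrix layout and helper names are my own assumptions): project each hull face to normalized device coordinates and measure its screen-space area with the shoelace formula.

code:
// Illustrative sketch: project a convex occluder polygon into NDC and
// measure its screen-space area. The row-major 4x4 view-projection matrix
// and helper names are assumptions, not Zhang's implementation.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

Vec2 projectToNDC(const float mvp[16], const Vec3& p) {
    float cx = mvp[0]*p.x  + mvp[1]*p.y  + mvp[2]*p.z  + mvp[3];
    float cy = mvp[4]*p.x  + mvp[5]*p.y  + mvp[6]*p.z  + mvp[7];
    float cw = mvp[12]*p.x + mvp[13]*p.y + mvp[14]*p.z + mvp[15];
    return { cx / cw, cy / cw };   // no clipping here: points behind the eye need extra care
}

// area of a polygon given in vertex order (shoelace formula), in NDC units
float screenArea(const std::vector<Vec2>& poly) {
    float a = 0.0f;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec2& p = poly[i];
        const Vec2& q = poly[(i + 1) % poly.size()];
        a += p.x * q.y - q.x * p.y;
    }
    return std::fabs(a) * 0.5f;
}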

This requires no z-buffer access, but pregenerating convex hulls can be a tough task. On the other hand, if you know exactly what your objects are, then generating them is not that hard.

I have not implemented this myself, but a friend of mine has a working implementation and, to tell the truth, it is quite fast even unoptimized.

Edit: I had a link on how to generate a convex hull out of polysoup, but I seem to have lost it. I'll update this after talking to my friend.

[edited by - captian goatse on February 7, 2004 8:12:20 AM]

In an effort of mine at the time, I pregenerated virtual occluders as follows: I grouped adjacent triangles and fused them until they became a polygon with a fixed number of vertices; those polygons need to be coplanar. At rendering time I rasterized them into a memory buffer for occlusion testing. Generating this kind of coplanar surface isn't terribly complicated; it comes down to finding, for each edge of the triangle you are currently iterating over, every triangle that shares that edge.
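
A rough sketch of that fusion pass, under my own assumptions (an indexed triangle mesh and a normal-only coplanarity test with a tolerance); a full version would also compare plane offsets, skip degenerate triangles, and cap the vertex count per patch:

code:
// Flood-fill across shared edges, accepting a neighbour only if its normal
// matches the seed triangle's normal within a tolerance. Names and the
// patch-id output format are illustrative assumptions, not v71's code.
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct TriMesh {
    std::vector<Vec3> positions;       // shared vertex positions
    std::vector<unsigned> indices;     // 3 indices per triangle
};

static Vec3 normalOf(const TriMesh& m, std::size_t t) {
    const Vec3& a = m.positions[m.indices[3*t+0]];
    const Vec3& b = m.positions[m.indices[3*t+1]];
    const Vec3& c = m.positions[m.indices[3*t+2]];
    Vec3 e1 = { b.x-a.x, b.y-a.y, b.z-a.z };
    Vec3 e2 = { c.x-a.x, c.y-a.y, c.z-a.z };
    Vec3 n = { e1.y*e2.z - e1.z*e2.y, e1.z*e2.x - e1.x*e2.z, e1.x*e2.y - e1.y*e2.x };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x/len, n.y/len, n.z/len };
}

// returns, for each triangle, the id of the coplanar patch it was merged into
std::vector<int> fuseCoplanar(const TriMesh& m, float cosTolerance = 0.999f) {
    std::size_t triCount = m.indices.size() / 3;

    // map each undirected edge to the triangles sharing it
    std::map<std::pair<unsigned, unsigned>, std::vector<std::size_t>> edgeToTris;
    for (std::size_t t = 0; t < triCount; ++t)
        for (int e = 0; e < 3; ++e) {
            unsigned a = m.indices[3*t + e];
            unsigned b = m.indices[3*t + (e + 1) % 3];
            if (a > b) std::swap(a, b);
            edgeToTris[{a, b}].push_back(t);
        }

    std::vector<int> patch(triCount, -1);
    int nextPatch = 0;
    for (std::size_t seed = 0; seed < triCount; ++seed) {
        if (patch[seed] != -1) continue;
        Vec3 n0 = normalOf(m, seed);
        std::vector<std::size_t> stack = { seed };
        patch[seed] = nextPatch;
        while (!stack.empty()) {
            std::size_t t = stack.back(); stack.pop_back();
            for (int e = 0; e < 3; ++e) {
                unsigned a = m.indices[3*t + e];
                unsigned b = m.indices[3*t + (e + 1) % 3];
                if (a > b) std::swap(a, b);
                for (std::size_t nb : edgeToTris[{a, b}]) {
                    if (patch[nb] != -1) continue;
                    Vec3 n = normalOf(m, nb);
                    if (n.x*n0.x + n.y*n0.y + n.z*n0.z > cosTolerance) {
                        patch[nb] = nextPatch;
                        stack.push_back(nb);
                    }
                }
            }
        }
        ++nextPatch;
    }
    return patch;   // triangles sharing a patch id can be fused into one polygon
}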

I actually just went through a contest with a topic similar to this.

I found the best solution was a mix between static and virtual selection.

During load time, we calculate an object's area (based on its generated AABB) and compare it against a percentage of the map size to determine whether it is an occluder or not.
On top of this, our static occluders can be supplemented with virtual occluders, which are generated at run time using the screen-space bounding rectangle as an estimate of volume. Again, a percentage value here (against the screen size) determines whether an object can be a virtual occluder.
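
A minimal sketch of those two percentage checks (the threshold values, names, and the largest-face metric are my own assumptions, not the contest code):

code:
// Sketch of the two selection heuristics: a static check at load time against
// the map size, and a virtual check at run time against the screen size.
// Thresholds and the largest-AABB-face metric are illustrative assumptions.
#include <algorithm>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// static selection: occluder if the box's largest face is a big enough
// fraction of the map box's largest face
bool isStaticOccluder(const AABB& obj, const AABB& map, float ratio = 0.05f) {
    Vec3 o = { obj.max.x - obj.min.x, obj.max.y - obj.min.y, obj.max.z - obj.min.z };
    Vec3 m = { map.max.x - map.min.x, map.max.y - map.min.y, map.max.z - map.min.z };
    float objFace = std::max({ o.x * o.y, o.x * o.z, o.y * o.z });
    float mapFace = std::max({ m.x * m.y, m.x * m.z, m.y * m.z });
    return objFace > ratio * mapFace;
}

// virtual selection: occluder if the screen-space bounding rectangle
// (already projected and clamped to the viewport) covers enough of the screen
bool isVirtualOccluder(float rectW, float rectH,
                       float screenW, float screenH, float ratio = 0.10f) {
    return rectW * rectH > ratio * screenW * screenH;
}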

This system allows for some interesting properties. Firstly, if an object is going to be an occluder all the time, then it needs to stay one regardless, so a purely virtual solution wouldn't suffice. At the same time, occludees can become occluders at close distances, so we allow virtual occluders to be recognized too.

The only real problem I ran into was that in some cases we'd end up with occluders hiding each other (most of the time it was virtual hiding static). I found that I could get around this by doing a second occlusion test of the virtual occluders against the static ones, minimizing the amount of data sent to the software rasterizer (although the extra overhead on the rasterizer was minimal, so this extra test wasn't really needed).

I'm using a pretty low-grade LOD engine (in which the LODs are created by the artist and actually stored in the map), so I actually send the lowest-LOD version of an occluder to the software rasterizer.

This method isn't really a "fix all" solution, but it does find a lot of middle ground between the techniques and kind of gets the best from both of them.


My Paper: Software Rasterization for Improved Occlusion Culling

Hope that helped.

~Main


==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

[edited by - duhroach on February 7, 2004 12:02:55 PM]

_DarkWing_: "Was implementing HOM worth the trouble? What were the performance gains?"

An average of 60-70% of objects culled away, but that's because I do quite a lot of approximations, and the culling sometimes (often :D) gets a little too conservative. (I rasterize the occluders without approximations, but to test for occlusion I rasterize the screen-space bounding rectangle of the AABBs. Also, the way I choose occluders isn't really good and doesn't take the occluders' screen-space surface into account, so sometimes it leads to bad results: I only choose the 10 nearest, biggest occluding objects and render their occluders into the base (level 0) occlusion map. With better occluder selection, going down to something around 5 would probably give similar or better results...)
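
As an aside, an "N nearest, biggest" pick like that can be as simple as sorting candidates by an approximate projected-size score; a tiny sketch (the score and the names are mine, not sBibi's code):

code:
// Keep the N candidates with the largest radius/distance ratio,
// a rough proxy for projected size. Names and score are illustrative.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Candidate {
    float radius;     // bounding-sphere radius
    float distance;   // distance from the camera
    int   objectId;
};

std::vector<Candidate> pickOccluders(std::vector<Candidate> all, std::size_t n) {
    std::sort(all.begin(), all.end(), [](const Candidate& a, const Candidate& b) {
        return a.radius / std::max(a.distance, 1e-3f) >
               b.radius / std::max(b.distance, 1e-3f);
    });
    if (all.size() > n)
        all.resize(n);
    return all;
}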

About pure performance: as my pipeline is in a full redesign, I haven't taken advantage of full CPU/GPU parallelisation yet, so I don't get the occlusion maps for "free", but rendering the occluders plus generating the HOMs from the base HOM level eats about 1.3-2 ms per frame, and there are still optimisations to make in the renderer. Testing for occlusion is really, really fast if you use the screen-space bounding rectangle of the AABB and jump to the next HOM level as soon as you find a visible pixel. (I don't have the timings in mind, but when you test whatever space-partitioning tree node volumes you have for occlusion, you get some kind of dynamic PVS and can avoid testing a huge amount of objects, so in the end there aren't that many tests anyway.)
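
For reference, a sketch of that kind of hierarchical rectangle test. The layout is an assumption of mine: level 0 is the base occlusion map, each coarser level is a conservative 2x2 reduction (a texel is 255 only if all four children are 255), the base map is a power of two in size, and the rectangle is given in base-map pixel coordinates already clamped to the map.

code:
// Returns true only if every base-map pixel in [x0,x1]x[y0,y1] is covered.
// Opaque texels at coarse levels let whole blocks be skipped; ambiguous
// texels are refined one level down, restricted to the rectangle.
#include <algorithm>
#include <cstdint>
#include <vector>

struct HomLevel {
    int width, height;
    std::vector<std::uint8_t> opacity;   // 255 = fully covered by occluders
};

bool rectOccluded(const std::vector<HomLevel>& pyramid,
                  int x0, int y0, int x1, int y1, int level)
{
    const HomLevel& hom = pyramid[level];
    for (int ty = y0 >> level; ty <= (y1 >> level); ++ty)
        for (int tx = x0 >> level; tx <= (x1 >> level); ++tx) {
            if (hom.opacity[ty * hom.width + tx] == 255)
                continue;                       // this whole block is covered
            if (level == 0)
                return false;                   // an uncovered pixel inside the rectangle
            // ambiguous texel: refine only the part of the rectangle under it
            int bx0 = std::max(x0, tx << level), bx1 = std::min(x1, ((tx + 1) << level) - 1);
            int by0 = std::max(y0, ty << level), by1 = std::min(y1, ((ty + 1) << level) - 1);
            if (!rectOccluded(pyramid, bx0, by0, bx1, by1, level - 1))
                return false;
        }
    return true;
}

// usage: start at the coarsest level, e.g.
// bool hidden = rectOccluded(pyramid, x0, y0, x1, y1, (int)pyramid.size() - 1);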


And it wasn't any trouble to implement at all, only about two days of work to get everything working... The most difficult part is what you are trying to do: automatically generating occluders, and also finding a good way to select them.

v71>
quote:
In an effort of mine at the time, I pregenerated virtual occluders as follows: I grouped adjacent triangles and fused them until they became a polygon with a fixed number of vertices; those polygons need to be coplanar. At rendering time I rasterized them into a memory buffer for occlusion testing. Generating this kind of coplanar surface isn't terribly complicated; it comes down to finding, for each edge of the triangle you are currently iterating over, every triangle that shares that edge.


mmh.. doesn't simply grouping coplanar polygons leave you with too many polys to render for complex geometry?

duhroach>
quote:
During load time, we calculate an object's area (based on its generated AABB) and compare it against a percentage of the map size to determine whether it is an occluder or not.


I also wanted to generate that percentage value this way, but simply considering the AABB won't work very well. If you've got something like scaffolding, typically it will be quite big but full of holes, so it will have a large AABB but a very poor occlusion factor.

duhroach>
quote:
This system allows for some interesting properties. Firstly, if an object is going to be an occluder all the time, then it needs to stay one regardless, so a purely virtual solution wouldn't suffice. At the same time, occludees can become occluders at close distances, so we allow virtual occluders to be recognized too.


mmh... I wonder if that's a good way of doing things. Even an object that's going to be an occluder all the time can become an occludee at some stage.
What I do currently is test all objects/nodes that have not been chosen as occluders for occlusion. I don't really see in what case treating some objects as static occluders can be useful?

quote:
I'm using a pretty low-grade LOD engine (in which the LODs are created by the artist and actually stored in the map), so I actually send the lowest-LOD version of an occluder to the software rasterizer.


Does this imply your LOD mesh is never bigger than the original mesh (so objects aren't reported as occluded when they are actually visible)?
And I loved Yann's example of the columns that could be approximated by two crossed quads: nothing to do with the original geometry, but really good occluders...

EDIT: typos

[edited by - sBibi on February 9, 2004 11:04:57 AM]

quote:
I also wanted to generate that percentage value this way, but simply considering the AABB won't work very well. If you've got something like scaffolding, typically it will be quite big but full of holes, so it will have a large AABB but a very poor occlusion factor.


Yeah, that's a common problem, but the only solution we provided was to subdivide a mesh into AABB nodes, so that the composite AABB is actually a tightly fitting box.

quote:

mmh... I wonder if that's a good way of doing things. Even an object that's going to be an occluder all the time can become an occludee at some stage.
...
I don't really see in what case treating some objects as static occluders can be useful?



A good occluder is always a good occluder. Take, for instance, a large mountain range that hides a forest behind it. Even at a distance large enough that the mountain would not be chosen as a virtual occluder, it would still be a good choice, because of the forest it hides. Having a way to pre-determine static occluders minimizes the chance of rendering that forest.

quote:

What I do currently is test all objects/nodes that have not been chosen as occluders for occlusion.

Well, that's essentially a double-pass occlusion test, but you still run into problems when deciding what your "first pass" occluder objects are, mainly because if you choose an object as a base occluder that is itself hidden by another object, you're really not gaining anything.

With my implementation I don't take occluder-vs-occluder interaction into account, and I have yet to see a substantial performance hit (or one at all, really).

~Main


==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

Thanks for some great suggestions. Right now I'm searching for polygon-reduction algorithms; I'm hoping to be able to modify one of them so I can create virtual occluders automatically. For now I'll just use static occluders, like the closest 10 or so.

You should never let your fears become the boundaries of your dreams.

[edited by - _DarkWIng_ on February 10, 2004 6:28:09 AM]

Here's a good link.

IMO, frustum culling against convex hull edges seems like the thing to do.

-------
The problem with common sense is that it is never common.

[edited by - Countach on February 10, 2004 10:16:03 AM]

Occluder simplification actually depends on what data you need in your occlusion map. If you're doing a standard HOM, then your occlusion map contains only boolean data: whether a bit was drawn there or not. In this case, you can perform occluder mesh simplification by projecting all occluders to the near plane and doing a mesh reconstruction on the points. From there, an LOD pass on the resulting mesh will leave you with a very small number of polygons.
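
For the boolean case, each coarser HOM level (the ones sBibi mentioned generating from the base level) can be built with a conservative 2x2 reduction like the sketch below. This is only one possible policy under my own assumptions (255 marks a drawn/covered texel, even map dimensions); Zhang's original HOM averages opacities instead.

code:
// One conservative reduction step: a coarse texel is opaque only if all
// four source texels are. The layout (tightly packed, row-major, even
// sizes) is an assumption for illustration.
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> downsample(const std::vector<std::uint8_t>& src,
                                     int srcW, int srcH)
{
    int dstW = srcW / 2, dstH = srcH / 2;
    std::vector<std::uint8_t> dst(static_cast<std::size_t>(dstW) * dstH, 0);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            bool allOpaque =
                src[(2*y    ) * srcW + 2*x    ] == 255 &&
                src[(2*y    ) * srcW + 2*x + 1] == 255 &&
                src[(2*y + 1) * srcW + 2*x    ] == 255 &&
                src[(2*y + 1) * srcW + 2*x + 1] == 255;
            dst[static_cast<std::size_t>(y) * dstW + x] = allOpaque ? 255 : 0;
        }
    return dst;
}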

If, however, you're using the depth-buffer approach that Yann talked about, you can't construct that sort of mesh, due to the depth information lost in the projection and the LOD calculations. Any combination of data with an LOD will change the depth values of a given occluder, which will alter how your tests work. The only suitable way I found to reduce the load on the software rasterizer while maintaining the depth information was to just send the lowest LOD of the occluder down the pipeline.

~Main

==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave
