Occlusion Techniques !!!

I really don't understand your method. Is the bounding box the node's bounding box? What are the other two dimensions of the node side?
This seems odd: you talk about using the node's triangles to perform occlusion against... the node itself!(?) It's really not clear...
yes, by 'node' i meant axis-aligned bounding box. i should have been more clear about that.

about the 'other' dimensions:

if you imagine an axis-aligned bounding box sitting in the air, and a point somewhere outside the box that represents the eye, then there are at most 3 sides of the box that the eye can see at one time. these sides will be the aabb min/max in the x, y, and/or z axes (obviously, the eye cannot see both the min and the max in any one dimension). since the box is axis-aligned, each of the (up to 3) visible aabb sides will be aligned in a dimension. so if the x min/max side is visible, the 'other' dimensions are y and z; if the y min/max side is visible, the 'other' dimensions are x and z; and so on.

anyway, the main point of the algo was to project potential occluding triangles onto these visible node sides to see if they completely cover the side. the calculations reduce to 2d math due to the axis-alignment, and the dimensions that the 2d math is performed in are the 'other' dimensions of the side.
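
as a rough illustration of the face selection (a sketch only; Vec3, AABB and the helper names here are made up, not code from this thread):

// Picks which axis-aligned faces of an AABB the eye can see, and which two
// axes (the 'other' dimensions) the 2d coverage math is performed in.
struct Vec3 { float v[3]; };
struct AABB { Vec3 min, max; };

struct VisibleFace {
    int   axis;   // 0=x, 1=y, 2=z: the axis this face is perpendicular to
    float plane;  // the box's min or max coordinate on that axis
    int   u, v;   // the two 'other' axes used for the 2d projection
};

// Returns the number of faces written (at most 3).
int visibleFaces(const AABB& box, const Vec3& eye, VisibleFace out[3])
{
    int count = 0;
    for (int axis = 0; axis < 3; ++axis) {
        // the eye can see the min face, the max face, or neither on this
        // axis, but never both
        if (eye.v[axis] < box.min.v[axis])
            out[count++] = { axis, box.min.v[axis], (axis + 1) % 3, (axis + 2) % 3 };
        else if (eye.v[axis] > box.max.v[axis])
            out[count++] = { axis, box.max.v[axis], (axis + 1) % 3, (axis + 2) % 3 };
    }
    return count;
}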
mmh that's it. I think it's not a good method compared to "classic" occlusion methods. I'll let you guess why ;)

by "classic" are you referring to things like octree/abt/etc. trees? the algo i described works on top of those methods. in fact, it requires that triangles are sorted such as in an octtree. it is only after a node has passed all other visibility tests that it would be subject to this one.

like the ap said, if you are looking at a wall and behind the wall is a lot of geometry, it will all get rendered if you just use the 'node-in-frustum' method (again, is this what you mean by "classic"?). this requires artists to be more careful about the environments, or the developers to detect and handle special cases, etc.

even if the algo i mentioned is anywhere near ideal (which i'm not sure it is), there are faster ways of doing it, such as those already mentioned. however, those all involve using the hardware, and i was simply trying to find an ideal *software* solution to the problem.
By "classic" occlusion methods I mean methods that use occlusion map(s), portals, etc.
The problem with your method is that you do not reuse the information you compute when processing a node. I advise you to look into beam trees. It is similar to your method, but you do not project a triangle n times, and you keep the hierarchy you mention...
i assume that occlusion maps (and image space techniques in general) are commonly hardware driven, and, well, see my previous post about that. as for portals, avoiding them was one of the main things i had in mind. i designed for maximum artist freedom, minimum developer work, perfect occlusion, and low runtime overhead (basically, an ideal solution). i've got the first 3, dunno 'bout the last.

i didn't want to get too detailed, but i could list plenty of optimizations. an obvious one is that after determining that a node is visible, you will be left with the "holes" that "shine through". when pre-processing the children, the visibility info for each child is clipped to these "holes". so, information calculated for a given node is reused in calculating its children.
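
to picture the "holes" idea, here is a minimal sketch of my own (Rect2D and the function name are illustrative; the real visibility info could be any 2d region representation):

#include <algorithm>
#include <vector>

// A hole is a 2d rectangle, in the projection space of a node face, that is
// not yet covered by occluders.
struct Rect2D { float minU, minV, maxU, maxV; };

// Clip a child's projected rectangle against the holes left over from the
// parent's test; only the overlapping pieces need further occlusion work.
std::vector<Rect2D> clipToHoles(const Rect2D& child,
                                const std::vector<Rect2D>& parentHoles)
{
    std::vector<Rect2D> clipped;
    for (const Rect2D& h : parentHoles) {
        Rect2D r;
        r.minU = std::max(child.minU, h.minU);
        r.minV = std::max(child.minV, h.minV);
        r.maxU = std::min(child.maxU, h.maxU);
        r.maxV = std::min(child.maxV, h.maxV);
        if (r.minU < r.maxU && r.minV < r.maxV)
            clipped.push_back(r);  // non-empty overlap: still potentially visible
    }
    return clipped;                // empty result => child is fully occluded
}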

anyway, thanks for the info on beam trees. they do seem similar and i'll be reading up on them some more.
In Midtown Madness 3 we obeyed the KISS rule and did the following:

1: Artists created an object for the whole city containing occlusion polygons.
The polygons were placed inside buildings etc., so it was a very crude approximation of the visibility.

2: We put these polygons into a 2D grid, i.e. for every cell we stored the polygons that touched it.

3: When the camera transform changed (once per frame) we found all nearby occlusion polygons using the grid. For every polygon we calculated the volume it occluded (within the frustum). Then we selected the few polygons (up to five, I think) that occluded the largest volumes. For each selected polygon we calculated the "frustum" it created (for a quad this was five planes).

4: Every "object", grid cell, BV-node etc we then did normal frustum culling first, if the object passed we tested it against the occlusion frustums.

In worst-case scenarios where no objects were occluded, the extra cost couldn't be measured, and in the best-case scenarios the FPS went up by around 30%-40%. It's a VERY simple system, but it's easy to implement and you probably already have the functions for classifying "things" against planes anyway.
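
A rough sketch of steps 3 and 4 (my own reconstruction, not the game's code; the quad is assumed to be wound counter-clockwise as seen from the eye, and all type/function names are illustrative):

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Plane { Vec3 n; float d; };  // points with dot(n,p)+d < 0 are "inside"

static Plane planeFromPoints(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 n = cross(b - a, c - a);
    float len = std::sqrt(dot(n, n));
    n = { n.x / len, n.y / len, n.z / len };
    return { n, -dot(n, a) };
}

// Five planes per quad occluder: four side planes through the eye and each
// edge, plus the occluder's own plane (so only things behind it get culled).
void buildOcclusionFrustum(const Vec3& eye, const Vec3 quad[4], Plane out[5])
{
    for (int i = 0; i < 4; ++i)
        out[i] = planeFromPoints(eye, quad[i], quad[(i + 1) % 4]);
    out[4] = planeFromPoints(quad[0], quad[1], quad[2]);
}

struct Sphere { Vec3 c; float r; };

// An object is occluded only if its bounding volume is entirely inside every
// plane of the occlusion frustum (a bounding sphere is used here for brevity).
bool sphereOccluded(const Sphere& s, const Plane planes[5])
{
    for (int i = 0; i < 5; ++i)
        if (dot(planes[i].n, s.c) + planes[i].d > -s.r)
            return false;  // pokes out of the occlusion volume
    return true;
}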
Nice, thank you Anonymous Poster.
here's my idea. simple, but works much better than it looks.
i keep my scene in a kd-tree (any hierarchy of aa bboxes is ok)
also i use the simplest possible occlusion map (a simple bitmap works ok, but now i'm using something similar to an s-buffer)
you also need 2 priority queues (binary heaps or something)
now
walk your scene front to back using the first heap, sorting nodes with the largest distance from the camera (projected on the lookat vector) as the key
the second heap keeps what i call the 'potential safe occluder set' (PSOS)
a safe occluder is a node that can be safely drawn to the occlusion buffer because it can't cause false occlusions (its largest Z is smaller than the smallest Z of all nodes yet to be tested)
the second heap is sorted using the smallest Z as the key
now the main algorithm goes like this:

1. if there are any safe occluders in the PSOS, draw them and remove them from the PSOS
2. test the current node's visibility (using a simple coverage test)
3. if visible, draw the node's contents, add the node to the PSOS, and recurse to its children

of course this is not the most efficient safe occluder test (more geometry will be considered visible than strictly necessary), but a full test would be much slower.
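
(a small aside, not from the post itself: the znear/zfar keys above could be computed by projecting the aabb's corners onto the view direction, roughly like this)

#include <algorithm>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// 'look' must be normalized; distances are measured along the view direction.
void nodeZRange(const AABB& box, const Vec3& eye, const Vec3& look,
                float& znear, float& zfar)
{
    const Vec3 corners[2] = { box.min, box.max };
    znear =  1e30f;
    zfar  = -1e30f;
    for (int i = 0; i < 8; ++i) {
        Vec3 c = { corners[(i >> 0) & 1].x,
                   corners[(i >> 1) & 1].y,
                   corners[(i >> 2) & 1].z };
        float z = dot({ c.x - eye.x, c.y - eye.y, c.z - eye.z }, look);
        znear = std::min(znear, z);
        zfar  = std::max(zfar,  z);
    }
    // these znear/zfar values are what the post uses as keys for its two heaps
}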

now some pseudocode:


queue q1, q2;        // q1: front-to-back traversal, q2: potential safe occluder set
occlusion_map map;

void cull(void) {
    node nd;

    q1.clear(); q2.clear(); map.clear();

    q1.push(kdtree root);
    while (q1 is not empty) {
        nd = q1.pop();
        if (nd->bounding box is visible) {

            // if q2 contains a node whose largest z is smaller than the
            // smallest z of the current node, draw it to the occlusion map.
            // the idea is that nd has the smallest Z of all nodes still
            // waiting to be tested, and since q2 is also sorted, the first
            // node that fails this test is the last one checked this iteration

            while ((q2 is not empty) && (q2.top->zfar < nd->znear)) {
                draw q2.top to occlusion map;
                q2.pop();
            }

            // now we test for occlusion using the map
            if (nd->bounding box is not fully covered in 'map') {
                draw node contents or add to visible nodes list;
                q2.push(nd);    // add to the potential safe occluder set (step 3)
                q1.push(all nd's children);
            }
        }
    }
}
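
(again an illustrative sketch, not biki's code: the "simple bitmap" occlusion map could look something like this. the caller is assumed to pass the cells an occluder covers completely to drawCells, and the cells a tested box touches to areCellsCovered, so partially covered border cells never cause a false occlusion)

#include <bitset>

class OcclusionBitmap {
    static const int W = 64, H = 64;   // deliberately low resolution
    std::bitset<W * H> covered;
public:
    void clear() { covered.reset(); }

    // Mark a rectangle of cells as covered (inclusive cell coords, in range).
    void drawCells(int x0, int y0, int x1, int y1) {
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                covered.set(y * W + x);
    }

    // A node is occluded only if every cell its projected box touches is covered.
    bool areCellsCovered(int x0, int y0, int x1, int y1) const {
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                if (!covered.test(y * W + x))
                    return false;
        return true;
    }
};

the s-buffer variant mentioned above would store per-row spans instead of individual bits, but the interface (draw, is-fully-covered) would stay the same.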

if you want to see how it works in real life, have a look here:
http://server.lomianki.pl/5545/docs/badziew/culler.zip
biki, your method sounds good. i've downloaded your code sample and it is faster with a brute force approach than with occlusion culling. are you using a span buffer? some time ago i was tempted to use one, but while writing down the algorithm i realized that the complexity might explode within a few frames, and i think that is what is happening with your code (don't judge me badly).
lately i've been seduced once again by occlusion culling. i think that using a low-res span buffer might do the job, but evaluating a good occluder normally leads to a heuristic, and the fact is that designers push many triangles to define worlds, and the trend is to increase them.
i've come up with an idea: store the geometry in a kd-tree or your preferred data structure, render the nearest nodes (let's say 10), and from that point on you start to cull with a full hom map and bounding box testing.
what do you think?

