A different approach to frustum culling?

Started by u235. 1 comment, last by Ysaneya 16 years, 3 months ago
So there I was, roving around on the mid watch, bored out of my skull, and I started thinking about programming. More specifically, my thoughts drifted towards different methods of frustum culling. That's when I came up with a method that I've never seen before, and it seems (at least in my head) like it could be a viable alternative to existing methods (unless it already exists, of course). Now, most frustum culling techniques try to figure out what shouldn't be seen and throw it out of the rendering loop, while my method tries to isolate what should be seen and keep it in the rendering loop. The basic rundown of it is:

- Maintain a list of all objects in the world
- Sort them by distance from the camera
- Loop through the objects starting with the closest one
- As soon as an object is found that can't possibly be in the view frustum, terminate the loop

The test for whether an object is possibly visible would be based on the far plane of the view frustum. Here is some pseudo-code for how it would work:

Sort objects in the world by distance from the camera
For each object in the sorted list
    if (distance < far plane distance)
        if (angle between the cam-to-object vector and the cam lookat vector < fov/2
            or > 360 - (fov/2))
            Add the object to the list of objects to be sent down the pipeline
    else
        terminate loop
The resultant data set should be the objects above, in, and below the view frustum. Once you have this reduced data set, test the remaining objects against the view frustum. Obviously, as the number of objects in the world increases, the performance hit from the sort goes up, so I imagine there is a point where the method would no longer be useful (assuming it is useful to begin with).

I was wondering if I could gather your thoughts on this. If the feedback is positive, I'd like to code a little proof of concept. Thanks for reading.
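To make it concrete, here's a rough C++ sketch of the loop above. Everything in it (Object, cullByDistance, and so on) is a placeholder name I made up, and instead of wrapping degrees around 360 it compares a dot product against cos(fov/2), which amounts to the same cone test:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 v)      { return std::sqrt(dot(v, v)); }

struct Object { Vec3 position; /* mesh, etc. */ };

// Sort by distance, stop at the far plane, and keep objects whose
// direction lies within fov/2 of the (normalized) look vector.
std::vector<const Object*> cullByDistance(const std::vector<Object>& world,
                                          Vec3 camPos, Vec3 lookDir,
                                          float farDist, float fovRadians)
{
    struct Entry { const Object* obj; float dist; };
    std::vector<Entry> sorted;
    sorted.reserve(world.size());
    for (const Object& o : world)
        sorted.push_back({ &o, length(sub(o.position, camPos)) });

    std::sort(sorted.begin(), sorted.end(),
              [](const Entry& a, const Entry& b) { return a.dist < b.dist; });

    const float cosHalfFov = std::cos(fovRadians * 0.5f);
    std::vector<const Object*> visible;
    for (const Entry& e : sorted)
    {
        if (e.dist >= farDist)
            break; // sorted by distance, so everything after this is farther

        // dot of unit vectors = cos(angle); bigger cosine = smaller angle
        Vec3 toObj = sub(e.obj->position, camPos);
        if (e.dist > 0.0f && dot(toObj, lookDir) / e.dist > cosHalfFov)
            visible.push_back(e.obj);
    }
    return visible;
}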
V/R,
-AJ

There are 10 kinds of people in the world: Those who understand binary and those who don't...
Would you be better off using the knowledge of camera-object distance to sort by angle instead, stopping when the angle is greater than fov/2? You could then use the distance as a subsequent test for anything within the angle range. I ask because it seems to me, offhand, that the left, right, top and bottom planes would generally eliminate more objects by your method than the far plane would, which might compensate for the computational overhead. Sounds simple enough that somebody must've already found a reason why it's no good, but maybe not...
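Roughly, I'm picturing something like this (a self-contained C++ sketch with made-up names, using the same cone-style angle test as the original idea, just with the sort and the early-out moved to the angle):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 v)      { return std::sqrt(dot(v, v)); }

struct Object { Vec3 position; };

// Sort by angle from the look direction, stop once the angle passes
// fov/2, and use the distance as the cheap second test.
std::vector<const Object*> cullByAngle(const std::vector<Object>& world,
                                       Vec3 camPos, Vec3 lookDir, // normalized
                                       float farDist, float fovRadians)
{
    struct Entry { const Object* obj; float dist; float cosAngle; };
    std::vector<Entry> sorted;
    sorted.reserve(world.size());
    for (const Object& o : world)
    {
        Vec3 toObj = sub(o.position, camPos);
        float d = length(toObj);
        // cos of the angle between lookDir and the direction to the object
        sorted.push_back({ &o, d, d > 0.0f ? dot(toObj, lookDir) / d : 1.0f });
    }

    // Largest cosine first = smallest angle first.
    std::sort(sorted.begin(), sorted.end(),
              [](const Entry& a, const Entry& b) { return a.cosAngle > b.cosAngle; });

    const float cosHalfFov = std::cos(fovRadians * 0.5f);
    std::vector<const Object*> visible;
    for (const Entry& e : sorted)
    {
        if (e.cosAngle < cosHalfFov)
            break; // sorted by angle, so everything after this is wider still
        if (e.dist < farDist)
            visible.push_back(e.obj);
    }
    return visible;
}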
Quote: Original post by u235

Sort objects in the world by distance from the camera
For each object in the sorted list
    if (distance < far plane distance)
        if (angle between the cam-to-object vector and the cam lookat vector < fov/2
            or > 360 - (fov/2))
            Add the object to the list of objects to be sent down the pipeline
    else
        terminate loop

I was wondering if I could gather your thoughts on this. If the feedback is positive, I'd like to code a little proof of concept. Thanks for reading.


My first thought is that it's very unlikely to be faster than "standard" frustum culling with, say, an octree, because of the sort.

The angle comparison looks suspicious; you're equating the frustum to a cone (by taking the angle between the object and lookat vectors) when it is really made of 6 planes. Even if the algorithm worked (more on that below), it would be less efficient: you'd have to increase the angle threshold to avoid missing any object, which in turn means that many objects outside the actual frustum but inside the cone would be flagged as visible and uselessly rendered. Also note that the higher the aspect ratio (width/height), the worse the cone is at approximating the real frustum.
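For reference, a standard sphere-vs-frustum test against the six planes looks roughly like this. The Plane and Sphere types and the inward-facing-normal convention are my own assumptions here, not anything from the post above:

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 normal; float d; }; // inward-facing: dot(n, p) + d >= 0 inside
struct Sphere { Vec3 center; float radius; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The sphere is outside the frustum iff it lies entirely behind at
// least one of the six planes; the radius makes object size count.
bool sphereInFrustum(const Sphere& s, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i)
        if (dot(planes[i].normal, s.center) + planes[i].d < -s.radius)
            return false; // fully behind this plane
    return true; // inside or intersecting (conservative)
}

Each plane check is one dot product and a compare, so the exact test isn't much more expensive per object than the cone approximation.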

The algorithm doesn't seem to work well either. For example, you don't take the object's dimensions into account at all in your angle calculation. And in angle calculations you cannot just add dot products linearly the way you would add real angles.
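If you did want to keep a cone test, the usual conservative fix for object size is to widen the threshold by the angular radius of a bounding sphere, something like this (made-up names again, just a sketch):

#include <cmath>

// Conservative sphere-vs-cone test: widen the cone's half angle by the
// angular radius the bounding sphere subtends at its distance.
bool sphereInCone(float angleToCenter, // angle between look dir and center, radians
                  float dist,          // camera to sphere center
                  float radius,        // bounding sphere radius
                  float halfFov)       // cone half angle, radians
{
    if (dist <= radius)
        return true; // camera is inside the sphere
    float angularRadius = std::asin(radius / dist);
    return angleToCenter <= halfFov + angularRadius;
}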

Finally, you're terminating the loop as soon as you've found an invisible object, but it's really not hard to imagine a situation where an object close to you is flagged as invisible while another object farther away is still visible.

Y.

