Efficient backface culling
I'm quite new to 3D programming, so stop me if you've heard this one before!
Standard backface culling iterates through a set of polygons, calculates the view vector for each, and takes the dot product of the view and normal vectors.
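In code, that per-polygon test looks roughly like this (a sketch only - the vector type and names here are just for illustration):

```cpp
#include <vector>
#include <cassert>

// Minimal illustrative vector type.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

struct Polygon {
    Vec3 normal;   // face normal, assumed unit length
    Vec3 centroid; // any point on the face works for the test
};

// Standard test: a face is back-facing when its normal points away
// from the viewer, i.e. dot(eye-to-face, normal) >= 0.
std::vector<const Polygon*> cullBackfaces(const std::vector<Polygon>& polys,
                                          const Vec3& eye) {
    std::vector<const Polygon*> visible;
    for (const Polygon& p : polys) {
        Vec3 view = sub(p.centroid, eye); // vector from eye to the face
        if (dot(view, p.normal) < 0.0f)   // front-facing: keep it
            visible.push_back(&p);
    }
    return visible;
}
```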
I am looking for a way to remove the need to iterate through all of the polygons, returning all front-facing vertices in a single operation instead. I have come up with the following incomplete solution:
Each vertex object has a pointer to the polygons it is a member of. The vertices are stored in a sorted map (angle->vertices). If we take a 2D view, you can see that if you are looking from 0deg, you can get all of the visible vertices by returning a sub-map of angles between 270-90deg.
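A sketch of that 2D version using a sorted multimap, with the wrap-around at 0/360 handled as two range scans (the structure and names are illustrative only):

```cpp
#include <map>
#include <vector>
#include <cassert>

// Vertices keyed by the angle of their normal, in degrees [0, 360).
using AngleMap = std::multimap<float, int>; // angle -> vertex index

// Viewing from viewDeg, candidate-visible vertices are those whose
// normal angle lies within +/-90 degrees of the view direction.
// The range can wrap around 0/360, so it may take two sub-map scans
// (e.g. view = 0 gives the ranges [270, 360) and [0, 90]).
std::vector<int> visibleVertices(const AngleMap& m, float viewDeg) {
    float lo = viewDeg - 90.0f, hi = viewDeg + 90.0f;
    std::vector<int> out;
    auto collect = [&](float a, float b) {
        for (auto it = m.lower_bound(a); it != m.upper_bound(b); ++it)
            out.push_back(it->second);
    };
    if (lo < 0.0f) {
        collect(lo + 360.0f, 360.0f);
        collect(0.0f, hi);
    } else if (hi >= 360.0f) {
        collect(lo, 360.0f);
        collect(0.0f, hi - 360.0f);
    } else {
        collect(lo, hi);
    }
    return out;
}
```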
For large objects, the view vector may vary too much across the object, so some level of error, and/or multiple vector maps, would be required depending on view distance.
The problem I am having is finding a data structure to store the map in 3D. Standard polar coordinates have the problem that when you look over the top of the object you can still see normals that are only just back-facing, so you can't use a simple two-pass range check.
I need an efficient way of determining the sub-map. Any ideas?
You have a creative solution, but I think in this case the standard solution is probably more appropriate - for most of the reasons that you mentioned already. This is sort of like caching the result of the dot product test, and sorting the list for random access later.
So, implementations could do just that - calculate the dot product results with a standard direction like along the z-axis. Then that could be used as your key for sorting and testing.
It could be interesting to try this out, but I think the fact that the view vector varies across one model could be an issue. You could base your submaps on the local average of the dot result, meaning you don't need additional memory - just sort the list of vertices according to the dot result.
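A rough sketch of that idea - cache the dot product against the z-axis as the sort key, then binary-search the front-facing range. Names are illustrative, and note this only answers queries for a view along the reference axis:

```cpp
#include <algorithm>
#include <vector>
#include <cassert>

struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3 normal; // averaged vertex normal, assumed unit length
    float key;   // cached dot(normal, reference axis)
};

// Cache dot(normal, +z) as the sort key. For the +z axis this is just
// the normal's z component.
void buildKeys(std::vector<Vertex>& verts) {
    for (Vertex& v : verts) v.key = v.normal.z;
    std::sort(verts.begin(), verts.end(),
              [](const Vertex& a, const Vertex& b) { return a.key < b.key; });
}

// For a viewer on the +z side looking down -z, front-facing vertices
// have normals with a positive z component, i.e. key > 0; binary-search
// the boundary instead of testing every vertex.
std::vector<const Vertex*> frontFacingAlongZ(const std::vector<Vertex>& verts) {
    auto it = std::upper_bound(verts.begin(), verts.end(), 0.0f,
                               [](float k, const Vertex& v) { return k < v.key; });
    std::vector<const Vertex*> out;
    for (; it != verts.end(); ++it) out.push_back(&*it);
    return out;
}
```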
Hope this helps!
There are some papers on hierarchical back-face culling (for example). Basically, polygons with normals within a given normal cone are grouped together, and can then be culled in chunks (by culling the normal cone).
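The per-cluster test from those papers boils down to something like this (a sketch assuming a distant viewer, so a single view direction covers the whole cluster; a perspective-correct version also needs a cone apex offset, omitted here):

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct NormalCone {
    Vec3 axis;       // average face normal of the cluster, unit length
    float halfAngle; // radians; every member normal is within this of axis
};

// viewDir: unit vector from the eye toward the cluster.
// A single face is back-facing when dot(viewDir, n) >= 0. The whole
// cluster is back-facing when even the most viewer-tilted member normal
// still satisfies that: angle(viewDir, axis) + halfAngle <= 90 degrees,
// which rearranges to dot(viewDir, axis) >= sin(halfAngle).
bool clusterIsBackfacing(const NormalCone& c, const Vec3& viewDir) {
    return dot(viewDir, c.axis) >= std::sin(c.halfAngle);
}
```

One cheap comparison then rejects the whole chunk of polygons at once.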
These generally reduce the number of dot products you have to compute, but they imply that your draw order no longer has the same spatial locality, so you can pay a significant memory-access cost - the result may actually hurt or be a wash.
I am assuming this is for a software renderer? On graphics hardware, locality becomes even more important due to the vertex caching.