Why not? Isn't this the main (or first) reason BSP was invented - to allow for fast back-to-front traversal of arbitrary geometry based on camera position, without having to sort the geometry every frame?
BSPs were useful in the days when geometry wasn’t so complex.
If your goal is accurate rendering, you will cut triangles, increasing the triangle count, and slice geometry into multiple parts, increasing the draw-call count.
Even if you build a BSP tree for static geometry, geometry these days is so heavily subdivided that you will inevitably create slivers, and you will incur so many draw calls that it simply won’t be worth it to both traverse the tree (which is in itself slower than simply running over a pre-sorted list) and draw through it.
But for static geometry, for which the BSP can be built offline, aren't they still the best choice?
I said “pre-sorted” above because any intelligently designed render-queue will take advantage of frame-to-frame temporal coherence.
From one frame to the next, objects will almost always have the same sort order.
The way you populate the render-queue is consistent from frame to frame, and since you don’t sort the actual objects in the render-queue but rather indices into it, you can keep the indices in last frame’s sorted order and run an insertion sort over the already-sorted list in O(n) time. Objects usually change order only one or two at a time between frames, so the sort has virtually nothing to do.
After that you draw items by simply running over the array.
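As a minimal sketch of that idea (the RenderItem struct and its sortKey field are hypothetical stand-ins for whatever your queue actually sorts on), an insertion sort over last frame’s index order degenerates to a single O(n) pass when almost nothing has moved:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-object sort data (e.g. depth or packed state bits).
struct RenderItem { float sortKey; };

// Re-sorts 'indices' (indices into 'items'), not the items themselves.
// Because last frame's order is nearly correct, the inner while-loop
// almost never runs, so this is effectively O(n) per frame.
void ResortIndices(const std::vector<RenderItem>& items,
                   std::vector<uint32_t>& indices) {
    for (size_t i = 1; i < indices.size(); ++i) {
        uint32_t idx = indices[i];
        float key = items[idx].sortKey;
        size_t j = i;
        while (j > 0 && items[indices[j - 1]].sortKey > key) {
            indices[j] = indices[j - 1];  // shift the rare out-of-place entry
            --j;
        }
        indices[j] = idx;
    }
}
```

The index array persists across frames; only the sort keys are refreshed, and then drawing is a flat run over the array.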
There is virtually no way a BSP tree can beat such a system. The simple act of traversing a BSP tree incurs many more draw calls, dot products, branches, and an explicit stack (to avoid recursion).
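To make that cost concrete, here is a minimal sketch of iterative back-to-front traversal (the BspNode layout is an assumption for illustration, not any particular engine’s): every node costs a plane dot product, a branch, and stack traffic before a single triangle reaches the GPU.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct BspNode {
    Vec3 normal;               // splitting-plane normal
    float d;                   // plane distance: dot(normal, p) = d
    int front, back;           // child indices into the node array, -1 = none
    int firstPoly, polyCount;  // polygons lying on this node's plane
};

// Back-to-front: classify the camera against each splitting plane,
// recurse into the far side first, emit this node's polygons, then the
// near side.  Iterative form with an explicit stack to avoid recursion.
void DrawBackToFront(const std::vector<BspNode>& nodes, int root,
                     const Vec3& cam, std::vector<int>& drawOrder) {
    struct Frame { int node; int stage; };  // stage 0: far side still pending
    std::vector<Frame> stack;
    if (root >= 0) stack.push_back({root, 0});
    while (!stack.empty()) {
        Frame f = stack.back();
        const BspNode& n = nodes[f.node];
        // One dot product + branch per visit, before anything is drawn.
        float side = n.normal.x * cam.x + n.normal.y * cam.y
                   + n.normal.z * cam.z - n.d;
        int nearChild = (side >= 0.0f) ? n.front : n.back;
        int farChild  = (side >= 0.0f) ? n.back  : n.front;
        if (f.stage == 0) {                // descend into the far side first
            stack.back().stage = 1;
            if (farChild >= 0) stack.push_back({farChild, 0});
        } else {                           // far side done: draw, then near side
            stack.pop_back();
            for (int i = 0; i < n.polyCount; ++i)
                drawOrder.push_back(n.firstPoly + i);  // stand-in for a draw call
            if (nearChild >= 0) stack.push_back({nearChild, 0});
        }
    }
}
```

Compare this per-node bookkeeping with simply iterating a pre-sorted array.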
A BSP tree is only useful if you need accurate results and performance isn’t an issue. A correct render-queue will always be faster, but there can be artifacts. We are talking about a first-person-shooter here, not a glass-house simulation, so a render-queue is the appropriate solution here.
Besides that, if we are talking about Direct3D 11 or OpenGL 4.0, compute shaders and order-independent transparency would be better than BSP trees.
I know my questions are touching on some really basic stuff here, but I can't seem to find a straightforward answer to this. Some real life example of a scene containing some 3D objects that is partitioned step by step into a BSP tree would seriously come in handy.
Look at the link I posted: http://en.wikipedia.org/wiki/Binary_space_partitioning#Generation
Notice how it divides D into D1, D2, and D3, and also divides B and C?
If you don’t understand how to render back-to-front, you probably can’t handle stable division of polygons.
Even if you can, the fact that you have to slice polygons should be a red flag, for two reasons:
#1: It isn’t numerically stable. You will create slivers, and those slivers can cause unstable slicing elsewhere, and so on.
#2: You are creating more triangles to render. This is bad. Sometimes you will have an entire draw call just to render 2 or 4 extra triangles sliced off the edge of a bigger polygon. This is the worst way to use a graphics card.
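A minimal sketch of the slicing itself (assuming, purely for illustration, a hypothetical fixed splitting plane at x = 0): clipping one triangle that crosses the plane yields a triangle on one side and a quad on the other, so one input triangle becomes three to render.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Signed distance to the illustrative splitting plane x = 0.
static float PlaneDist(const Vec3& p) { return p.x; }

// Clips a convex polygon to one side of the plane (Sutherland-Hodgman).
// keepFront = true keeps the x >= 0 side.  A result with N vertices
// fan-triangulates into N - 2 triangles.
std::vector<Vec3> ClipToSide(const std::vector<Vec3>& poly, bool keepFront) {
    std::vector<Vec3> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Vec3& a = poly[i];
        const Vec3& b = poly[(i + 1) % poly.size()];
        float da = PlaneDist(a), db = PlaneDist(b);
        if (!keepFront) { da = -da; db = -db; }
        if (da >= 0.0f) out.push_back(a);          // keep vertices on this side
        if ((da >= 0.0f) != (db >= 0.0f)) {        // edge crosses the plane
            float t = da / (da - db);              // intersection parameter
            out.push_back({ a.x + t * (b.x - a.x),
                            a.y + t * (b.y - a.y),
                            a.z + t * (b.z - a.z) });
        }
    }
    return out;
}
```

For a triangle straddling the plane, one side comes back with 4 vertices (2 triangles) and the other with 3 (1 triangle): three triangles where there was one, and with near-degenerate input the new ones are exactly the slivers described above.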
Not only is a render-queue easier to implement, it will provide faster renders with no extra geometry.
You should be able to find resources on render-queues via Google.
L. Spiro