The future of scene management

6 comments, last by Ingenu 13 years, 1 month ago
Hi

I am new to 3D game engines and recently learned a little about the BSP+PVS technique. I am wondering: is BSP still being used in today's games, like Call of Duty 7?

I think scene management is key to a game engine. So what kind of scene management, do you think, will prevail in the future?
BSP is quite slow to generate and only good for closed areas. In the future, games will be more dynamic, so waiting a second for the tree to rebuild would be slower than rendering with a dynamic method. Super Mario had cutting-edge optimizations that are still considered advanced, because the developers didn't have much performance to waste. Today's game developers care less about custom-made optimizations and just buy a general-purpose occlusion method that is good enough.

I think that mobile device and console game programmers still use specific scene optimizations (like BSP), and that larger studios implement their own occlusion code rather than buying a pre-made package.
[font="arial, verdana, tahoma, sans-serif"]Some introduction for my opinion.
I started putting effort in game development as a Quake 1 mapper. As such, I followed pretty closely the evolution of BSP implementations dealing with Id Tech.[/font]

Long story short: I really cannot figure out how they can still be around. Granted, they are good for certain situations - certain situations - but as soon as geometric complexity goes up, they run amok.

Details follow - feel free to skip those.
BSP building will often result in geometric splitting. Increasing the vertex count can result in more splits, and each split adds extra vertices, making further splits more expensive in turn (there is a small sketch of the splitting step after these notes).
The Q1 BSP compiler was really weak. They improved it somewhat for Q2 and improved it massively for Q3 (so consider how much effort Id spent on building their BSPs nicely) - that's the whole point: the data structure itself is no guarantee of performance. Masterpieces such as the Small Pile Of Gibs Quake 2 map (often called "ascension") showed this to the masses: twice the AI count of a typical baseq2 map with perhaps 3x the geometric detail, at comparable performance.
I said that Q3A delivered a fairly better BSP. It still wasn't enough - q3map2, developed by ydnar, raised the bar considerably. It produced a smaller .bsp file containing a structure that delivered more performance, in less compile time, than the original q3map. A win-win-win scenario. Maybe you have read around that BSP is a well-understood data structure. Perhaps it is (by itself), but the evidence suggests its interactions with real-world data are not.
To make BSP scale, one must really spend some effort on building the content and the compiler. Quake BSP compilers dealt with this in two ways: better splits and more portalization (very often user-assisted). If you open a publicly available Q3 map you'll see it's full of "hint" brushes. Proper use of geometry has always been the key to helping BSP scale.
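To make the splitting cost concrete, here is a minimal sketch of what a BSP compiler does when a polygon straddles a split plane. The Vec3/Plane/Polygon types and the splitPolygon name are made up for illustration; this is not the code of any of the tools mentioned above.
[code]
// A minimal sketch of the splitting step, with simple types invented for
// illustration -- not Id's compiler code.
#include <cstddef>
#include <vector>

struct Vec3    { float x, y, z; };
struct Plane   { Vec3 n; float d; };                 // n.p + d = 0
struct Polygon { std::vector<Vec3> verts; };         // convex, counter-clockwise

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float dist(const Plane& p, const Vec3& v) { return dot(p.n, v) + p.d; }

// Clip a convex polygon against a split plane into a front and a back piece.
// Every edge that crosses the plane produces a brand-new vertex on both
// sides -- the "extra vertices" cost mentioned above.
void splitPolygon(const Polygon& poly, const Plane& plane,
                  Polygon& front, Polygon& back)
{
    const std::size_t n = poly.verts.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Vec3& a = poly.verts[i];
        const Vec3& b = poly.verts[(i + 1) % n];
        const float da = dist(plane, a);
        const float db = dist(plane, b);

        if (da >= 0.0f) front.verts.push_back(a);    // vertex on or in front of the plane
        if (da <= 0.0f) back.verts.push_back(a);     // vertex on or behind the plane

        if ((da > 0.0f && db < 0.0f) || (da < 0.0f && db > 0.0f)) {
            const float t = da / (da - db);          // where the edge crosses
            const Vec3 cut { a.x + t * (b.x - a.x),
                             a.y + t * (b.y - a.y),
                             a.z + t * (b.z - a.z) };
            front.verts.push_back(cut);
            back.verts.push_back(cut);
        }
    }
}
[/code]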

Now for BSP+PVS (as used up to Q3A; I'm not sure about Id Tech 4). PVS is by far too fine-grained for modern architectures. Perhaps it is adequate for some portables, but on discrete video cards (and even some integrated chipsets) it is simply outdated. The cost of rebuilding the indices and vertex arrays and sending them to the GPU is too great compared to just wasting some vertex/fragment processing.
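For reference, this is roughly what a Quake-style PVS query boils down to: one bit per cluster pair, which is exactly the fine granularity being criticised. The struct layout below is illustrative, not the actual Id Tech file format.
[code]
// Illustrative sketch of a decompressed, Quake-style PVS: a bit matrix with
// one row per cluster. Field names are invented, not the real format.
#include <cstdint>
#include <vector>

struct PVS {
    int numClusters = 0;
    int rowBytes    = 0;                     // (numClusters + 7) / 8
    std::vector<std::uint8_t> bits;          // numClusters rows of rowBytes each

    // Is cluster 'to' potentially visible from cluster 'from'?
    bool visible(int from, int to) const {
        const std::uint8_t* row = &bits[from * rowBytes];
        return (row[to >> 3] & (1u << (to & 7))) != 0;
    }
};

// Per frame: find the camera's cluster, test every leaf's cluster against it,
// then rebuild index/vertex lists for the survivors -- the CPU-side work the
// post argues costs more than just letting the GPU chew a few extra polygons.
[/code]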

The claim that "BSP is adequate only for closed areas" is, in my opinion, something of a myth. BSP performance in Id Tech is independent of a map's extents but heavily dependent on its complexity. Large areas tend to expose more detail, resulting in more splits and a bloated PVS; crank up the poly count in a small room and you get the same problem. In Q3A the problem was avoided by (ab?)using bezier patches, resulting in larger areas with less visible BSP-level detail.


"So what kind of scene management, do you think, will prevail in the future?"
Everything targeting today's hardware must deal with a much higher polycount. I'm betting on some sort of "volumetric" data structure which isolates polygonal complexity in each node, with an emphasis on fast (rather than accurate) rejects. It should integrate better with generic meshes from DCC tools and help deliver more work in less time. Hint brushes and portal entities, ideally, should go.

Previously "Krohm"

Fast rather than accurate is pretty much the order of the day.

Coarse front-to-back rendering to take advantage of early z-rejection, conservative occlusion systems (either via hardware occlusion systems or low-resolution rendering), and frustum culling of groups of objects via dynamic spatial partitioning structures such as quadtrees or octrees (depending on what you are doing).

Things are culled at object level; so if you have a model which is partly on screen you still draw the whole model and let the hardware do the job of deciding which polys/pixels get drawn.
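As a rough illustration of the quadtree/frustum part of that, and of culling at object granularity, here is a sketch with made-up types (QuadNode, Object, Plane2D and so on). A real implementation would be 3D and tuned; the shape of the traversal is the point.
[code]
// Sketch only: a top-down (XZ plane) quadtree, culled against a 2D frustum,
// visiting children nearest-first so drawing ends up coarsely front-to-back.
#include <algorithm>
#include <utility>
#include <vector>

struct Vec2    { float x, z; };
struct AABB2D  { Vec2 min, max; };
struct Object  { AABB2D bounds; /* mesh, material, ... */ };

struct Plane2D { float nx, nz, d; };         // nx*x + nz*z + d >= 0 means "inside"
struct Frustum { Plane2D planes[4]; };       // left, right, near, far, seen from above

// Conservative test: may keep a box that is actually outside, never the reverse.
static bool intersects(const Frustum& f, const AABB2D& b)
{
    for (const Plane2D& p : f.planes) {
        const float px = (p.nx >= 0.0f) ? b.max.x : b.min.x;   // most "inside" corner
        const float pz = (p.nz >= 0.0f) ? b.max.z : b.min.z;
        if (p.nx * px + p.nz * pz + p.d < 0.0f)
            return false;                    // even the best corner is outside
    }
    return true;
}

static float distSq(const Vec2& eye, const AABB2D& b)
{
    const float cx = 0.5f * (b.min.x + b.max.x) - eye.x;
    const float cz = 0.5f * (b.min.z + b.max.z) - eye.z;
    return cx * cx + cz * cz;
}

struct QuadNode {
    AABB2D bounds;
    std::vector<Object*> objects;            // objects stored at this cell
    QuadNode* child[4] = {};

    void cull(const Frustum& f, const Vec2& eye, std::vector<Object*>& out) const {
        if (!intersects(f, bounds)) return;  // reject the whole group at once
        // Whole objects are collected; no per-polygon work on the CPU.
        out.insert(out.end(), objects.begin(), objects.end());

        // Coarse front-to-back: recurse into the nearest children first.
        std::vector<std::pair<float, const QuadNode*>> kids;
        for (const QuadNode* c : child)
            if (c) kids.push_back({ distSq(eye, c->bounds), c });
        std::sort(kids.begin(), kids.end(),
                  [](const auto& a, const auto& b) { return a.first < b.first; });
        for (const auto& k : kids)
            k.second->cull(f, eye, out);
    }
};
[/code]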

On modern desktop hardware the more important aspect is managing your draw calls (which burn CPU time) and data changes (which burn GPU time) - one common way to tackle the draw-call side is sketched below.
There's a fairly modern scene visibility system outlined in "Rendering with Conviction" - they threw out their BSP/PVS code (which took hours of precomputation) and replaced it with a fully dynamic system that costs half a millisecond at runtime instead.
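The sketch below is one common response to the draw-call point, not something the post prescribes: sort queued draws by a state key so consecutive draws share as much state as possible. The DrawItem type, the key layout and the commented bind/draw calls are all invented for the example.
[code]
// Hypothetical sketch: sort draw items by shader/material/mesh so state
// changes are minimised before submission. Ids are assumed to be small.
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawItem {
    std::uint32_t shader;
    std::uint32_t material;
    std::uint32_t mesh;

    std::uint64_t key() const {                    // shader changes cost the most,
        return (std::uint64_t(shader)   << 40) |   // so shader gets the top bits
               (std::uint64_t(material) << 20) |
                std::uint64_t(mesh);
    }
};

void submit(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.key() < b.key(); });

    std::uint32_t boundShader = ~0u, boundMaterial = ~0u;
    for (const DrawItem& it : items) {
        if (it.shader != boundShader)     { /* bind shader here   */ boundShader   = it.shader; }
        if (it.material != boundMaterial) { /* bind material here */ boundMaterial = it.material; }
        /* issue the draw call for it.mesh here */
    }
}
[/code]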
After all these discussions, I am afraid I still don't get it.

I know the drawbacks of BSP (it only suits static, closed scenes). However, Unreal Engine is still using BSP, right?

Is there a scene-management solution that is widely believed to be better than BSP+PVS?

I just want to make sure that BSP still has a future before I dive deeper into it.
I would like to point out that Scene Management and Visibility/Culling are two different things.
The first is usually a way to uniformly manage scene entities, and it often handles hierarchical animation (typically a sword in a hand - there's a small sketch of this below).
The second is about finding the best algorithm to find out what's inside a convex area.
There are many solutions to that one, and an engine could be abstract enough to replace one by another to answer the specific needs of a game.
(An RTS game would not have the same requirements as an FPS.)
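As a bare-bones illustration of the first point (scene management as a transform hierarchy, with the sword inheriting the hand's transform), here is a sketch; the Mat4 and SceneNode types are invented for the example.
[code]
// Minimal transform hierarchy: a child node (the sword) inherits its
// parent's (the hand bone's) world transform every frame. Illustrative only.
#include <vector>

struct Mat4 {
    float m[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };   // row-major identity
};

Mat4 operator*(const Mat4& a, const Mat4& b)      // standard 4x4 multiply
{
    Mat4 r;
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a.m[row * 4 + k] * b.m[k * 4 + col];
            r.m[row * 4 + col] = s;
        }
    return r;
}

struct SceneNode {
    Mat4 local;                                   // relative to the parent
    Mat4 world;                                   // recomputed each frame
    std::vector<SceneNode*> children;

    // Propagate transforms down the tree: parent the sword node under the
    // hand bone and it follows the animation for free.
    void update(const Mat4& parentWorld) {
        world = parentWorld * local;
        for (SceneNode* c : children)
            c->update(world);
    }
};
[/code]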

Popular systems are Quadtrees, Portals/Sectors...

BSPs are not about culling anyway; they are about finding where you are in the world (see the sketch below) - it's the PVS that does the culling - and in Doom 3, the BSP is used along with sectors and portals.
(You can just download the Doom 3 demo and open the files; they are in text format. They call sectors "areas", AFAIR.)
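In code terms, "finding where you are" is just a walk down the tree. The structures below are simplified stand-ins, not the actual Doom 3 or Quake formats.
[code]
// Sketch of BSP point location: descend front/back depending on which side
// of each node's plane the point is, until a leaf (area) is reached.
// Internal nodes are assumed to always have both children.
struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };               // n.p + d = 0

struct BspNode {
    Plane    plane;
    BspNode* front     = nullptr;
    BspNode* back      = nullptr;
    int      leafIndex = -1;                     // >= 0 only on leaf nodes
};

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the index of the leaf/area containing the point.
int locate(const BspNode* node, const Vec3& p)
{
    while (node->leafIndex < 0) {                // internal node: pick a side
        const float side = dot(node->plane.n, p) + node->plane.d;
        node = (side >= 0.0f) ? node->front : node->back;
    }
    return node->leafIndex;                      // culling (PVS, portals) starts from here
}
[/code]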

Today you will most likely end up with a quadtree, occlusion culling, and maybe portals/sectors (if you mix indoor and outdoor with complex indoor areas), which is pretty much what Phantom already said.
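For the portals/sectors part of that mix, the traversal is essentially a flood fill from the camera's sector. The types below are invented, and the visibility test through each opening is passed in as a callback so the sketch stays self-contained; real engines also shrink the frustum through each portal, which is omitted here.
[code]
// Reduced portal/sector visibility sketch. Whether a portal opening is still
// visible is delegated to a caller-supplied test.
#include <functional>
#include <vector>

struct Sector;
struct Portal { Sector* target = nullptr; /* plus an opening polygon in practice */ };

struct Sector {
    std::vector<Portal> portals;
    bool visited = false;                        // reset before each frame
};

void markVisible(Sector& s,
                 const std::function<bool(const Portal&)>& canSeeThrough,
                 std::vector<Sector*>& visible)
{
    if (s.visited) return;                       // each sector handled once
    s.visited = true;
    visible.push_back(&s);
    for (const Portal& p : s.portals)
        if (p.target && canSeeThrough(p))        // opening still inside the view?
            markVisible(*p.target, canSeeThrough, visible);
}
[/code]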
-* So many things to do, so little time to spend. *-

