I'm trying to come up with a way of spatially describing objects in an environment. I've read a lot of articles and discussions on scene graphs and various methods of space-partitioning (yes, I understand the difference between the two) but I still don't feel as though I'm any closer to deciding on a good method.
I've got a very large game world (3D, but mostly spread out horizontally and predominantly outdoors) which I'm going to be stream-loading content into. There's a large range of sizes an object can be, everything from a tiny projectile up to huge buildings (some smaller buildings also need to be easily added to and removed from the world at times), and most objects are able to move in some way. I was considering a quadtree/octree approach, but there are a few things putting me off that idea.
Firstly, if the game world is huge, it's presumably highly impractical to have a root node that's the size of the world. It would have to be split many, many times before reaching a node that encompasses just the currently loaded areas. One solution I suppose would be to divide the world up into a grid of potential large nodes, each one aware of its neighbours, which can be dynamically grouped together to form the root node, or just be able to re-parent objects from one large node to another as they cross the boundaries. This seems overly complicated though, and makes it harder to simply find out where an object is. I'm also now of the impression that quadtree/octree designs aren't very good at tracking moving objects: I'm going to be rebuilding sections of the tree every frame when projectiles are in flight, for example. With a large number of moving objects, it seems inefficient to be constantly rebuilding the structure.
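To make the grid-of-large-nodes idea concrete, here's roughly what I had in mind (a minimal sketch; the names and cell size are all made up): loaded regions live in a hash map keyed by integer cell coordinates, so there's never a world-sized root node, finding the cell for a position is O(1), and objects are only re-parented when they actually cross a border.

```python
CELL_SIZE = 512.0  # assumed width of one top-level cell, in world units

class Cell:
    def __init__(self, coord):
        self.coord = coord      # (cx, cy) integer grid coordinate
        self.objects = set()    # could instead hold a per-cell octree

class SparseWorldGrid:
    def __init__(self):
        self.cells = {}  # {(cx, cy): Cell} - only loaded cells exist

    def cell_coord(self, x, y):
        return (int(x // CELL_SIZE), int(y // CELL_SIZE))

    def insert(self, obj, x, y):
        coord = self.cell_coord(x, y)
        self.cells.setdefault(coord, Cell(coord)).objects.add(obj)
        return coord

    def move(self, obj, old_coord, x, y):
        """Re-parent an object only when it actually crosses a cell border."""
        new_coord = self.cell_coord(x, y)
        if new_coord != old_coord:
            self.cells[old_coord].objects.discard(obj)
            self.cells.setdefault(new_coord, Cell(new_coord)).objects.add(obj)
        return new_coord
```

That handles "where is this object" cheaply, but it still doesn't answer the moving-object or huge-object problems on its own.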
Part of the problem is that I'm not entirely sure what the responsibilities of the structure should be. Everything I read seems to describe a different set of problems being crammed into apparently similar designs. Using it to rapidly cull objects outside the field of view is an obvious use; is there a method that can easily incorporate occlusion culling as well? And if I'm representing the positions and rough sizes of objects in the world, should I expand it to assist in broad-phase collision detection too?
First of all, it sounds like you need to more clearly define the problem you want to actually solve. That's going to have a huge impact on the techniques you use to solve it. Identify the requirements first and then pick a solution - don't try to pick a solution without knowing precisely what you really need.
Secondly, not all objects have to be in the same partitioning structure. In fact it's very common for static geometry and moving/dynamic geometry to use different schemes entirely.
Ok, well I've written a list of things which need to be handled in some way. What I need some help with is determining which of these can be handled together, and which should probably go into separate structures.
Large terrain with LODs (viewed in the distance and up close)
Static objects (trees/buildings etc.)
Short-lived dynamic objects (particles / debris)
Long life-time dynamic objects (mostly characters)
Broad-phase to generate dynamic/static and dynamic/dynamic collision pairs
Up to this point I've been organising the world in a simple uniform grid (where each cell is like a complete chunk of the world to be loaded), but that has a lot of limitations and it's starting to break down rapidly. It's quite a handy way of precisely locating objects in a very large world, but it doesn't play nicely with massive objects or objects that cross borders. Also, since each chunk was storing its own terrain data, rendering distant terrain is much, much slower than it needs to be.
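For reference, the border-crossing workaround I've been experimenting with (a rough sketch; cell size and names are assumed) is to insert each object's AABB into every cell it overlaps and deduplicate on query. It works, but you can see why massive objects are a problem: they end up registered in a lot of cells.

```python
CELL = 64.0  # assumed cell size in world units

class UniformGrid:
    def __init__(self):
        self.cells = {}  # {(cx, cy): set of objects}

    def _cells_for_aabb(self, min_x, min_y, max_x, max_y):
        # Yield every cell coordinate the box overlaps.
        for cx in range(int(min_x // CELL), int(max_x // CELL) + 1):
            for cy in range(int(min_y // CELL), int(max_y // CELL) + 1):
                yield (cx, cy)

    def insert(self, obj, aabb):
        for c in self._cells_for_aabb(*aabb):
            self.cells.setdefault(c, set()).add(obj)

    def query(self, aabb):
        # A set deduplicates objects found via more than one cell.
        found = set()
        for c in self._cells_for_aabb(*aabb):
            found |= self.cells.get(c, set())
        return found
```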
This is totally off the cuff and not at all comprehensive, but here are some thoughts to get you started:
- Find a good terrain LOD algorithm and implement it for terrain. I used to know a couple but that was many years ago and I'm sure there are better techniques available these days.
- Examine the feasibility of a kd-tree for things like static objects
- Use a bounding volume hierarchy for groups of short-lived objects
- Look into the way physics engines handle broadphase culling to get ideas for handling long-lived dynamic objects
- Keep physics separate from rendering where possible; they are very different in terms of optimal solutions for their respective concerns
- Lighting is going to depend a lot on your rendering architecture; and I'm out of date on that front :-)
Also, keep in mind that you can use hierarchical partitioning methods: e.g. store huge chunks of your world in a regular grid, and have a terrain LOD management system that renders across those grid cells. Within the "Active" grid cell (i.e. where a player is) you can load up the more precise partitioning structures for the content of that region, and so on.
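To illustrate the physics-engine broad-phase point from the list above: most engines use some variant of sweep-and-prune, where you sort objects by their minimum along one axis and then only test pairs whose intervals overlap on that axis. This is a rough one-axis sketch, not any particular engine's API:

```python
def broadphase_pairs(boxes):
    """boxes: {name: (min_x, max_x, min_y, max_y)} -> candidate overlap pairs."""
    # Sort by min_x so each box only needs to look ahead while x-intervals overlap.
    order = sorted(boxes.items(), key=lambda kv: kv[1][0])
    pairs = []
    for i, (a, (a_min_x, a_max_x, a_min_y, a_max_y)) in enumerate(order):
        for b, (b_min_x, b_max_x, b_min_y, b_max_y) in order[i + 1:]:
            if b_min_x > a_max_x:
                break  # sorted by min_x: no later box can overlap a on x
            if a_min_y <= b_max_y and b_min_y <= a_max_y:
                pairs.append((a, b))
    return pairs
```

The sort is cheap to maintain frame-to-frame because moving objects barely change order between frames, which is exactly why it suits long-lived dynamic objects.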
Ok, thanks, that gives me a decent starting point. I'm still not entirely clear on the reasons for dividing up static and dynamic objects. Is it just because a kd-tree can provide a better fit to the objects, or something?
Suppose you have 300 static objects and 5 that are moving. If you keep them all in the same hierarchy, you have to rebalance and potentially rebuild the entire partitioning set for 305 objects every frame. If you separate them into different partitioning schemes, you only recreate the set for 5 objects a frame, and the other data remains consistent and stable.
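As a toy illustration of that (all names assumed, with a sorted list standing in for a real kd-tree): the static index is built once at load time, while the per-frame work only ever touches the small dynamic set. Queries just check both.

```python
import bisect

class StaticIndex:
    """Built once at load time - think kd-tree; a sorted 1D list stands in here."""
    def __init__(self, objects):
        self.objects = sorted(objects, key=lambda o: o[1])  # (name, x) pairs
        self.xs = [x for _, x in self.objects]

    def query(self, lo, hi):
        i = bisect.bisect_left(self.xs, lo)
        j = bisect.bisect_right(self.xs, hi)
        return [name for name, _ in self.objects[i:j]]

class DynamicSet:
    """Refreshed each frame - only the 5 movers, not all 305 objects."""
    def __init__(self):
        self.objects = []  # (name, x), rebuilt per frame

    def rebuild(self, objects):
        self.objects = list(objects)

    def query(self, lo, hi):
        return [name for name, x in self.objects if lo <= x <= hi]
```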