TheChubu

Are graphs a good way to manage big terrain scenes?


Hi! Another post reminded me of the NIF file format, which, as far as I have read, is a file that directly represents a node in the engine's scene graph.

 

I thought it was a very nice idea: build a scene graph, serialize the nodes, and get your own file format for "free". Add a fast compression/decompression step if the data gets too big.

 

But it suddenly seemed less simple once I started to take into account all the different things the nodes could hold, how to traverse the graph efficiently, the kinds of relations the nodes would have, and so on.

 

So my questions are: is there a better way to manage scenes? How do you deal with the different kinds of nodes you might need? How would an object in charge of drawing the scene approach the graph? What issues should I expect from using a graph to manage a scene? Could I use a "simpler" structure like a tree instead of a general graph? (It sounds easier to manage in my head.)

 

If you need more context: my idea is to generate a big heightmap from noise (and hopefully some hydraulic erosion algorithm), subdivide it into smaller tiles, and use those tiles as my current scenes (say, generate an 8k by 8k heightmap, subdivide it into 256x256 tiles, and store each tile as a scene graph/tree). But that's just an idea in the air; I don't have anything beyond the base heightmap generation.
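As a rough sketch of that tiling step (assuming a square heightmap held as a list of rows; the function name and tile layout are just illustrative, not from any particular engine):

```python
def split_into_tiles(heightmap, tile_size):
    # Split a square heightmap into non-overlapping square tiles.
    # Returns a dict mapping (tile_x, tile_y) -> tile (list of rows),
    # which could then be serialized one tile per node/file.
    size = len(heightmap)
    assert size % tile_size == 0, "tile size must divide the map size"
    tiles = {}
    for ty in range(size // tile_size):
        for tx in range(size // tile_size):
            tile = [row[tx * tile_size:(tx + 1) * tile_size]
                    for row in heightmap[ty * tile_size:(ty + 1) * tile_size]]
            tiles[(tx, ty)] = tile
    return tiles

# an 8k x 8k map with 256 x 256 tiles would give 32 x 32 = 1024 tiles
```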


Just to nitpick a little - trees are graphs; directed acyclic graphs, to be specific. Regarding your question, it is probably not a good idea to stuff every kind of information you need into a single data structure. Rendering requires a different graph (tree) structure and data to be efficient than pathfinding or collision detection does. Your initial scene graph may contain all the required data, but for real-time use it's best to split that data up into separate layers (render nodes, pathfinding nodes, collision information, objects, etc.) and data structures that are optimised for their respective usage.
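A minimal sketch of that split, with hypothetical names: one authoring-time object fans out into separate per-system lists, each carrying only what its consumer actually reads:

```python
class SceneObject:
    # authoring-time node: carries everything, used offline/at load time
    def __init__(self, name, position, mesh=None, collider=None):
        self.name, self.position = name, position
        self.mesh, self.collider = mesh, collider

def build_runtime_layers(objects):
    # fan the all-in-one scene out into per-system structures;
    # objects without a mesh never reach the renderer, objects
    # without a collider never reach collision detection
    render = [(o.name, o.mesh, o.position) for o in objects if o.mesh]
    collision = [(o.name, o.collider, o.position) for o in objects if o.collider]
    return {"render": render, "collision": collision}
```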

 

Yet another thing to consider is how your data will be used in general. A streaming file format and data structure has different requirements from a static graph that is loaded once (like using zones for larger patches of terrain). A quadtree (http://en.wikipedia.org/wiki/Quadtree) might provide a relatively simple way of subdividing larger scenes into smaller parts. It can be pre-calculated in an offline step, so your heightmap can initially be quite large. By storing the tree in a linear fashion in a file and keeping an index into each level of the tree, you can then easily locate and deserialise the data you need at runtime.
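For the linear storage, one common scheme (a sketch, assuming a complete quadtree stored breadth-first with cells in row-major order per level) computes a node's position in the file directly from its level and cell coordinates:

```python
def level_offset(level):
    # number of nodes before this level in a complete quadtree
    # stored breadth-first: 1 + 4 + 16 + ... = (4^level - 1) / 3
    return (4 ** level - 1) // 3

def node_index(level, x, y):
    # linear index of cell (x, y) within its level (row-major)
    side = 2 ** level            # the level k grid is 2^k cells per side
    assert 0 <= x < side and 0 <= y < side
    return level_offset(level) + y * side + x
```

Multiply the index by a fixed node record size (or look it up in a small offset table) and you can seek straight to any tile without touching its ancestors.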

 

HTH


I should have mentioned that each 256x256 tile represents a leaf node of a quadtree :D

 

The quadtree would hold the terrain (and its different LOD levels); a leaf node, at the highest detail level, would represent my current scene. That scene would hold a graph/tree with more specific things like objects, lights, and so on.

 

A quadtree was the first option, but I have looked around at other terrain structures too.

 

Anyway, you're right, I hadn't considered the other kinds of things I'd need to do with the scene besides rendering. Having all systems (collision, AI, etc., like you mention) access the same structure wouldn't be a good idea, it seems.


I'm not personally all that familiar with NIF.

 

 

IME:

graph/tree structures seem best suited to a certain size range, with an upper limit of roughly "the amount of stuff that may be present in the visible scene".

 

much past this point, a uniform grid seems like a better solution: above it, the trees offer no obvious advantage, just a lot more management overhead.

 

whereas a grid is much simpler, and makes it easier to load/unload parts of a much bigger world.

 

 

for example, my engine uses a uniform grid for everything much past around 0.5 km ("regions"), but may use BSP-like trees for organizing stuff within these regions. (Note: regions also contain things like voxel terrain, which is basically organized as a uniform 3D grid of "chunks", each of which is a uniform 3D grid of voxels, with each chunk possibly having associated mesh geometry.)

 

granted, this is for a first-person vantage point, and currently the player can only see about 256 meters in any direction, meaning that usually at most 4 regions are visible to the player at any given time. Different gameplay styles may differ.
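The region lookup for that kind of grid can be sketched like this (using the ~0.5 km regions and 256 m view distance mentioned above; names are illustrative, not from the engine in question):

```python
import math

REGION_SIZE = 512.0  # metres; the ~0.5 km regions described above

def region_of(x, z):
    # grid cell containing a world-space position
    # (floor, not int(), so negative coordinates land in the right cell)
    return (math.floor(x / REGION_SIZE), math.floor(z / REGION_SIZE))

def regions_in_range(x, z, view_dist):
    # all regions touched by a square window around the viewer;
    # with view_dist = 256 the window is one region wide, so this
    # is at most a 2x2 block of regions to keep loaded
    x0, z0 = region_of(x - view_dist, z - view_dist)
    x1, z1 = region_of(x + view_dist, z + view_dist)
    return {(rx, rz) for rx in range(x0, x1 + 1)
                     for rz in range(z0, z1 + 1)}
```

Loading/unloading then becomes a set difference between the previous frame's region set and the current one.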

 

if I were doing it now, I might consider smaller regions, namely 0.25 km, mostly so that regions could cleanly be perfect cubes: a 16x16x16 grid of chunks, each chunk 16x16x16 voxels, giving 256x256x256 voxels (metres) per region. The change itself would be trivial, but it would break compatibility with my existing region files, essentially requiring a full reset of my test world. (Easier would be 32x32x32, but that would waste a lot more memory, which is already a tight resource for a currently 32-bit engine.)

 

then again, I've also had idle thoughts about whether there should be a "metric cubit" of 0.5 m, ...

 

 

otherwise:

actually, in my case there is a client/server split, and each end has its own copy of the scene.

 

server-side scene:

    voxel terrain (persistent / canonical);

    entities (used for game logic, AI, physics, ...).

 

client-side scene:

    more voxel terrain (currently non-persistent, streamed from the server);

    "client entities", which mostly just hold information about origin, rotation, model-name, ...

        these are streamed by the server and represent a small subset of the entity fields.

    static light sources (streamed by the server);

    ...

 

note that dynamic light sources exist, but are typically generated directly from client entities based mostly on effect flags (the flags mostly indicate the colour and intensity of an emitted dynamic light source, as well as other effects, like whether it leaves a particle trail, ...).

 

the client code basically has its own view of the scene, but tells the renderer about whatever seems relevant rendering-wise (managing things like light sources and particle emission, ...). The split exists partly because the client has a "high-level" view of the scene (models represented via names, effects via flags, ...), while the renderer is concerned with more concrete representations (like being handed renderable models).

 

the renderer has its own view of the scene:

    "modelstates", basically, representing instances of potentially visible models (as per the client code);

        they may also hold some amount of "state" for the models, like their current VBOs, bone positions and calculated vertex positions, ...

        position is represented via a transformation matrix, ...

    light sources (static lights, dynamic lights, ...);

    ...

 

 

some of this is because each part of the process needs different information; for example, stuff relevant to AI or physics generally isn't nearly as relevant for rendering, ...

Edited by cr88192


Quadtree - do you have some LOD mechanism that makes many levels of the quadtree necessary?

 

Would a 2-level system be simpler - an array of zones (file indexes, when it all doesn't fit in memory) and the zone nodes themselves?

 

The array of indexes (+pointers) loads once and stays in memory, and zones are loaded as needed (a moving window).

 

You can go straight to a zone's data without walking down a tree constantly, and adjacencies in the array are just increments...
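A sketch of that two-level layout (hypothetical names; the index array stays resident while zone data comes and goes):

```python
class ZoneGrid:
    # two-level scheme: a small, always-resident array of zone slots
    # (e.g. file offsets), plus a moving window of loaded zones
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.offsets = [None] * (width * height)  # one slot per zone, resident
        self.loaded = {}                          # zone index -> zone data

    def index(self, zx, zy):
        # straight to the zone slot, no tree walk
        return zy * self.width + zx

    def neighbours(self, zx, zy):
        # adjacency is just +-1 on each axis of the array
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = zx + dx, zy + dy
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield self.index(nx, ny)
```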


Lots of good information! Thanks!

 

Sorry I didn't answer before, I got sidetracked by this paper: http://www.ai.univ-paris8.fr/~amsi/articles/fbelhadj_200512.pdf It's a pretty interesting way to construct fractal-ish terrain out of pre-generated rivers and ridges. Sadly, I don't know the math being used (I saw Gaussian distributions in a statistics course, but I failed it >.< ), so I was reading up on it.

 

Anyway, the idea is to have a quadtree-based LOD system, though I'd need to try it first to see how many detail levels I need (maybe 3 is enough). Anything that isn't a leaf would be considered "dumb" terrain, i.e. no physics, AI, or things moving around in it. Leaf nodes loaded close to the player, on the other hand, would also link to their respective "smarter" contents.

 

I've read up on various methods of dynamically creating different LODs for terrain, and I have some papers around about scaling down RTINs (right-angled triangular irregular networks) with quadtrees and binary trees. (Others, like ROAM 2.0, are too complicated for me to understand and use several data structures for planetary-scale rendering, which is overkill for what I want to do.)

 

 

That is what I wanted to use. Anyway, now I see it may be better to separate it into two structures: whatever I use for LODding the terrain (a quadtree, for now), and another structure holding per-scene data, probably a grid like cr88192 said. So I "ask" the terrain structure how to render the terrain given the current player position, and I "ask" the scene structure what else has to be rendered given that position.
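That per-frame flow might look something like this (the two classes here are stand-ins with canned data; a real version would walk the quadtree and look up grid cells around the player):

```python
class TerrainLOD:
    # stand-in: a real version would walk the quadtree, picking a
    # detail level per patch based on distance to the player
    def select_patches(self, player_pos):
        return ["patch_high_detail", "patch_low_detail"]

class SceneGrid:
    # stand-in: a real version would gather objects from the
    # grid cells surrounding player_pos
    def visible_objects(self, player_pos):
        return ["tree_01", "lamp_03"]

def gather_frame(terrain, scene, player_pos):
    # ask each structure the question it is optimised for;
    # neither needs to know how the other stores its data
    return (list(terrain.select_patches(player_pos))
            + list(scene.visible_objects(player_pos)))
```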

