#### Archived

This topic is now archived and is closed to further replies.

# Terrain and Scene Graphs


## Recommended Posts

The more I read the more questions I have. (But I’ll try not to spam the forum with all my questions ) From another thread on this site, I just found the programmer’s guide to the OpenGL Performer engine at the SGI site. It’s very interesting reading. You get a good idea of how their scene graph works. I noted that they incorporate terrain as part of the normal graph. It’s not a separate thing. I haven’t read the chapter about ASD in detail yet so I don’t know how it works. But one question I had right away was: do most people treat the terrain as a separate thing? Or do they add it as part of the regular scene graph the way Performer does? I assumed it was always separate until I read this. The trouble I have is that every time I read something new, I start rethinking the code again. At some point you have to settle on what you’re gonna do I guess.

##### Share on other sites
OK, I have to admit that I never worked with Performer, and have no idea how it handles terrains. Nevertheless, here are some thoughts about terrain integration in general.

First of all, the terrain is a 3D object like any other in the scene. Sure, some terrains might be rendered using different methods, but it is still a 3D entity, and it should be treated as such when inserting it into a scene graph.

Now, I don't know how you set up your scene graph, but let's assume the classical OO approach: you have a generic 3D object/instance class, and you derive specific objects from it. Those nodes are then organized into a hierarchical tree. The tree is traversed, and each node is rendered/manipulated depending on the type of the geometry it contains (how you do that, e.g. by an explicit ID or by using RTTI, is not really important for the general concept).

So a terrain would just be another type of object derived from your base instance class. That way, it smoothly integrates into your scenegraph. You can insert it, remove it, etc, without worries.

Let's say you have a base class named INSTANCE. Derived from that are e.g. BASIC_OBJECT, CHARACTER, etc., and finally TERRAIN. The exact details of the TERRAIN class ultimately depend on how you represent and render your terrain. Say you have a virtual member function called 'render' that each derived class overrides with the appropriate version for the 3D object type it represents.

In the case of a heightmap terrain, your TERRAIN render function will be some kind of tessellator (which transforms the heightmap into a mesh, perhaps copying it into a vertex array/buffer); once the mesh is created, it will simply be treated as a normal 3D mesh (and rendered that way).
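
As a rough illustration of that design, here is a minimal C++ sketch. The class names INSTANCE and TERRAIN follow the post above; the Vertex layout, the unit grid spacing, and the member names are invented for illustration, not taken from any real engine.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Vertex { float x, y, z; };

class INSTANCE {
public:
    virtual ~INSTANCE() {}
    virtual void Render() = 0;  // every derived type supplies its own version
};

class TERRAIN : public INSTANCE {
public:
    TERRAIN(std::vector<float> heights, std::size_t width)
        : heightmap(std::move(heights)), width(width) {}

    // First tessellate the heightmap into a mesh; after that the mesh could
    // be drawn like any other 3D object (the actual draw call is omitted).
    void Render() override {
        if (mesh.empty())
            Tessellate();
    }

    std::size_t VertexCount() const { return mesh.size(); }

private:
    void Tessellate() {
        // One vertex per heightmap sample, assuming a grid spacing of 1 unit.
        const std::size_t depth = heightmap.size() / width;
        for (std::size_t z = 0; z < depth; ++z)
            for (std::size_t x = 0; x < width; ++x)
                mesh.push_back({ float(x), heightmap[z * width + x], float(z) });
    }

    std::vector<float> heightmap;
    std::size_t width;
    std::vector<Vertex> mesh;
};
```

Calling Render() through an INSTANCE pointer works the same for a terrain as for any other object; only the derived class knows about the heightmap.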

You could of course treat your terrain as a totally separate entity, and not integrate it into the scenegraph at all. But this is not a very elegant approach: you'd need special-case code all over your engine to handle the separate terrain entity. Imagine some shadow projection code that operates on all 3D objects and the terrain. If your terrain is a separate entity, you'd need two separate shadow functions (or a single one full of ugly if/else statements).

But if your terrain is included in the scenegraph (as a derived class), you have one shadow function that handles everything. The same goes for e.g. HSR or collision detection. That can save you a lot of redundant code. And even more important: it will make your hierarchical scene management a lot simpler, cleaner and easier to understand. This will come in handy later, when you want to include new object types, perhaps some special effect or animation.

Conclusion of all that? Treat your terrain just like any other entity or object in your game, from a high-level scene management point of view. Of course, the lower-level rendering might differ (perhaps even significantly, e.g. in the case of ROAM), but a scenegraph does not deal with low-level details. It's a high-level, standardized representation of your scene. And that should include every single object (even light sources, cameras, etc.).

Of course, that's only my personal opinion; other people might disagree.

/ Yann

##### Share on other sites
Thanks for the thorough answer, Yann.
I guess the reason I keep thinking about terrain as being a separate thing is that there are some characteristics unique to it, so there could be the opportunity for applying a heuristic here or there. For example, just using a heightmap instead of defining all points with x, y, and z coordinates is a heuristic based on the fact that terrain is generally flat. Terrain might also be treated differently because you might want to have LOD, and in that case you need it to blend smoothly from one LOD to another. This is unlike other objects, which are usually at either one LOD or another. So because of unique characteristics like these (and maybe because people refer to “terrain engines”), I thought some people might deal with terrain by itself, perhaps for performance reasons.

But your points are good. Treating the terrain as another object is certainly more elegant. I can see that, if you are going to treat terrain rendering differently, it would be nicer to do it at some lower level. I was sort of aware of that, actually. I guess I was just wondering if it was common to deal with terrain separately for performance reasons.

Jim

##### Share on other sites
quote:

But your points are good. Treating the terrain as another object is certainly more elegant. I can see that, if you are going to treat terrain rendering differently, it would be nicer to do it at some lower level. I was sort of aware of that, actually. I guess I was just wondering if it was common to deal with terrain separately for performance reasons.

Not really performance reasons. No engine will have a bottleneck in its high-level object representation (and if it does, there is a serious problem). The bottlenecks are always in the low-level rendering functions. That's where tessellation, polycounts, etc. become an issue. But the scenegraph is just an abstract representation of your scene; it's nothing performance-critical.

quote:

I guess the reason I get thinking about terrain as being a separate thing is because there are some characteristics that are unique to it and so there could be the opportunity for applying a heuristic here or there

Yes, of course. But the same thing applies to e.g. particle systems, billboards, procedural objects, water surfaces, etc. You'd need to deal with a whole bunch of separate entities, each one with its unique properties. You'd need separate lighting, collision detection and HSR code for every object type you have in your game. What a mess.

That's why the concept of the scenegraph was created. It is simply a way to bring all those different object types together in an abstract data structure. All those objects get a unified access interface, and at the level of abstraction of a scenegraph, they are all the same (they are all 3D entities of some sort).

Keep in mind that you don't need to use a scenegraph for performance-critical operations, such as culling of static geometry. This is often done using an octree (or similar), a hierarchical structure that is independent of your scenegraph. A scenegraph is primarily interesting for dynamic and/or moving objects. If you don't want to explicitly include every static object of your scene (incl. terrain), you could just define a new abstract 3D type: octree. And you insert the root node of that octree into the scenegraph, which will then represent a whole collection of static objects in a single node.

That's what a scenegraph is all about: it's a standard way of organizing your scene into nodes, giving them a unified access interface. That way, even totally unrelated entities (heightmap terrain, particle systems, volume fog boxes, light sources, etc.) can be accessed through common interfaces. This makes your engine easier to understand, maintain and expand.

/ Yann

##### Share on other sites
I’d like to resurrect this thread because I have more questions. I’m still trying to get a handle on how a scene graph is organized and how a terrain might fit into it. In general, objects in a scene graph are organized according to what criteria? Their location in the scene? What processes operate on the scene graph? Culling? Collision detection?

quote:
Keep in mind that you don't need to use a scenegraph for performance critical operations, such as culling of static geometry. This is often done using an octtree (or similar), a hierarchical structure that is independent of your scenegraph.

Well, that was my whole point. If you’re going to put static objects in one tree and dynamic objects in another, then where is the unified access? How do you do collision detection on static and dynamic objects if they are in different trees? Do you check through one tree first then the other? Same with shadows?

I was looking at Open Scene Graph, which is nice because it’s open source. In the FAQ documentation they refer you to an implementation of using terrain with OSG. It’s called Demeter. I skimmed through the code a bit and realized that Demeter doesn’t use OSG; rather, it is a self-contained terrain engine that makes itself look like a standard OSG node. Therefore, you can create a Demeter terrain and insert it into an OSG graph as one node in the graph.

This is something I don’t understand yet. As an OSG node, the terrain node must provide a bounding box. In this case Demeter’s single bounding box includes the entire terrain! The box extends from the terrain’s lowest vertex to its highest. What is the point in that? You cannot do culling, since the entire terrain would either be culled or not culled. You cannot do collision detection, because you would always be colliding with the terrain’s bounding box. What is the purpose of putting a terrain into a scene graph as a single node? You might just as well treat it separately.

It seems to me that the way things are organized in a scene graph depends on what operations you are trying to do on the scene graph. I assumed that you use it for culling and collision detection. In that case, you want to arrange things spatially. You want to be able to cull entire sections of the scene at once. You want to be able to only check for collisions with other objects that are in your nearby area. So you want a spatial arrangement. This would make a scene graph like a quadtree or an octree. But in the implementations of scene graphs I’ve seen so far, I haven’t noticed any provision for arranging things this way. I didn’t see any “group node” that says “all objects in this branch are within coordinates x1, y1 and x2, y2”. So I’m not sure how things are meant to be arranged. (Sorry, I don’t know what HSR stands for.)

If you really want terrain to be abstracted like all other objects, shouldn’t the terrain “patches” or blocks be individually inserted into the scene graph as nodes?

Can you explain this a bit more?

[edited by - JimH on September 19, 2002 4:37:27 PM]

##### Share on other sites
Hmm, the whole thing is not so easy to explain. OK, generally: if you have a spatial tree, then you don't have a scenegraph. They are two different things, although they can be combined (with various tradeoffs).

A scenegraph, by definition, represents the hierarchical relationship between your scene objects. It is not necessarily spatially clustered. This, of course, is primarily interesting for dynamic objects. Imagine a skeletal bone structure used to animate a human model: a scenegraph is similar, but extends the concept to your whole 3D world.

The main focus of a scenegraph is not performance gain (e.g. by spatial clustering for optimized HSR (= hidden surface removal, a general term for all forms of culling)). Other, better-suited structures are used for that (octrees, BSPs, kd-trees, etc.). The primary idea of a scenegraph is high-level organization. It gives all object types, even if they are totally different, a unified access mechanism.

Imagine the following scenario: the player in your game picks up a torch and starts carrying it around. What parts of your scene are affected internally? First, the 3D torch object must be hierarchically linked to the player's hand. Then, the fire effect (perhaps a volume renderer) needs to be attached to the torch. The particle effect for the smoke needs to be attached to the fire effect. And last but not least (since we're talking about a kickass, Doom3-killing engine with full dynamic per-pixel lighting and shadows) we also need to attach a light source and a shadow-map projection system to the torch.

In this example, six different object types are interacting: a character model, a solid 3D model (the torch), a volume renderer (the flame), a particle system (the smoke), a light source and a shadow projector. If all those entities used separate representations, can you imagine the chaos and mess it would be to link them together? But with a scenegraph it is amazingly simple: all six object types have the same base class, you change a few hierarchical links in the graph, and you are done.
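
The "change a few hierarchical links" step can be sketched in a few lines of C++. This is a hedged illustration only: the SGNode layout and the Reparent helper are invented, not taken from any particular engine.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// A node knows its parent and children; re-parenting a subtree moves the
// whole hierarchy below it (torch, flame, smoke, light, ...) in one step.
struct SGNode {
    std::string name;
    SGNode* parent = nullptr;
    std::vector<SGNode*> children;
};

void Reparent(SGNode* node, SGNode* newParent) {
    if (node->parent) {
        // Unlink from the old parent's child list.
        auto& siblings = node->parent->children;
        siblings.erase(std::find(siblings.begin(), siblings.end(), node));
    }
    node->parent = newParent;
    newParent->children.push_back(node);  // link under the new parent
}
```

Because the flame, smoke, light source and shadow projector hang below the torch node, moving the torch under the player's hand drags all of them along automatically.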

Now, the question remains how to combine this graph with efficient acceleration structures (for rendering, collision, etc.). Each type of object might use a different approach; e.g. an octree is good for a complex static 3D model, but totally useless on a particle system. But the interesting point is the unified access to those representations: the messy details are isolated in the derived object classes, and from a high-level point of view, you always see the same clean interface.

Example: Our player (with his torch) is moving through a village, which is on a terrain. Here is the scenegraph:
```
ROOT
|
|--- terrain node (geomipmapped quadtree)
|
|--- geometry of the village (static, octree)
|
|--- Player (deformable skinned mesh)
|      |
|      |--- Torch (solid 3D object)
|             |
|             |--- Flame (volume renderer)
|             |      |
|             |      |--- Smoke (particle system)
|             |
|             |--- torch Lightsource
|             |
|             |--- torch shadow projector
|
|--- Big bad guy behind the player (skinned mesh)
       |
       |--- Big battle axe in the hand of the bad guy
              |
              |--- Dripping blood from the axe (particle system)
```

OK, you get the idea.

When it comes to rendering, you start at the root node and follow the hierarchy (applying appropriate matrix concatenation, state changes, etc.; this can be included in the base class and will be common to all object types). At each node, you call the render method: CurrentNode->Render(). It will always be the same, regardless of the object type (unified interface; every type overrides the virtual method to hide the gory details). This method will then perform culling on the object (i.e. traverse the quadtree for the terrain, apply LODs, go through the octree for the static village, etc.). You have similar methods for collision detection, lighting, shadowing, dynamics, etc., e.g. CurrentNode->isColliding(), CurrentNode->ApplyDynamicForce(...blah...). Every object type shares this common interface.
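
A skeletal version of this traversal in C++ might look as follows. The Node/TerrainNode classes and the string log (standing in for actual draw calls) are invented for illustration; the point is only the unified virtual Render() interface.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

class Node {
public:
    explicit Node(std::string n) : name(std::move(n)) {}
    virtual ~Node() {}

    // Every derived type overrides this to hide its gory details,
    // but the call site stays the same: node->Render(log).
    virtual void Render(std::vector<std::string>& log) {
        log.push_back(name);      // stand-in for matrix setup + draw call
        for (auto& child : children)
            child->Render(log);   // recurse down the hierarchy
    }

    Node* Attach(std::unique_ptr<Node> child) {
        children.push_back(std::move(child));
        return children.back().get();
    }

    std::string name;
protected:
    std::vector<std::unique_ptr<Node>> children;
};

// A terrain node would do its own culling (e.g. walk a quadtree) before
// emitting geometry; here that is only reflected in the log entry.
class TerrainNode : public Node {
public:
    using Node::Node;
    void Render(std::vector<std::string>& log) override {
        log.push_back(name + " [quadtree culling]");
        for (auto& child : children)
            child->Render(log);
    }
};
```

The caller never needs to know which concrete type it is visiting; the virtual dispatch picks the right Render for each node.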

Now, that's the basic concept. There are extensions to it. You can merge the hierarchical scenegraph structure with the spatial structure of static geometry, and directly perform HSR on the scenegraph nodes. The efficiency of such an approach highly depends on your engine design. It can be pretty efficient with octrees and portals, since both exhibit certain natural hierarchical relationships.

Hope that explains it a bit better. If not, just ask.

/ Yann

##### Share on other sites
Thanks for taking the time to explain that. It answers my original question and it really helps a lot. It seems to me there is a lack of information about such structures and organization, or at least I haven’t seen this type of thing discussed much in books or on the internet.

I can see how this will work. In your example, if I want to see if the player collides with the terrain, I can pass the Player's bounding box to the Terrain node. The terrain node will then search down the quadtree to the appropriate terrain patch and compare it with the Player's location it was given. OK.

I had thought this approach would have the disadvantage that your scene graph ends up being rather flat. So if you want to see if the Player collides with any other dynamic object, you need to compare with the entire list of dynamic objects in the Root node of the scene graph. You can’t rule out entire groups of dynamic objects that are far away. But on the other hand, you also don’t have to move those objects around in the scene graph just because they change position. And also, I suppose there will not be that many dynamic objects anyway.

In your example, I’d say that object hierarchy is the primary criterion for organization and location is secondary. For example, the Terrain is a single node, and under that the terrain is then organized by location. Another approach I had thought was possible—and for the sake of discussion let me propose it—is to reverse those priorities. What if you make location the primary criterion for organization? Let’s say we start out with a quadtree as the main structure, and within the nodes of the quadtree we have scene graphs.

```
ROOT
|
|---Quad 1 (a type of group node)
|
|---Quad 2
|
|---Quad 3
|
|---Quad 4
      |
      |---Quad 4.1
      |
      |---Quad 4.2
      |
      |---Quad 4.3
      |     |
      |     |---terrain patch
      |     |
      |     |---house (static object)
      |     |
      |     |---Player
      |           |
      |           |---Torch
      |                 |
      |                 |---Flame
      |                 |     |
      |                 |     |---Smoke
      |                 |
      |                 |---torch Lightsource
      |                 |
      |                 |---torch shadow projector
      |
      |---Quad 4.4
            |
            |---terrain patch
            |
            |---Big bad guy
                  |
                  |---Axe in the hand of the Big bad guy
```

Maybe this is what you mean about merging the spatial structure of static geometry with the scene graph. In this organization, you can cull entire groups of objects at once, including terrain, static and dynamic objects. During collision detection, you can reject entire sections of the scene regardless of node type. If you want to see if the Player collides with anything else, you can compare against objects in the current Quad and in the surrounding eight Quads and that’s it. A disadvantage, of course, is that when the Player moves out of Quad 4.3, you have to take him out of the tree and reinsert him somewhere else.

I just mention this approach for the sake of discussing the relative pros and cons of each. I’m not arguing that one is better than the other myself. But I wouldn’t mind seeing a discussion about it.

[edited by - JimH on September 20, 2002 6:05:55 PM]

##### Share on other sites
Your example is exactly what I meant by merging hierarchical and spatial information into one tree.

But the approach of inserting a helper node per quadrant (or octant, in the case of an octree) will result in a lot of empty helper nodes. This might or might not be a problem, depending on the size of your tree structures. You still have to visit the entire helper node hierarchy, even if there is only one little object somewhere in a deep leaf of the tree, and even if a branch is entirely empty.

An advantage is that you can exploit the regular grid structure of a quad- or octree to speed up the relocation of a moving object: just store a pointer to the scenegraph helper node in each quad/octree cell. When your object enters a new cell, you can easily retrieve the position in the scenegraph where it needs to be inserted.
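
A minimal sketch of that shortcut (all names invented, objects reduced to integer ids): each cell caches its scenegraph helper node, so moving an object to a new cell is a direct lookup rather than a tree search.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Each grid cell caches the scenegraph helper node that covers it, so a
// moving object can be re-linked without searching the tree.
struct HelperNode {
    std::vector<int> objects;  // object ids attached under this helper node
};

struct Grid {
    Grid(int w, int h) : width(w), cells(w * h) {}

    HelperNode& CellAt(int x, int y) { return cells[y * width + x]; }

    // Unlink object `id` from its old cell's helper, link it to the new one.
    void Relocate(int id, int oldX, int oldY, int newX, int newY) {
        auto& from = CellAt(oldX, oldY).objects;
        from.erase(std::remove(from.begin(), from.end(), id), from.end());
        CellAt(newX, newY).objects.push_back(id);
    }

    int width;
    std::vector<HelperNode> cells;
};
```

In a real engine the cell would store a pointer into the scenegraph rather than a flat id list, but the constant-time lookup is the point.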

One problem remains: what if the object is on the boundary of two cells? Duplicating the scene graph branch from the object's local root would be very messy. You could walk up the helper node hierarchy instead, until you find a node that holds the entire object bounding box, and insert the object there. You would lose spatial coherency, but it would still be rather effective.

But there is another (big) hidden problem with this approach: imagine having a big world with a pretty deep quadtree (let's take a quadtree for simplicity here). I'll denote each of the four sub-quadrants like this:
```
|---|---|
| 1 | 2 |
|---|---|
| 3 | 4 |
|---|---|
```

Let's say there are two objects, one in cell 1.2.2.2 and one in cell 1.2.2.3 (each dot refers to a hierarchical level; it will be easier to understand what I mean if you draw a quick sketch of this on a piece of paper). The relationship between the two objects is OK, because both cells are hierarchically and spatially close.

Now the first object moves by the distance of a single cell, from 1.2.2.2 to 2.1.1.1; that's spatially only one cell away. But hierarchically, it's an entirely different tree branch! The regular (and fixed) subdivision creates 'fault lines' along the division edges that break the spatial and hierarchical coherency. An object can be spatially close, but light-years away in the hierarchy. This problem can tremendously increase your tree traversal time.
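
The fault-line effect is easy to quantify with a small helper (purely illustrative, not from any engine): write each cell as its quadrant indices per level and count how many leading levels two cells share.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hierarchical "closeness" of two quadtree cells: the number of leading
// tree levels their paths share. Cell 1.2.2.2 is written as {1, 2, 2, 2}.
std::size_t SharedDepth(const std::vector<int>& a, const std::vector<int>& b) {
    std::size_t depth = 0;
    while (depth < a.size() && depth < b.size() && a[depth] == b[depth])
        ++depth;
    return depth;
}
```

Cells 1.2.2.2 and 1.2.2.3 share three levels, but 1.2.2.2 and 2.1.1.1, although spatially adjacent, share none: an object crossing that edge has to be re-linked all the way through the root.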

A different approach (the one we use in our game engine) is adaptive sparse binary trees. The idea is to merge an adaptive spatial hierarchy into the scenegraph, but without pre-populating it with an empty regular grid of helper nodes. Instead, the hierarchy is generated and updated on the fly as the objects move around. The advantage of this approach is that spatially near objects will always be hierarchically near as well.

quote:

It seems to me there is a lack of information about such structures and organization,

Yes, unfortunately. But optimal hierarchical clustering is a very complex topic. You can actually find academic papers about various approaches, but those are generally very theoretical and hard to understand.

Finding the optimal scenegraph/spatial clustering algorithm is the holy grail of hierarchical HSR. The few who have found it (if there is such a thing as 'optimal') aren't exactly shouting the solution around, that's a fact.

/ Yann

##### Share on other sites
Hi guys. I've been following this thread with great interest, as I have come upon these problems in my work as well. I was always able to come up with some solution, but it never seemed like the best one... there was always something not quite elegant about it.

JimH -

I share a lot of your concerns about scenegraphs. The choices one makes in this part of an engine are perhaps the most critical to good performance, yet the information about it is sparse at best. There are only a few tutorials out there, and they usually only cover one aspect of the many things you'd like a scenegraph to do.

So anyway, I'll try to make some small contribution here before asking questions

To me, the ideal scenegraph would do all of the following:

1) Aid in frustum culling
2) Aid in choosing collision detection pairs
3) Assist with hierarchical animation
4) Clump rendering states
5) Take advantage of frame-to-frame coherence in doing the above
6) Have an easy mechanism for moving things around
7) Be very fast

For me, 1-4 are mostly solved (let me know if you want details), I haven't thought about 5 much, 6 is my biggest current problem, and 7 is difficult to measure (compare to what?).

To solve problems 1-3 (keeping 7 in mind) I wouldn't use a system like the one Yann first proposed, for the reasons JimH mentioned. Organizing your scenegraph in such a flat way will leave you without an efficient way to do pairwise collision checks or scene culling.

Given this, I think a system like the one JimH diagrammed, or the one Yann described with regard to his engine, is the only way to go. Yann's method sounds a lot like the one suggested in "3D Game Engine Design" by Eberly.

Now for my questions. In a system with a fixed hierarchy of helper nodes (presumably based on some underlying static geometry, like a quadtree for terrain or a BSP for indoors), it seems like you have to keep duplicate pointers to objects within the scenegraph when they straddle boundaries. Is there any way around this? It makes moving them a pain in the ass.

If you use an adaptive tree (which is what I think Yann and Eberly suggest), the problem is figuring out how you would keep your tree from quickly becoming mangled. For instance, imagine this situation: You start off with some terrain and a bunch of people on it. You go ahead and build a bounding sphere hierarchy for this arrangement, with people at the lowest level. Now, as the people move around, you have to move their bounding spheres, which may change the shape or size of their parent bounding spheres. This will eventually cause the hierarchy to shift and cause big overlaps (tangled branches) which will cause you to lose most of your efficiency in doing culling. Further, you will miss potential collision pairs if you only look in one leaf-level node at a time since two people could be in entirely different branches of the tree yet very close to one another. How have you guys handled this issue?

Thanks for sticking with the post this long

-- Zeno

[edited by - ZenoGD on September 21, 2002 2:00:01 AM]

##### Share on other sites
ZenoGD:

Actually flat graphs can have their very own benefits over deep trees, but more about that later.

The best method I personally found is the adaptive one. I can't recommend the regular-grid quad/octree method; it just has too many drawbacks. It is the opposite extreme from a totally flat tree: you lose the hierarchical consistency in favor of full spatial localization. But this generates huge and very deep trees which are essentially empty. Also, hierarchical information (important for animation) is often broken up.

About objects straddling cell boundaries: as I mentioned above, you could simply navigate up the graph hierarchy until you find a parent node that can hold the whole bounding box without touching a boundary. E.g. if your object straddles the edge of cells 1.2.1 and 1.2.2, then you could check if cell 1.2 can hold the entire object. However, the obvious flaw in this approach is easy to spot: in the worst case, you'll have to navigate all the way up to the root and insert your object there, thus artificially flattening out your graph.
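
The "walk up until it fits" rule can be sketched like this (2D boxes for brevity; all type names are invented for illustration):

```cpp
#include <cassert>

// Axis-aligned 2D box; Contains() tests full containment.
struct Box {
    float minX, minY, maxX, maxY;
    bool Contains(const Box& b) const {
        return minX <= b.minX && minY <= b.minY &&
               maxX >= b.maxX && maxY >= b.maxY;
    }
};

struct CellNode {
    Box bounds;
    CellNode* parent = nullptr;
};

// Climb toward the root until a node's bounds hold the whole object box.
// Worst case: the object ends up at the root, flattening the graph.
CellNode* FindHost(CellNode* cell, const Box& objectBox) {
    while (cell->parent && !cell->bounds.Contains(objectBox))
        cell = cell->parent;
    return cell;
}
```

An object fully inside its cell stays where it is; a straddling object bubbles up until some ancestor (possibly the root) contains it.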

quote:

If you use an adaptive tree (which is what I think Yann and Eberly suggest), the problem is figuring out how you would keep your tree from quickly becoming mangled.

Yes, preventing tree degeneration is the key to truly making the adaptive approach work. There are several methods for that, but this is how we did it: each node has a degradation factor (d-factor) that states how much the node has shifted from its optimal shape and position. Parent nodes include all d-factors from their children. Various heuristics are possible here, e.g. sum or median value; the choice will influence the reaction of the tree to spatial delocalization.

If an object is moved, its d-factor is updated, and this change is propagated up the hierarchy. As soon as the d-factor of a node reaches a critical threshold, the whole sub-branch (starting from that node and including all of its children) is deleted from the tree and rebuilt from scratch. At this point, the rebuilt subtree is optimal again, so all d-factors are reset to 0. This approach periodically renews parts of the tree that get too degenerated, and works pretty well. It also helps to minimize empty nodes and keep the tree as flat as possible (since flat trees are more efficient in terms of traversal speed and memory usage, if you have to visit every node), while still maintaining a good degree of spatial clustering.
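
A stripped-down sketch of this d-factor bookkeeping, under stated assumptions: the threshold value, the "sum" heuristic and all names are invented, and the actual subtree rebuild is only marked by a flag.

```cpp
#include <cassert>

struct DNode {
    float dFactor = 0.0f;       // accumulated drift from the optimal layout
    DNode* parent = nullptr;
    bool needsRebuild = false;  // a real engine would re-cluster the subtree
};

const float kRebuildThreshold = 1.0f;  // assumed tuning constant

// When an object under `node` moves, propagate its drift up the hierarchy
// (using the "sum" heuristic). Any node crossing the threshold is flagged
// for a rebuild and its d-factor reset to 0.
void PropagateDrift(DNode* node, float drift) {
    for (; node; node = node->parent) {
        node->dFactor += drift;
        if (node->dFactor >= kRebuildThreshold) {
            node->needsRebuild = true;
            node->dFactor = 0.0f;
        }
    }
}
```

Small movements accumulate silently; only once a branch has drifted past the threshold is it re-clustered, which keeps rebuild work proportional to actual degeneration.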

That said, we still ran into a major problem while designing our engine. The problem is the lights, which are part of our scenegraph. Since we use full realtime shadow projection on dynamic lights, the bounding box of a light source represents its maximal influence radius (used for per-pixel shadow-mapping optimization). But since a light's influence is typically much larger than the size of an object, you'll get huge bounding boxes for light sources. E.g., in the above example, our character would have a bounding box of around 1m*1m*2m = 2m³. The torch light source, however, will have 10m*10m*10m = 1000m³, because its influence radius is 5m! This obviously screws up the whole spatial tree.

What we did is separate the tree into a flat part (with only hierarchical information, almost no spatial localization) for the light sources, and a deep part for the rest. Then we interlinked the nodes in such a way that this 'split tree' can actually behave in two distinct ways, depending on how it's accessed: for the purposes of animation and rendering, it acts like one big hierarchical tree, but without spatial information. For the purposes of culling, collision, etc., it acts like two separate trees.

In the end, you'll have to find the type of scenegraph management that suits your engine best. Small implementation-dependent details (see our light problem) can sometimes make an approach totally unsuitable for your engine, even if it is considered nearly optimal in other cases.

/ Yann
