# Implementation of Scene Graphs, Spatial Trees, Culling, Bounding Boxes and more...


## Recommended Posts

Hey there, I find the concept of scene graphs and other sorts of hierarchical trees rather confusing, so I thought I'd post here on gamedev for some guidance. I haven't really got a specific question; I'd just like someone to explain the topic to me or point me in the right direction. I've searched gamedev for information - and there sure is a lot to be found (Yann L's ABTs, etc.) - but none of it covers how to actually implement this stuff.

So, let's see where I'm currently at. By all means, correct me if I'm wrong. As far as I know, scene graphs and spatial trees aren't the same thing, right? They're both hierarchical collections of data, but the difference lies in the fact that SGs are ordered by logical connections in the game world and STs by, well, space. Does that mean SGs shouldn't be used for culling, collision checking or anything like that? If so, that would mean I'm supposed to create more than one tree for my scene. A scene graph for logical... yeah, what? If the data is already ordered in a spatial manner, what would I need a scene graph for? Or am I supposed to put my transformations in the SG (which seems highly unlikely, because transformations haven't got anything to do with game logic, and more with space)?

Another thing I don't get is the concept of a spatial partitioning tree itself. Or rather, I think I understand the use of BSPs, quad-/octrees, ABTs, and so on and so forth (speeding up culling), but not in combination with e.g. a VBO. When computing a spatial tree there is a possibility you'd have to split up polygons, right? What if a mesh I've loaded into a VBO crosses the border of a tree node? The idea of vertex buffer objects is that you load in a bunch of data and send it to your GPU in one go, for it to remain there. If I'm supposed to split up vertices, VBOs become rather useless, right? I've noticed a lot of people say spatial trees should only be used for static geometry, but Yann L, for example, mentions his ABT could be used for dynamic geometry just as well. And if not, what do current engines use anyway? I'm confused, someone please help me!

- Stijn

##### Share on other sites
Scene Graph:
Basically a tree structure where objects can have children that move in their parent's space, i.e. relative space. So you can have a root node with a child called Mesh0, and this Mesh0 can have two children called Child0 and Child1. Child0 and Child1 have positions that are relative to Mesh0, so if Mesh0 moves around, Child0 and Child1 move along with it accordingly. This is useful for many things, for example: a soldier holding a weapon. The weapon is then a child of the soldier, moving around in the soldier's model space. Or for animation, i.e. moving a robot arm at the base causes all subsequent parts to move along as well. Although it's still not light on the calculation side of things, it is handy-dandy for the programmer, as he/she only needs to transform one node instead of many.
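To make the parent-relative idea concrete, here is a minimal sketch (names like `SceneNode` are illustrative, and plain translation stands in for the 4x4 matrices a real engine would use):

```cpp
#include <vector>

// Minimal scene-graph sketch: each node stores a position relative to its
// parent; the world position is accumulated by walking up the hierarchy.
struct Vec3 { float x, y, z; };

struct SceneNode {
    Vec3 localPos{0, 0, 0};            // position relative to the parent
    SceneNode* parent = nullptr;
    std::vector<SceneNode*> children;

    void addChild(SceneNode* c) {
        c->parent = this;
        children.push_back(c);
    }

    // World position = parent's world position + our local offset,
    // so moving a parent implicitly moves every descendant.
    Vec3 worldPos() const {
        Vec3 p = parent ? parent->worldPos() : Vec3{0, 0, 0};
        return {p.x + localPos.x, p.y + localPos.y, p.z + localPos.z};
    }
};
```

Moving Mesh0 changes the world position of Child0 and Child1 without touching their local data, which is exactly the "transform one node instead of many" benefit.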

Spatial trees:
This is where 3D (or 2D for that matter) space gets divided up into sections. This is done using one of several algorithms or methods, for example quadtrees, octrees, kd-trees, bounding volume hierarchies, grids etc. See http://en.wikipedia.org/wiki/Octree as an example. If you want more examples, use the Google, Luke! Again, say you have your scene. This is one cube. Now this one cube is divided up into 8 sections (hence octree, "oct" as in eight); these sections are the children of the one cube. These children contain objects (i.e. models) and are called "leaf nodes" because that is where the hierarchical structure ends, just like with leaves on trees.
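The core operation of an octree is deciding which of the 8 child cells a point falls into. A small sketch of that decision (the function name is illustrative):

```cpp
// Octree sketch: a cubic region splits into 8 octants. Which child a point
// belongs to can be encoded in 3 bits, one per axis, by comparing the point
// against the node's center.
struct Point { float x, y, z; };

// Returns 0..7: bit 0 set if right of center, bit 1 if above, bit 2 if in front.
int octantOf(const Point& p, const Point& center) {
    int idx = 0;
    if (p.x >= center.x) idx |= 1;
    if (p.y >= center.y) idx |= 2;
    if (p.z >= center.z) idx |= 4;
    return idx;
}
```

Recursing with this index (and shrinking the cell each step) is what drives insertion and lookup in the tree.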

Culling:
This is where you take the contents of a leaf node and check whether it's within the view frustum of the user's camera (using a few simple dot products against the frustum planes); if it's not, don't render the contents of the leaf node. You don't have to use the leaf nodes of the spatial tree - you can also do this per object - but using leaf nodes lets you batch objects.
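The dot product mentioned above is a plane-distance test. A common sketch, assuming each node or object carries a bounding sphere and the frustum is stored as six inward-facing planes:

```cpp
// Frustum-culling sketch: a bounding sphere is outside the frustum if it lies
// entirely behind any one of the six planes (signed distance < -radius).
struct Plane { float nx, ny, nz, d; };   // normal points toward the inside
struct Sphere { float cx, cy, cz, r; };

bool sphereInFrustum(const Sphere& s, const Plane* planes, int count) {
    for (int i = 0; i < count; ++i) {
        // Signed distance from the sphere's center to the plane: n . c + d
        float dist = planes[i].nx * s.cx + planes[i].ny * s.cy
                   + planes[i].nz * s.cz + planes[i].d;
        if (dist < -s.r) return false;   // fully behind this plane: cull it
    }
    return true;                          // intersecting or inside: draw it
}
```

Running this once per leaf node rejects whole batches of objects with six dot products instead of testing every model individually.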

Bounding boxes:
This is where you take the min and max positions of the points of a model and create an (invisible) box around it. You use that box for collision detection (object vs. object), or to check whether said object is still within a certain leaf node.
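Computing that box is a single pass over the vertices, and the box-vs-box check is an overlap test per axis. A sketch (names are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Bounding-box sketch: scan the model's vertices once, keeping the running
// min and max on each axis. The resulting AABB is the "invisible box".
struct V3 { float x, y, z; };
struct AABB { V3 min, max; };

AABB computeAABB(const std::vector<V3>& verts) {
    AABB box{verts[0], verts[0]};        // assumes at least one vertex
    for (const V3& v : verts) {
        box.min = {std::min(box.min.x, v.x), std::min(box.min.y, v.y),
                   std::min(box.min.z, v.z)};
        box.max = {std::max(box.max.x, v.x), std::max(box.max.y, v.y),
                   std::max(box.max.z, v.z)};
    }
    return box;
}

// Box-vs-box collision: two AABBs intersect iff they overlap on every axis.
bool overlaps(const AABB& a, const AABB& b) {
    return a.min.x <= b.max.x && a.max.x >= b.min.x
        && a.min.y <= b.max.y && a.max.y >= b.min.y
        && a.min.z <= b.max.z && a.max.z >= b.min.z;
}
```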

But.. couldn't you have used Google for this?

btw, going from your questions... HKU student?

##### Share on other sites

Nonetheless: yes, I could have used google for this. And I already had. As much as I appreciate your reply, I understand the concept of all these trees. The problem is the implementation (for which google, frankly, doesn't yield that many results).

Coding a scene graph doesn't seem too hard (using C++, btw): create a node (abstract base) class with a std::vector of children and a pointer to its parent, etc. Problems arise for me when I think about combining this with a spatial tree and other trees (or about whether that IS actually the right way to go).
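The abstract-base-class layout described above could look roughly like this (a sketch; ownership via unique_ptr and the virtual update() are my assumptions, not a fixed recipe):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Abstract scene-graph node: owns its children, keeps a raw back-pointer to
// the parent. Subclasses (TransformNode, MeshNode, ...) would override update().
class Node {
public:
    virtual ~Node() = default;

    // Default behavior: recurse into the children.
    virtual void update() { for (auto& c : children_) c->update(); }

    Node* addChild(std::unique_ptr<Node> child) {
        child->parent_ = this;
        children_.push_back(std::move(child));
        return children_.back().get();
    }

    Node* parent() const { return parent_; }
    std::size_t childCount() const { return children_.size(); }

protected:
    Node* parent_ = nullptr;                       // non-owning back-pointer
    std::vector<std::unique_ptr<Node>> children_;  // owned children
};

// Minimal concrete subclass so the abstract pattern can be exercised.
class GroupNode : public Node {};
```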

As it is now, I've got a (static) Mesh class that, when calling loadFromFile(), reads in a binary mesh file, places the data in VBOs and returns the VBO indices (OpenGL). That means my data is either already there or being transferred to the GPU. How can I do processing on the polygon level - like splitting up faces - when I'm supposed to put everything in a VBO? Should I be using VBOs in the first place, or are they deprecated or something?

And also, yes, I'm an HKU student. Unexpectedly though, I study Composition for Adaptive Systems there. Music. Game design/programming is just a 'hobby' for me, as it is for a lot of us here on GameDev ;)

- Stijn

[Edited by - stenny on May 1, 2010 5:21:15 AM]

##### Share on other sites
Quote:
Original post by stenny
Coding a SceneGraph doesn't seem too hard (using c++, btw). Create a node (abstract base) class, a std::vector with children and a pointer to its parent, etc. Problems arise for me when I think about combining this with a spatial and other trees (or, whether that IS actually the right way to go).
That depends on what you mean by "combining". As mentioned in your OP, using a specialized spatial structure for e.g. collision or vicinity queries is better than using the scene graph for this purpose (IMHO). I understand a scene graph as a logical description of a scene. A representation of a resource (e.g. a mesh is a resource) needs to be added to the scene to become visible at all.

Let's say you have an entity that uses a mesh resource for its graphical rendering. The entity is added to the scene graph. The entity also provides a placement (location and orientation), in fact a transformation that relates the entity spatially to the world. It also provides a bounding volume. These two properties are sufficient to bind the entity to the spatial structure, e.g. a BSP or whatever. So an entity exposing both properties gets bound to the spatial structure (e.g. a tree node is generated that refers to the entity) when it is added to the scene.
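A rough sketch of that flow, with all names (`Entity`, `SpatialIndex`, `Scene`) being illustrative stand-ins rather than any real engine's API:

```cpp
#include <string>
#include <vector>

// An entity carries a placement and a bounding volume; adding it to the
// scene also registers it with the spatial structure.
struct Placement { float x, y, z; };
struct BoundingSphere { float radius; };

struct Entity {
    std::string name;
    Placement placement{};
    BoundingSphere bounds{};
};

// Stand-in for a BSP/octree: here it just records which entities were bound.
struct SpatialIndex {
    std::vector<const Entity*> bound;
    void bind(const Entity& e) { bound.push_back(&e); }
};

struct Scene {
    SpatialIndex& index;
    std::vector<const Entity*> entities;

    // Adding to the scene graph also binds the entity to the spatial
    // structure, since it has both a placement and a bounding volume.
    void add(const Entity& e) {
        entities.push_back(&e);
        index.bind(e);
    }
};
```

The point of the split: queries (culling, vicinity) go through `SpatialIndex`, while the scene keeps the logical description.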

Quote:
Original post by stenny
As it is now, I've got a (static) Mesh class that, when calling loadFromFile() reads in a binary meshfile, places them in VBOs and returns the VBO-indices (OpenGL). That means my data is either already there, or being transferred to the GPU. How can I do processing on polygonlevel - like splitting up faces - when I'm supposed to put it in a VBO? Should I be using VBOs in the first place, or are they deprecated or so?
Nope, they are not deprecated. But they are intended to speed up drawing, not modeling! In other words, they are optimized for a different task than the one you wish to perform. Modeling tasks usually need some topological information that is at best only indirectly available in a VBO, which makes one modeling task or another much more complicated. I personally use a different mesh structure for such purposes, one that obviously offers more topological information, and convert it into the VBO-friendly format just before rendering.
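A minimal sketch of that two-representation idea (the structure and function names are illustrative; a real editable mesh would usually carry richer adjacency data, e.g. a half-edge structure):

```cpp
#include <vector>

// Editable side: an indexed face list keeps topology available for splitting
// and other modeling operations. Drawing side: flattenForVBO() expands it
// into the contiguous float stream a VBO upload (glBufferData) expects.
struct Vtx { float x, y, z; };

struct EditableMesh {
    std::vector<Vtx> vertices;           // shared, editable vertex pool
    std::vector<unsigned> faceIndices;   // 3 indices per triangle (topology)
};

// Expand indexed triangles into a flat xyz array ready for upload.
std::vector<float> flattenForVBO(const EditableMesh& m) {
    std::vector<float> out;
    out.reserve(m.faceIndices.size() * 3);
    for (unsigned i : m.faceIndices) {
        const Vtx& v = m.vertices[i];
        out.push_back(v.x);
        out.push_back(v.y);
        out.push_back(v.z);
    }
    return out;
}
```

Splitting a face against a tree-node boundary happens on `EditableMesh`; the VBO only ever sees the finished result.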

##### Share on other sites
Things are starting to clear up. Slowly.

Separating logical and graphical data seems like a solid plan to me. A scene graph consists of nodes describing how the world is ordered in a logical fashion. Adding a node (or entity, as you like to call them) doesn't necessarily mean it has a graphical representation, only a location/transformation and a volume in 3D space. As soon as you 'attach' e.g. a mesh (or light, etc.) to the entity, you update its bounding volume. Then I register the current entity with the spatial tree, you say? I take it you insert that node at a spatial leaf that's coherent in 3D space with the world coordinates of the node in the SG? I.e. this is where you separate logical data from spatial data.

---

Ok, so immediately converting my model data to VBO indices right after loading a mesh isn't the way to go. My current mesh class consists of several submeshes. Each submesh is assigned a material and an index list. The indices, of course, refer to the vertices, normals and texcoords in the main mesh class (vertex arrays). You suggest I keep it at that and convert all data to VBOs at a later stage, right before rendering the spatial tree? This would help with sorting my rendering by material/texture state switching, something I had been wondering about too.

Does this also mean that, when I want to render my scene, I don't call SceneGraph->render(), but SpatialTree->render()?

Thanks for helping out guys; the fog is clearing up, keep it coming!
- Stijn

##### Share on other sites
As ever, there are several ways of getting things to work.

Quote:
Original post by stenny
Separating logical and graphical data seems like a solid plan to me. A Scene Graph consists of nodes describing how the world is ordered in a logical fashion. Adding a node (or entity, as you'd like to call them) doesn't necessarily mean it has a grapical representation, only a location/transformation and volume in 3d space. As soon as you 'attach' e.g. a mesh (or light, etc.) to the entity, you update its bounding volume. Then I register the current entity to the spatial tree, you say? I take it you insert that node at a spatial leaf that's coherent in 3d space with the world coordinates of the node in the SG? I.e. this is where you separate logical data from spatial data.
Things are a bit more complicated. It all depends on how you want to use scene graphs. What I've called an "entity" is actually not the same as a scene graph node. A scene graph node is a construct that allows composition of a scene graph, i.e. it refers to a parent node and to a couple of child nodes. Specializations of such a node class then define e.g. a transformation, a shape, and so on. In this sense, the thing I've called "entity" may be a scene graph node, especially one that collects model-related data.

Using a scene graph node in the sense of "it can have a graphical mesh, it can have a bounding volume, it can have a world transformation, it can ..." would mean making an "uber" node. Don't do that. Stay with the "scene graph is a logical concept" approach. Look at the "component based entity" approach which is so popular these days. It shows that scene graphs can be avoided entirely, although entities can still be created by composition.
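One common shape of the component-based approach mentioned above, sketched with illustrative names (a type-indexed bag of optional components instead of one node class that can do everything):

```cpp
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

// Instead of an "uber" node, an entity is a bag of optional components.
struct Component { virtual ~Component() = default; };
struct Transform : Component { float x = 0, y = 0, z = 0; };
struct MeshRef   : Component { int meshId = -1; };

class Entity {
public:
    // Attach a default-constructed component of type T and return it.
    template <class T> T* add() {
        auto c = std::make_unique<T>();
        T* raw = c.get();
        components_[std::type_index(typeid(T))] = std::move(c);
        return raw;
    }
    // Look up a component; nullptr means the entity simply doesn't have one.
    template <class T> T* get() const {
        auto it = components_.find(std::type_index(typeid(T)));
        return it == components_.end() ? nullptr
                                       : static_cast<T*>(it->second.get());
    }
private:
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components_;
};
```

An entity with a `Transform` but no `MeshRef` is exactly the trigger case below: placeable and boundable, yet invisible.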

Attaching a mesh to an entity does mean updating the entity's bounding volume, correct. But a mesh isn't necessary for binding the entity to the spatial structure. E.g. a trigger has both a placement and a volume, but no mesh. Nevertheless, it has to be considered in e.g. vicinity queries.

Quote:
Original post by stenny
Ok, so immediately converting my modeldata to vbo-indices right after loading a mesh isn't the way to go. My current meshclass consists of several submeshes. Each submesh is assigned a material and indiceslist. The indices, of course, refer to the vertices, normals and texcoords in the main mesh class (vertex arrays). You suggest I keep it at that and convert all data to VBOs at a later stage, right before rendering the spatial tree? ...
What the topological mesh structure should look like depends highly on what you want to do. There are plenty of structures out there.

Quote:
Original post by stenny
... This would help with sorting my rendering by material/texturestate switching, something I had been wondering about too. Does this also mean that, when I want to render my scene, I don't call SceneGraph->render(), but SpatialTree->render()?
IMHO: neither. When it comes to rendering, the graphics sub-system sends a query to the spatial structure just to find all entities that (a) lie in a given volume (usually the view frustum) and (b) can be rendered (e.g. refer to a mesh). All found entities are collected as rendering tasks; each sub-mesh usually gets its own rendering task. Then the tasks can be sorted in whatever manner the graphics sub-system needs. If needed, the graphics sub-system converts the topological representation of a mesh into a more friendly format (and caches it).
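The query-collect-sort flow described above can be sketched like this (a simplification with illustrative names; `visible` stands in for the result of the frustum query against the spatial structure):

```cpp
#include <algorithm>
#include <vector>

// One task per sub-mesh, sorted by material so state switches are minimized.
struct RenderTask {
    int materialId;   // sort key: group tasks sharing a material/texture state
    int subMeshId;
};

std::vector<RenderTask> buildSortedTasks(const std::vector<RenderTask>& visible) {
    std::vector<RenderTask> tasks = visible;
    std::sort(tasks.begin(), tasks.end(),
              [](const RenderTask& a, const RenderTask& b) {
                  return a.materialId < b.materialId;
              });
    return tasks;
}
```

Neither the scene graph nor the spatial tree "renders" anything; they only answer the visibility query, and the graphics sub-system owns the task list.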
