Scene Management

I know this is a broad topic that gets brought up a lot, but I couldn't really find an answer that explained it well enough for me.
So, I'm in the process of redoing my scene management code and I'm interested in a component-based approach (my old system was an inheritance-based scene graph). I was wondering if someone could explain the best way to go about managing a scene in terms of components. Is there still a concept of a SceneNode/Object which would then have a bunch of components like TransformComponent and RenderComponent? Where do these higher-level scene objects get stored? In a scene manager as a flat list, or would there be a number of lists that act upon the scene objects (i.e. spatial, visibility graph)?

How do you link your game objects to a scene object? In my old system I had an abstract SceneNode class which game objects could have a pointer to. In this way I could have a Player object who has a ModelNode and can update the node's transform based on the Player's position, set an animation, etc.





I'm just curious... Why are you thinking about switching to a component-based design when you already have a working inheritance-based scene graph? What issue are you experiencing with your current implementation that a component-based implementation will solve?
Well, after reading numerous posts and the Tom Forsyth article, it seems a scene graph is a bad choice these days, and my scene graph didn't fit the description of "storing render-states in the nodes of a DAG which propagate to child geometry". It also seemed most of the time everything was just a child of the root node anyway. So I figured why not get rid of all the inheritance and virtual calls and go with a component approach.

So I'm curious as to how others have removed the typical scene graph, what sort of data structures they have replaced it with, and if they are using a component-based approach, how did they achieve that?

Cheers

I have never used a scene graph, poorly defined as the concept is, and never will.
For my engines I have used a simple scene manager. There is a linear list of pointers to objects in the world stored as an std::vector<CActor *> (or such; I am actually using shared pointers but you get the idea).
When the client tells the scene manager to Tick(), it uses the linear list to update objects. Non-interactive objects, or objects that for any other reason need no update, can be stored in a separate list to expedite this process, but you may then find it awkward to traverse two separate lists when you need, for whatever reason, to visit every object in the scene.
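
In rough code, that layout is just the following (CActor's exact interface is an assumption here; only the shape matters):

    // Minimal sketch of the flat scene-manager layout described above.
    // CActor's Tick() interface is assumed for illustration.
    #include <memory>
    #include <vector>

    class CActor {
    public:
        virtual ~CActor() {}
        virtual void Tick(float deltaTime) = 0;   // per-frame update
    };

    class CSceneManager {
    public:
        void AddActor(const std::shared_ptr<CActor>& actor) {
            m_actors.push_back(actor);
        }

        // Linear pass over every object in the world.
        void Tick(float deltaTime) {
            for (auto& actor : m_actors) {
                actor->Tick(deltaTime);
            }
        }

    private:
        std::vector<std::shared_ptr<CActor>> m_actors;   // flat list of the scene
    };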

For rendering and physics, the objects are placed inside an octree.
I will only discuss rendering.

When it is time to render, the camera’s frustum is treated as a k-DOP and used to traverse the octree. Needless to say it visits only nodes that intersect the k-DOP.
For each object in each node, bounding boxes of the objects are used to perform culling. A simple AABB vs. k-DOP test.

An array of objects in view is created (stored as a member of the scene manager to avoid per-frame reallocation).
These objects are then informed they are about to be drawn. If CPU skinning is used, it would be done here. The objects can also use this call to select low-poly models if they are in the distance, etc.
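
In rough code, the traversal and culling above look something like this (none of these are real engine classes, and the k-DOP is represented here as a simple set of planes):

    // Sketch of gathering visible objects: walk the octree against the camera
    // volume (a set of planes standing in for the k-DOP) and test each
    // object's AABB. All types are placeholders.
    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Plane { Vec3 n; float d; };            // dot(n, p) + d >= 0 means "inside"
    struct Aabb  { Vec3 mn, mx; };

    // Conservative test: the box is rejected only if it lies fully behind a plane.
    inline bool Intersects(const std::vector<Plane>& planes, const Aabb& b) {
        for (const Plane& p : planes) {
            // Box corner farthest along the plane normal.
            Vec3 v {
                p.n.x >= 0.0f ? b.mx.x : b.mn.x,
                p.n.y >= 0.0f ? b.mx.y : b.mn.y,
                p.n.z >= 0.0f ? b.mx.z : b.mn.z
            };
            if (p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d < 0.0f) {
                return false;                      // entirely outside this plane
            }
        }
        return true;
    }

    struct COctreeObject { Aabb bounds; };

    struct COctreeNode {
        Aabb bounds;
        std::vector<COctreeObject*> objects;
        COctreeNode* children[8] = { nullptr };
    };

    // "visible" is a member of the scene manager, cleared (not reallocated) per frame.
    void GatherVisible(const COctreeNode* node, const std::vector<Plane>& frustum,
                       std::vector<COctreeObject*>& visible) {
        if (!node || !Intersects(frustum, node->bounds)) {
            return;                                // cull whole subtrees at once
        }
        for (COctreeObject* obj : node->objects) {
            if (Intersects(frustum, obj->bounds)) {
                visible.push_back(obj);
            }
        }
        for (const COctreeNode* child : node->children) {
            GatherVisible(child, frustum, visible);
        }
    }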

Linearly processing the objects to be drawn, I ask them each to submit data to a render queue that the render queue can use to sort for best rendering performance.
This will happen several times eventually, because I will ask them to submit data for an ambient pass, lighting pass, shadow-mapping pass, etc., but for now we are still on the ambient pass.
Additionally, each model is composed of several meshes and each mesh may have multiple “render parts”. Each part of each mesh of each model submits a render queue packet.

The render queue has two lists. One for opaque and one for translucent.
It sorts them, runs over the “render parts” to be drawn in sorted order, telling each that it is now time to render. The render parts set their own index buffers and vertex buffers and shaders. Some people have the render queue set all these things, but in order to do that you would have to submit more data to the render queue, and I prefer the flexibility of giving the parts one last chance to do something before being rendered. It has helped me in debugging greatly.
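
A stripped-down sketch of such a queue, assuming a hypothetical IRenderPart interface and a packed 64-bit sort key (how you pack the key is up to you):

    // Sketch of a render queue with separate opaque and translucent lists.
    // IRenderPart and the key layout are illustrative assumptions.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    class IRenderPart {
    public:
        virtual ~IRenderPart() {}
        virtual void Render() = 0;   // the part binds its own buffers/shaders here
    };

    struct RenderQueuePacket {
        std::uint64_t sortKey;       // e.g. shader/texture bits packed high, depth low
        IRenderPart*  part;
    };

    class CRenderQueue {
    public:
        void SubmitOpaque(const RenderQueuePacket& p)      { m_opaque.push_back(p); }
        void SubmitTranslucent(const RenderQueuePacket& p) { m_translucent.push_back(p); }

        void Render() {
            // Opaque: sorted by state to minimize changes.
            std::sort(m_opaque.begin(), m_opaque.end(), ByKeyAscending);
            // Translucent: key built so the sort yields back-to-front order.
            std::sort(m_translucent.begin(), m_translucent.end(), ByKeyAscending);

            for (const RenderQueuePacket& p : m_opaque)      { p.part->Render(); }
            for (const RenderQueuePacket& p : m_translucent) { p.part->Render(); }

            m_opaque.clear();
            m_translucent.clear();
        }

    private:
        static bool ByKeyAscending(const RenderQueuePacket& a, const RenderQueuePacket& b) {
            return a.sortKey < b.sortKey;
        }

        std::vector<RenderQueuePacket> m_opaque;
        std::vector<RenderQueuePacket> m_translucent;
    };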


Render states, textures, shaders, etc., should all have wrappers that include ID values (buffers, textures, and shaders) or local copies of the values you sent to the graphics API (render states) in order to forcibly prevent redundant state changes.
In this way it doesn’t matter that the models are setting their own shaders, and the work of the render queue can be heavily lightened and more stable. If the only way redundant states were avoided was because the render queue itself was keeping track, the render queue would be bloated and prone to failure when states were not initially exactly as the render queue expected.
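
For example (the Gfx* calls below are just stand-ins for whatever graphics API you are wrapping; only the early-out pattern matters):

    // Sketch: the wrapper keeps the last value sent to the API and simply
    // skips the call when nothing changed. The Gfx* functions are placeholders.
    #include <cstdint>

    inline void GfxBindTexture(std::uint32_t /*id*/) { /* e.g. glBindTexture(...) */ }
    inline void GfxSetDepthTest(bool /*enabled*/)    { /* e.g. enable/disable depth testing */ }

    class CGfxState {
    public:
        void BindTexture(std::uint32_t id) {
            if (id != m_boundTexture) {          // redundant bind? do nothing
                GfxBindTexture(id);
                m_boundTexture = id;
            }
        }

        void SetDepthTest(bool enabled) {
            if (enabled != m_depthTest) {        // redundant state change? do nothing
                GfxSetDepthTest(enabled);
                m_depthTest = enabled;
            }
        }

    private:
        std::uint32_t m_boundTexture = 0xFFFFFFFFu;   // "unknown" until first bind
        bool          m_depthTest    = true;
    };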


As you can see, there are many systems playing together. Scene graphs are a mess and difficult to maintain, and often try to overstep their boundaries.
Keep things grouped logically. Have one system for each job that needs to be done.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Thanks for the lengthy response! I think I'm pretty well OK with handling the low-level render item submission; it was more the higher-level management I wanted to refine. In your system, is your CActor object similar to your typical SceneNode base class, where you have child/parent relationships and derived classes?

My current thoughts are to have a top-level Scene Manager object which would contain an array of shared pointers to every object in the scene (like you have mentioned). These objects would then have a Transform component, and optionally Physics and Rendering components too. The physics component would essentially wrap up some kind of physics/collision representation of the object (if we're talking about NGD, then a NewtonBody) and would update the Transform component. The rendering component in most cases would contain some object that would submit render packets to the render queue. So I guess this scene object is really just a container of components that are used in different systems.
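
Something along these lines is what I have in mind (the component names and Transform layout here are just placeholders):

    // Rough sketch: a scene object is just a container of components, and the
    // scene manager keeps a flat list of them.
    #include <memory>
    #include <vector>

    struct Transform {
        float position[3];
        float rotation[4];   // quaternion
        float scale[3];
    };

    struct TransformComponent {
        Transform transform;
    };

    struct PhysicsComponent {
        // Would wrap the physics body (e.g. a NewtonBody if using NGD) and
        // write the simulated transform back each step.
        void Update(TransformComponent& /*transform*/) { /* pull from the body */ }
    };

    struct RenderComponent {
        // Would submit render-queue packets using the current transform.
        void Submit(const TransformComponent& /*transform*/) { /* push packets */ }
    };

    struct SceneObject {
        TransformComponent transform;                 // always present
        std::unique_ptr<PhysicsComponent> physics;    // optional
        std::unique_ptr<RenderComponent>  render;     // optional
    };

    struct SceneManager {
        std::vector<std::shared_ptr<SceneObject>> objects;   // every object in the scene
    };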

Out of curiosity how does your physics system fit in?

The other thing I'm curious about is whether there should be multiple spatial graphs. If you want your game world to handle some terrain and some outdoor/indoor buildings, you would surely need multiple spatial partitions/graphs to handle this, or would an octree actually handle this fine as long as it's all static geometry?

In my current organization, which is handled in different ways by different people, CEntity is the bottom class and it implements a parent/child system.
CActor adds transforms, both local and world, to this (and, as of 30 minutes ago, a unique actor ID). Some people merge these into one class, some people put both on the CEntity, etc. It doesn't matter how you decide to do this, but that is the basic idea.

The orientation class keeps a set of dirty flags so it always knows what has been changed on it.
The actor’s local transform is built from the data in its orientation class and also keeps a dirty flag.
There are two functions:
COrientation & CActor::Orientation()
const COrientation & CActor::Orientation() const


The non-const one sets the dirty flag for the local transform.

My scene manager calls Propagate() once per frame. Propagate causes the actor’s local dirty flag to dirty its children’s world dirty flags recursively.
After that, simply requesting the local or world transform from an actor will cause its matrix to be rebuilt if it is dirty. It works out so that these matrices are built only once per frame and only if needed.
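
A condensed sketch of that flag flow (matrix math omitted; the class shapes are an approximation, not my actual code):

    // The non-const Orientation() marks the local transform dirty, Propagate()
    // dirties world transforms down the tree once per frame, and matrices
    // rebuild lazily on request.
    #include <vector>

    struct Matrix4 { float m[16]; };

    class COrientation {
        // Position/rotation/scale setters would live here with their own flags.
    };

    class CActor {
    public:
        // Non-const access implies the caller may modify the orientation.
        COrientation& Orientation()             { m_localDirty = true; return m_orientation; }
        const COrientation& Orientation() const { return m_orientation; }

        // Called once per frame by the scene manager.
        void Propagate() {
            if (m_localDirty) { DirtyWorldRecursive(); }
            for (CActor* child : m_children) { child->Propagate(); }
        }

        const Matrix4& LocalTrans() {
            if (m_localDirty) {
                // ... rebuild m_local from m_orientation here ...
                m_localDirty = false;
            }
            return m_local;
        }

        const Matrix4& WorldTrans() {
            if (m_worldDirty) {
                // ... rebuild m_world from the parent's WorldTrans() and LocalTrans() ...
                m_worldDirty = false;
            }
            return m_world;
        }

    private:
        void DirtyWorldRecursive() {
            m_worldDirty = true;
            for (CActor* child : m_children) { child->DirtyWorldRecursive(); }
        }

        COrientation m_orientation;
        Matrix4      m_local {};
        Matrix4      m_world {};
        bool         m_localDirty = true;
        bool         m_worldDirty = true;
        std::vector<CActor*> m_children;
    };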

Everything else is built on top of actors. Model instances, lights, cameras, etc.

What you call a scene node, I would call an octree node. Anything that requires a physical presence in the world also derives from COctreeNode. Cameras and directional lights do not, but model instances and particle sets (mind you not individual particles) do. So do chunks of terrain if you are using chunked terrain or GeoMipMapping. I use GeoClipMapping so I have no terrain chunks.



Your second paragraph is basically how it goes.



I haven’t added physics to my current engine, but in the past I used a BVH for the static world. I was handling collisions on contact so I only needed to check objects in the same octree nodes against each other.
It was fast enough to be used in a 3D iPhone game at a steady 50 FPS, and that was a low-end iPhone from 2 years ago.

The downside of on-collision checking is that high-speed objects can pass through each other. If you want to prevent this, swept volumes work well, but you can’t just check objects in the same octree nodes anymore.
So one way around this is to use a second bounding box which is also swept (that is, stretch the bounding box to encompass both where it is now and where it will be at the end of the frame) and add that to the octree. Then you can once again test only objects in the same node.
Another way is to use a second octree at a lower resolution. This still allows some objects to pass through each other, but less frequently. In exchange, it is both faster and easier than sweeping bounding boxes (though “faster” may become “slower” if there are too many objects in the same node).
Octrees take a lot of memory, so instead of making a whole new one, you can just put a limit on how deep the collision bounding boxes can be inside your main one.
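
Building the swept box mentioned above is just the union of the box at the start and end of the frame, roughly (Vec3/Aabb here are placeholder types):

    // Sketch: stretch the box to cover its motion over the frame so fast
    // movers can still be tested only against objects in the same nodes.
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    struct Aabb { Vec3 mn, mx; };

    inline Aabb SweepAabb(const Aabb& box, const Vec3& velocity, float dt) {
        Aabb end {
            { box.mn.x + velocity.x * dt, box.mn.y + velocity.y * dt, box.mn.z + velocity.z * dt },
            { box.mx.x + velocity.x * dt, box.mx.y + velocity.y * dt, box.mx.z + velocity.z * dt }
        };
        Aabb swept;
        swept.mn = { std::min(box.mn.x, end.mn.x), std::min(box.mn.y, end.mn.y), std::min(box.mn.z, end.mn.z) };
        swept.mx = { std::max(box.mx.x, end.mx.x), std::max(box.mx.y, end.mx.y), std::max(box.mx.z, end.mx.z) };
        return swept;
    }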

Remember to use bit hacks to make updating your octree instantaneous rather than through a search.
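
One such trick (just one interpretation; your exact bit hack may differ): quantize the object's AABB corners to leaf-cell grid coordinates, XOR them, and the highest differing bit directly gives the level the object fits on, so it can be inserted without walking down from the root.

    // Assumes a power-of-two grid of 2^maxDepth leaf cells per axis.
    #include <cstdint>

    inline std::uint32_t HighestSetBit(std::uint32_t v) {
        std::uint32_t bit = 0;
        while (v >>= 1) { ++bit; }   // or a count-leading-zeros intrinsic
        return bit;
    }

    // min/max are the object's corners quantized to [0, 2^maxDepth) leaf coordinates.
    inline std::uint32_t OctreeLevelFor(std::uint32_t minX, std::uint32_t maxX,
                                        std::uint32_t minY, std::uint32_t maxY,
                                        std::uint32_t minZ, std::uint32_t maxZ,
                                        std::uint32_t maxDepth) {
        std::uint32_t diff = (minX ^ maxX) | (minY ^ maxY) | (minZ ^ maxZ);
        if (diff == 0) { return maxDepth; }               // fits in a single leaf cell
        return maxDepth - (HighestSetBit(diff) + 1);      // go up until both corners share a cell
    }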



You can and should mix types of spatial partitions. The entire building goes into the main octree. The insides of the building could be a BSP, BVH, or even another octree. You can use portals etc.
It would be an error to add all of the parts of the building to the main octree.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

I wrote a series in 2007 on that topic.
http://www.beyond3d.com/content/articles/98/
http://www.beyond3d.com/content/articles/102/
http://www.beyond3d.com/content/articles/109/

I've been meaning to write another one as some of the things in there aren't really good ^^
A good thing is the strict division into Scene Tree = hierarchy (usually ends up very flat, just weapons in hands), Spatial DAG/Tree (octree, quadtree...), and Rendering Queue(s) (of Renderables).
They are different acceleration structures each aimed at a different goal. The Scene Tree is meant for hierarchical animation/transformation, the Spatial DAG/Tree is meant for quick "intersection" in 3D [finding what's in the view frustum], and the Rendering Queue is meant to sort your data for optimal submission to the GPU.

I also concentrate only on 3D rendering, because physics requires another representation of the scene, so it doesn't belong there.
The physics engine would update the 3D engine's representation of the scene (either push or pull when multithreading; pull is better).

You'll probably ask yourself a couple of questions about what should or shouldn't be a SceneNode. Joints are funny buddies, for example; depending on how you handle animation they may or may not be part of the scene hierarchy.


Another critical thing to remember is that you want to inherit interfaces (i.e. polymorphism) and not implementation; C++ pushes for the latter, which isn't really good.
(So your SceneNode class will likely have something like getLocalTransformation, getWorldTransformation, and setLocalTransformation, but you might prefer them virtual depending on your design.)
-* So many things to do, so little time to spend. *-
@YogurtEmperor

It kind of sounds like your CActor and CEntity objects are part of the game code rather than the engine? If not, then maybe I'm a bit confused by your object naming...

In my mind an Entity is the lowest-level game object and can contain a SceneNode. The SceneNode is the engine's representation of basic world objects such as cameras, models, and lights. This object could be placed within a spatial structure such as an octree for visibility culling.

This does mean, however, that the game objects would require some extra management to update their respective scene node transforms if the game object moves, or if you want to attach a camera to follow a particular game object.

With physics I'll be using an external library such as NGD so it will have its own internal tree and I'll just have some kind of higher level physics component that wraps NGD's objects.

@Ingenu

I had read those articles some time ago but I'll have a look again for some ideas thanks.

I don't really plan on having animations handled via the scene hierarchy; they'll be managed via animation controllers, which each model will contain along with its meshes and animations. I also sort of wanted to get away from huge inheritance hierarchies and virtual calls with recursive updates like you have with your usual scene graphs, but I've yet to work out a good way of doing that. Maybe transforms could be factored out into their own component, and that could be used to build a transform hierarchy instead.

I dunno, hopefully someone can shed some light on the component-based approach from my original post?

But anyway, thanks so far, it's really helping me think through a lot of things :)
Hello Ingenu. I am BadMrsFrosty.


A note on the last paragraph and C++.
Getting the world and local transforms is something that could be done very often, so it is one place where you want to avoid having virtual calls.
So in my design, LocalTrans() and WorldTrans() are inlined and not virtual, however they check their dirty flags and update their respective matrices if (and only if) necessary before returning them.
When updating, I have 2 virtual calls, one before the update and one after.
LocalTransWillChange() and LocalTransDidChange().
The upper classes can then override these instead of overriding LocalTrans().

So constantly moving objects will have 2 virtual calls per frame of overhead (which is still better than once every call to LocalTrans(), which could be 20 times per frame) and static objects such as rocks have 0 virtual calls per frame.
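
In sketch form (matrix rebuilding omitted; this is only an approximation of the pattern, not my actual code):

    // Hot accessors are inlined and non-virtual; only the rare
    // "about to change / just changed" hooks are virtual.
    struct Matrix4 { float m[16]; };

    class CActor {
    public:
        virtual ~CActor() {}

        // Hot path: no virtual call unless a rebuild is actually needed.
        const Matrix4& LocalTrans() {
            if (m_localDirty) {
                LocalTransWillChange();               // virtual, at most once per frame
                // ... rebuild m_local from the orientation here ...
                m_localDirty = false;
                LocalTransDidChange();                // virtual, at most once per frame
            }
            return m_local;
        }

    protected:
        // Derived classes override these instead of LocalTrans() itself.
        virtual void LocalTransWillChange() {}
        virtual void LocalTransDidChange() {}

    private:
        Matrix4 m_local {};
        bool    m_localDirty = true;
    };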


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

@Shael

I completely agree with you about deep/big class hierarchies; the fact is, I don't program my engine in C++ anymore because IMO it pushes me to write "bad" code.
Note, however, that in those articles (I've learnt a lot since then, so I'd need to write a follow-up/update) I'm talking about specialized structures for a given goal.
Although I said hierarchical animation, I do not mean skeletal animation (now, anyway; back when I wrote it I did mean that ^^).

Basically I have a SceneNode class with ID, Parent*, Children*[], getID(), attach( ScN* node ), detach( ScN* node ) and (atm) virtuals : setLocalTransformation(), getLocalTransformation(), and getWorldTransformation(), along with update( const Transformation& t ).
I have only a few classes inheriting SceneNode : Light, Camera, Object, ParticleSystem, and Joint.
The reason why the 3 transformations are virtual is because of Joint, I'm still thinking about alternatives.
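
For reference, a rough C++ approximation of that interface (my engine is no longer written in C++, so treat this purely as a sketch):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Transformation { /* position, rotation, scale */ };

    class SceneNode {
    public:
        explicit SceneNode(std::uint32_t id) : m_id(id) {}
        virtual ~SceneNode() {}

        std::uint32_t getID() const { return m_id; }

        void attach(SceneNode* node) {
            node->m_parent = this;
            m_children.push_back(node);
        }

        void detach(SceneNode* node) {
            m_children.erase(std::remove(m_children.begin(), m_children.end(), node),
                             m_children.end());
            node->m_parent = nullptr;
        }

        // Virtual (for now) because Joint sources its transform from the animation system.
        virtual void setLocalTransformation(const Transformation& t) = 0;
        virtual const Transformation& getLocalTransformation() const = 0;
        virtual const Transformation& getWorldTransformation() const = 0;

        virtual void update(const Transformation& parentWorld) = 0;

    private:
        std::uint32_t           m_id;
        SceneNode*              m_parent = nullptr;
        std::vector<SceneNode*> m_children;
    };

    // Concrete subclasses: Light, Camera, Object, ParticleSystem, Joint.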



@YogurtEmperor

Calling a virtual function isn't that expensive since we have caches in the CPU; obviously a normal call is less expensive, which is better, unless you actually need a virtual function ^^

It's difficult to have a general discussion, given that implementation details and system design will have an impact on performance.
(For example, calling a virtual on a list of objects sorted by type is not that expensive.)

I definitely need to write a new series about the latest developments in FlExtEngine.
-* So many things to do, so little time to spend. *-

