Scene Graph / Entities

Recommended Posts

beebs1    398
Hiya, I'm trying to design a scenegraph/entity system - I think I'm almost there, but I've come across a problem and was wondering if anyone could make any suggestions. I realise there are a lot of posts on this topic already, and I've read through as many as I can find [smile]. What I have so far is a scenegraph, which is a transform hierarchy and BVH. I want to use it to position dynamic objects in the world and to help with culling them. What I'm unclear on is whether it's better for the scenegraph to be composed of game entities (i.e. a graph node is an entity), or whether there should be a separate system to deal with entities, which may reference nodes on the graph. In the latter case the scene nodes would contain lower-level objects like meshes, lights, sounds, etc. Has anyone tried either approach before? What do you think? Thanks in advance for any help. [smile]

Red Ghost    368
Hi,

In my own implementation, entities are used to store data and have a pointer to a controller and to a scenegraph node holding their own display information (i.e. meshes, sprites, ...).

When a dynamic entity is moved, it updates its corresponding scenegraph node's position. In turn, the scenegraph node tells its parent node to update itself within the hierarchy.
Then, during the pre-render phase, the scenegraph node is correctly culled without any knowledge of the owning entity.

To me, the scenegraph is only composed of lower level objects.
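A minimal sketch of that flow in C++ (the names, like SetPosition and boundsDirty, are my assumptions, not Red Ghost's actual code):

```cpp
// Sketch: the entity owns game data and points at a scene node that
// holds only spatial/display state. Moving the entity updates the node,
// which flags its ancestors so their bounding volumes get recomputed.
struct SceneNode {
    float x = 0, y = 0, z = 0;
    SceneNode* parent = nullptr;
    bool boundsDirty = false;

    void SetPosition(float nx, float ny, float nz) {
        x = nx; y = ny; z = nz;
        // Tell every ancestor its bounding volume needs recomputing.
        for (SceneNode* n = parent; n; n = n->parent)
            n->boundsDirty = true;
    }
};

struct Entity {
    SceneNode* node = nullptr;  // display info lives in the graph, not here
    // ... game data (health, AI state, ...) would live here

    void MoveTo(float x, float y, float z) {
        if (node) node->SetPosition(x, y, z);
    }
};
```

The graph never sees the entity; the culling pass only ever touches SceneNode.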

Hope that helps.

Ghostly yours,
Red.

Sphet    631
I agree with Red Ghost.

In our implementation, which is used in our level editor, the scene graph is just a transform hierarchy with bounding volume data (AABB in our case). Each node contains a pointer to an object that implements the minimum required interface for objects at that node: Render, GetBoundingInformation, GetProperties, etc.

This lets us separate the implementation of the scenegraph from the details of each node's contents as well as provide a flexible interface for new types to be added without having to change the internals of the scene graph.
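A rough sketch of such a node (the interface and member names are assumptions based on the description above, not Sphet's actual code):

```cpp
// Hypothetical minimum interface for objects stored at a node.
struct AABB { float min[3], max[3]; };

class ISceneObject {
public:
    virtual ~ISceneObject() = default;
    virtual void Render() = 0;
    virtual AABB GetBoundingInformation() const = 0;
};

// The node owns transform and bounds; its contents stay opaque behind
// the interface pointer, so new object types need no scene-graph changes.
struct SceneNode {
    float transform[16];
    AABB bounds;
    ISceneObject* object = nullptr;
};

// An example concrete type (hypothetical):
class MeshObject : public ISceneObject {
public:
    void Render() override { /* submit the mesh for drawing */ }
    AABB GetBoundingInformation() const override {
        return AABB{{0.f, 0.f, 0.f}, {1.f, 1.f, 1.f}};
    }
};
```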

IMHO, the scenegraph and the entities it contains are not related beyond their spatial position.

Good luck with your project,
S.

beebs1    398
Thanks very much for your posts.

Quote:
Original post by Sphet
IMHO, the scenegraph and the entities it contains are not related beyond their spatial position.


This is interesting. Which system is authoritative for positioning entities? Do the entities themselves tell the scene graph where to position/attach objects?

Also, do you use the scene graph for things like 3D sounds? If so, how do you handle them needing to have a Render() method to be part of the graph?

Thanks again - very interesting.

Telastyn    3777
Depends on the game. For some games it makes sense to make the render bits authoritative (usually when the playfield is the size of the screen, à la Pong, Pac-Man, etc.). Usually, though, the game objects are authoritative, and their positions are translated into the scene graph based on resolution, camera position, etc.

Sounds (and things like events/triggers) just have a no-op render, or do their updates there.
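For example, a sound or trigger type might satisfy a render interface like this (hypothetical names; the trigger runs its per-frame work where a visual object would draw):

```cpp
// Hypothetical minimal interface, as discussed above.
class ISceneObject {
public:
    virtual ~ISceneObject() = default;
    virtual void Render() = 0;
};

// A sound source lives in the graph but has nothing visual to draw.
class SoundEmitter : public ISceneObject {
public:
    void Render() override { /* no-op */ }
};

// A trigger uses its 'render' slot to do per-frame work instead.
class TriggerVolume : public ISceneObject {
public:
    bool checkedThisFrame = false;
    void Render() override {
        checkedThisFrame = true;  // overlap tests would run here
    }
};
```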

swiftcoder    18432
Quote:
Original post by beebs1
Also, do you use the scene graph for things like 3D sounds? If so, how do you handle them needing to have a Render() method to be part of the graph?
This is a thorny problem, and depends heavily on the type of game, and how it is structured.

Generally speaking, Graphics, Physics, Sound and AI all have some concept of nodes and world geometry. Their individual views typically have substantial overlap, but they each operate on slightly different aspects.

Sometimes you can get away with using a single data structure for all 4 cases (in which case that structure is your entity), but most of the time you need a separate data structure for each, so you end up separating them out. They still must share common attributes (position, orientation, maybe a few more), but the rest of their data is distinct (model vs collision hull, camera vs listener, etc.)

beebs1    398
Quote:
Original post by swiftcoder
Generally speaking, Graphics, Physics, Sound and AI all have some concept of nodes and world geometry. Their individual views typically have substantial overlap, but they each operate on slightly different aspects.


This is exactly my problem [smile]. Drawing a sound or trigger volume doesn't make sense, so bodging it to just return when it's rendered hints to me that my design is broken...

Could you please expand a little on how this could be separated? Perhaps a scene node could just contain a position and BV, and the rendering could be done from elsewhere - but then I'd lose the culling, which is what the BVs were there for anyway.

Thanks for your comments.

swiftcoder    18432
Quote:
Original post by beebs1
This is exactly my problem [smile]. Drawing a sound or trigger volume doesn't make sense, so bodging it to just return when it's rendered hints to me that my design is broken...
I don't think that is a good approach, but it has been used in plenty of games before, so if it works for you...

Quote:
Could you please expand a little on how this could be separated? Perhaps a scene node could just contain a position and BV, and the rendering could be done from elsewhere - but then I'd lose the culling, which is what the BVs were there for anyway.
My setup looks something like this (pseudocode):
class SceneNode:
    position
    bounding_volume
    Renderable

class PhysicsNode:
    position
    velocity
    convex_hull

class SoundNode:
    position
    velocity
    Playable

class Entity:
    SceneNode
    PhysicsNode
    SoundNode

    position
    velocity

Each node-type is handled by its own subsystem, which tracks all relevant nodes, handles culling, updating, etc. There is also world data which is accessible to each subsystem, because they all need at least some information about the world.

If you are looking into this type of approach, it tends to be called a 'component entity system', in particular an 'outboard component entity system' (which is a little different from what I have shown here). Either one makes for a good Google search.
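A sketch of what one such subsystem might look like in C++ (the names and the simple position integration are my assumptions; a real system would also manage node ownership properly):

```cpp
#include <cstddef>
#include <vector>

struct SoundNode {
    float position[3] = {0, 0, 0};
    float velocity[3] = {0, 0, 0};
};

// Hypothetical subsystem: it tracks every node of its type and updates
// them together, with no knowledge of the entities that own them.
class SoundSystem {
public:
    SoundNode* Create() {
        nodes.push_back(new SoundNode);  // sketch only: leaks on shutdown
        return nodes.back();
    }

    // Integrate each node's position; a real system would also cull
    // against the listener and feed audible sources to the mixer.
    void Update(float dt) {
        for (SoundNode* n : nodes)
            for (int i = 0; i < 3; ++i)
                n->position[i] += n->velocity[i] * dt;
    }

    std::size_t Count() const { return nodes.size(); }

private:
    std::vector<SoundNode*> nodes;
};
```

The graphics and physics subsystems would follow the same shape, each over its own node type.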

Sphet    631
Quote:
Original post by beebs1
Quote:
Original post by swiftcoder
Generally speaking, Graphics, Physics, Sound and AI all have some concept of nodes and world geometry. Their individual views typically have substantial overlap, but they each operate on slightly different aspects.


This is exactly my problem [smile]. Drawing a sound or trigger volume doesn't make sense, so bodging it to just return when it's rendered hints to me that my design is broken...

Could you please expand a little on how this could be separated? Perhaps a scene node could just contain a position and BV, and the rendering could be done from elsewhere - but then I'd lose the culling, which is what the BVs were there for anyway.

Thanks for your comments.



The scene graph we're using is in a level editor, so luckily for me everything has to have a Render function that does something!

But in our game system, the scene graph is only about spatial orientation - the graph only dictates transforms and the child-parent relationship - there is no virtual render function. Instead, much like swiftcoder, each of the objects is created through composition, and in each component's constructor the object is registered with the correct sub-system: audio emitters are registered with the audio system, graphics with the graphics system. The node is only used to manage the hierarchy.
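That constructor-registration pattern might look something like this (hypothetical class names, not Sphet's actual code):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical subsystem that simply tracks its registered components.
class AudioSystem {
public:
    std::vector<class AudioEmitter*> emitters;
};

// The component registers itself with its subsystem on construction
// and unregisters on destruction; the scene graph never sees it.
class AudioEmitter {
public:
    explicit AudioEmitter(AudioSystem& sys) : system(sys) {
        system.emitters.push_back(this);
    }
    ~AudioEmitter() {
        auto& v = system.emitters;
        v.erase(std::remove(v.begin(), v.end(), this), v.end());
    }

private:
    AudioSystem& system;
};
```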

beebs1    398
Thanks - that sounds like a sensible way to do it. Then I could do something like this:


class SceneNode
{
    Vector3 position;
    // Also children, etc...
};

class BaseComponent { };

class RenderComponent : public BaseComponent
{
    // Just use the scene node for position.
    SceneNode* node;
};

class Entity
{
    std::list<BaseComponent*> components;
};


I like the idea of also using the scene graph as a BVH to speed up culling of dynamic objects, but I can't make it work well. Ideally I'd like to store two 'views' of the same graph - one for scene nodes (transform & BV) and one for renderable nodes (which have a render() method). Operations which don't care about rendering could use the first view, and then to draw I'd iterate over the 'render-graph'. Something like this:


class SoundComponent
{
    SceneNode* node;
};

class RenderComponent
{
    RenderNode* renderable;
};

void GameTick()
{
    // Update the scene-node view without caring about whether the
    // node is renderable (SceneNode doesn't have render()).
    SceneNode* root = sceneGraph->GetRoot();
    UpdateTransformsInOrder( root );  // propagate transforms top-to-bottom
    UpdateBVHPostOrder( root );       // propagate BVs bottom-to-top

    // Draw the render view, using the BVs for culling - the
    // nodes in this graph are all renderables.
    RenderNode* renderNode = renderGraph->GetRoot();
    renderNode->Render( camera );
}


Did that make sense? [smile] I'm not sure how the two 'views' would work in practice, though - how they would share the transform/BV representation. Maybe the flyweight pattern could be used here.
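One possible sketch of that sharing - both views hold a pointer to a single shared transform record, flyweight-style, so an update in the scene pass is immediately visible to the render pass (all names here are assumptions, not a definitive design):

```cpp
// The shared record: written by the scene/transform pass,
// read by the render pass. Only one copy exists per object.
struct Transform {
    float x = 0, y = 0, z = 0;
};

// Two 'views' of the same object, each pointing at the shared state.
struct SceneNode  { Transform* xform; };
struct RenderNode { Transform* xform; /* plus a Render() method */ };
```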
