Entity/SceneNode Distinction in a Game Engine

4 comments, last by Nanoha 12 years, 11 months ago
Greetings,

I have been developing a game engine based on the architecture presented in the book Game Coding Complete. For the first time, I'm really starting to grasp the concept of having an event-based system. Part of this architecture involves the concept of an entity (or actor, as the book calls it). Every game engine defines this concept somehow, but where I'm having trouble is understanding where to make the distinction between a logical entity and a renderable object.

I know that it's important to keep render code out of the game logic, and it makes sense to me that an entity is the logical existence of a game object, while the scene node is its visual form. There's also the physics component, which represents the physical nature of the object. In the examples I've seen, an entity has a one-to-one mapping to its scene node, so an object can have a MeshNode that renders it, or something like that. My question is: how do you account for other visual components in a game that don't fit nicely into a single node type?

For example, say I want to draw a spaceship. That's great: I can make a ShipEntity that owns a MeshNode with some specific parameters. But what if I want the ship to take damage and have sparks fly out of parts of it, or show scrape decals on the side, or handle weapon muzzle fire and engine boost animations? Are all of those things entities too? Do I need a MuzzleFire entity that performs its billboard animation and then kills itself? Or should the ShipEntity have multiple DecalNodes/SpriteAnimationNodes/MeshNodes that are dynamically created and destroyed based on game logic? I just don't see how the one-renderable-per-entity model is practical. Do I have to manage an entity + scene node combo for every decal/turret/fire sprite? How is this typically done?

I appreciate your insight on this issue.
I personally tend to think of an entity as something that 'drives' a scene node. So you may have a human model as a scene node, but without an entity it wouldn't be able to think or move. I would recommend that you simply store your graph of scene nodes, and let each scene node store an Entity as an aggregate. The same goes for physics.

class SceneNode
{
    Entity* m_attachedEntity = nullptr;  // optional logic that drives this node; can be null
    RigidBody* m_rigidBody = nullptr;    // optional physics body; can be null
};

I don't see much gain from making things more complicated than that.
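As a small usage sketch of that aggregation: each frame the logic runs first, then the node follows whatever the logic/physics produced. The accessor and method names below are hypothetical, not part of the class above.

#include <vector>

// Sketch: the entity 'drives' the node; the node only reads the result when rendering.
void UpdateScene(std::vector<SceneNode*>& nodes, float dt)
{
    for (SceneNode* node : nodes)
    {
        if (Entity* e = node->GetEntity())            // hypothetical accessor
            e->Update(dt);                            // logic: think, move, decide
        if (RigidBody* body = node->GetRigidBody())   // hypothetical accessor
            node->SetTransform(body->GetTransform()); // follow the physics result
        node->Render();                               // purely visual work
    }
}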

For the question about the gun mesh, maybe you could ask yourself some questions like:
Does my spaceship always have this gun there?
Is this gun used on other spaceships?

If I'm not wrong, when you render a scene, the visual entities need to ask their logical equivalents for the information they need in order to draw correctly. If the spaceship has no damage, then the damage animation won't show.
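As a minimal sketch of that idea, here is a hypothetical ShipEntity exposing a damage query and a ShipNode reading it each frame (the names are illustrative, not from any particular engine):

// The visual node pulls state from its logical entity before drawing.
class ShipEntity
{
public:
    float GetDamage() const { return m_damage; }  // 0 = pristine, 1 = destroyed
private:
    float m_damage = 0.0f;
};

class ShipNode
{
public:
    explicit ShipNode(const ShipEntity* entity) : m_entity(entity) {}

    void Render()
    {
        DrawHullMesh();
        // Only show the damage effects the game logic says exist.
        if (m_entity && m_entity->GetDamage() > 0.25f)
            DrawSparkEffect();
    }

private:
    const ShipEntity* m_entity;
    void DrawHullMesh() { /* submit the hull mesh to the renderer */ }
    void DrawSparkEffect() { /* submit a spark billboard/particle batch */ }
};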
If you wanted to, you could make each entity own multiple renderable scene nodes. However, for things that don't really depend on the parent entity after being created, like a spark effect flying off, creating a new entity that manages its own lifetime sounds more convenient, and it also results in simpler, less error-prone logic.

Or, you could do what Unity does: a scene node and an entity are really the same thing, and the various pieces of functionality are composed into the node via components (logic scripts / sound effects / particle systems / meshes, etc.).

Personally I now like the latter approach more, because it makes it explicit that whenever you want something with its own 3D transform, you've got to create a new entity. The entity hierarchy and the scene node hierarchy are then one and the same.
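A rough sketch of that "the node is the entity" style, assuming a simple Component base class (this only illustrates the idea; it is not Unity's actual API):

#include <memory>
#include <utility>
#include <vector>

class Component
{
public:
    virtual ~Component() = default;
    virtual void Update(float dt) = 0;
};

// The scene node doubles as the entity: a transform plus an open-ended list of components.
class SceneNode
{
public:
    void AddComponent(std::unique_ptr<Component> c) { m_components.push_back(std::move(c)); }

    void Update(float dt)
    {
        for (auto& c : m_components)
            c->Update(dt);            // mesh, particle system, script, sound, ...
        for (auto& child : m_children)
            child->Update(dt);        // the node hierarchy is the entity hierarchy
    }

private:
    // Transform, parent pointer, etc. omitted for brevity.
    std::vector<std::unique_ptr<Component>> m_components;
    std::vector<std::unique_ptr<SceneNode>> m_children;
};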
Also, I wouldn't have things like particle systems and sound effects as entities. I would make these stand-alone systems, with a singleton managing the interface that entities use. The systems should take care of all memory handling so that clients can effectively use fire-and-forget initiation, just passing in a locator or model position for the particle system to follow, or 3D hints for the sound effect manager.
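A minimal sketch of such a fire-and-forget interface, assuming a hypothetical ParticleManager singleton and a simple Vector3 type (all names here are made up for illustration):

#include <algorithm>
#include <string>
#include <vector>

struct Vector3 { float x, y, z; };

// Stand-alone system: game code calls Spawn() and forgets about the effect.
class ParticleManager
{
public:
    static ParticleManager& Get()
    {
        static ParticleManager instance;
        return instance;
    }

    // Fire-and-forget: the manager owns the effect and retires it when it finishes.
    void Spawn(const std::string& effectName, const Vector3& position)
    {
        m_activeEffects.push_back({ effectName, position, /*timeLeft*/ 2.0f });
    }

    void Update(float dt)
    {
        for (auto& e : m_activeEffects)
            e.timeLeft -= dt;
        // Expired effects are removed here; callers never clean anything up.
        m_activeEffects.erase(
            std::remove_if(m_activeEffects.begin(), m_activeEffects.end(),
                           [](const Effect& e) { return e.timeLeft <= 0.0f; }),
            m_activeEffects.end());
    }

private:
    struct Effect { std::string name; Vector3 position; float timeLeft; };
    std::vector<Effect> m_activeEffects;
};

// Usage from game logic, e.g. when the ship takes a hit:
//   ParticleManager::Get().Spawn("Sparks", hitPosition);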
I based my game on what's said in Game Coding Complete (with a logic/view split). I've not needed to implement any scene graph myself (I let PhysX take care of the physical side and Ogre3D take care of the graphical side), but I am still using scene nodes/entities etc. through Ogre.

If I want to make a missile, it has some behaviour/control, it has sound, it has physics, a visual mesh, and also a particle effect. I use a component-based system, so the physics is just its own component, and the control component just controls it (makes it fly, tells it when to explode, etc.). Then finally there's a graphical component, and all that graphical component does is update the Ogre scene node's position/orientation based on the physical component. With respect to what you originally posted, I would assemble my visual/audio parts like so (via messages):

Create new SceneNode
Create new Entity (entities are just meshes in Ogre)
Create new particle effect
Create new sound source
Play looping sound
Attach entity to scene node
Attach particle effect to scene node
Attach sound source to scene node (I added some stuff to allow me to attach sound sources to scene nodes)

Now my graphical component just worries about moving that single scene node. The setup can be a bit much (since it's done indirectly through messages), but it works quite well. Scene nodes can have child nodes, which can have further child nodes.
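On the Ogre side, the assembly from that list might look roughly like the sketch below. It uses standard Ogre 1.x scene manager calls; the mesh name, particle template, the sound wrapper (SoundSource, soundManager, attachSoundToNode), and physicsComponent are made-up stand-ins for the custom pieces described above.

// Sketch (Ogre 1.x): assemble the missile's visual/audio parts on one scene node.
Ogre::SceneNode* node =
    sceneMgr->getRootSceneNode()->createChildSceneNode("MissileNode");

// "Entity" here is Ogre's renderable mesh instance, not the game-logic entity.
Ogre::Entity* mesh = sceneMgr->createEntity("MissileMesh", "missile.mesh");
node->attachObject(mesh);

Ogre::ParticleSystem* trail =
    sceneMgr->createParticleSystem("MissileTrail", "Effects/EngineTrail");
node->attachObject(trail);

// Hypothetical custom wrapper: a looping sound source that follows the node.
SoundSource* engineLoop = soundManager->createSource("missile_engine.ogg");
engineLoop->setLooping(true);
engineLoop->play();
attachSoundToNode(engineLoop, node);

// Each frame, the graphical component only has to sync this one node with physics:
node->setPosition(physicsComponent->getPosition());
node->setOrientation(physicsComponent->getOrientation());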

Ogre supports attaching objects to the bones of entities; for my player this works well. I have one scene node for the player, it has a mesh, and I can attach guns to the bones of that mesh. It also has another child scene node (the head), to which I attach a camera/ears.
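For reference, that bone attachment looks roughly like this in Ogre 1.x (the bone, mesh, and node names are made up, and playerNode/camera are assumed to already exist):

// Attach a gun mesh to a bone of the player's skeletal mesh.
Ogre::Entity* playerMesh = sceneMgr->createEntity("Player", "player.mesh");
playerNode->attachObject(playerMesh);

Ogre::Entity* gunMesh = sceneMgr->createEntity("Gun", "gun.mesh");
playerMesh->attachObjectToBone("hand.R", gunMesh);  // bone name comes from the skeleton

// A child node for the head, carrying the camera (the "eyes"/"ears"):
Ogre::SceneNode* headNode = playerNode->createChildSceneNode("Head");
headNode->attachObject(camera);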

Interested in Fractals? Check out my App, Fractal Scout, free on the Google Play store.

