Shared data between game entity components

I've been working on a design for a game engine...as of now I plan on making everything in the game world an instance of an Entity class.
class Entity
{
    RigidBody* physics;
    RenderObject* appearance;
};

class RigidBody
{
    vector3 GetPosition();
    void SetPosition(vector3);

    vector3 GetOrientation();     // returns a triple of Euler angles
    void SetOrientation(vector3); // internally translates to a quaternion

    /* also contains velocity, force, torque, mass and other physics info */
};

class RenderObject
{
    virtual void Render() = 0; // pure virtual; concrete appearances draw themselves
};

An entity is instantiated as follows:
Entity e;
e.physics = new SphereBody(0.3, 0.9);                   // (radius, mass)
e.appearance = new SphereAppearance(0.3, RGB(0, 0, 1)); // (radius, color)

The problem with this model is that the RenderObject also needs access to the position/orientation of the entity (to translate and rotate anything it renders). How do I share the information between the two? -Alex

// Inside RenderObject: keep a pointer to the entity's RigidBody and
// read the current position/orientation from it at draw time.
private:
    RigidBody* m_rigidbody;

void RenderObject::SetRigidBody(RigidBody* body)
{
    m_rigidbody = body;
}

void RenderObject::Render()
{
    Translate(m_rigidbody->GetPosition());
    Rotate(m_rigidbody->GetOrientation()); // matches the RigidBody interface above

    DrawMe();
}
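The pointer is wired up once when the entity is created. A minimal sketch, assuming the Entity members from the first post are accessible and that SphereBody/SphereAppearance derive from RigidBody/RenderObject:

// Hypothetical one-time wiring at entity creation:
Entity e;
e.physics = new SphereBody(0.3, 0.9);
e.appearance = new SphereAppearance(0.3, RGB(0, 0, 1));
e.appearance->SetRigidBody(e.physics); // Render() now pulls the pose from the physics body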





Guest Anonymous Poster
I don't think it's a good idea to couple RenderObject with RigidBody; there's no need for it.

Better to let the Entity ask the RigidBody for the data (position, orientation) and either:
a) pass that data to the RenderObject when calling appearance->Render(pos, orient) (a minimal sketch follows below), or, better,
b) store the data (pos, orient) together with the RenderObject appearance in a render-job list inside the renderer, so you can optimize the object rendering, i.e. sort the jobs by material, etc.

In conclusion, entities should manage their own physics and appearance.
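A sketch of option (a), assuming the Entity/RigidBody interfaces from the first post; the Entity::Render function and the two-argument Render signature are illustrative:

void Entity::Render()
{
    // Entity fetches the pose and hands it to the appearance,
    // so RenderObject never sees the RigidBody.
    vector3 pos = physics->GetPosition();
    vector3 orient = physics->GetOrientation();
    appearance->Render(pos, orient);
}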

bozo

"i do not think it's a good idea to couple RenderObject with RigidBody,
there is no need for this"
====
I agree, passing a RigidBody pointer to RenderObject was the only solution I could come up with, but it seemed messy.

"a) give this data to the RenderObject when call appearence->Render(pos, orient)
or better"
====
This is a little ugly but I like it more than the first option.


"b) store the data (pos, orient) and the RenderObject appearance in a renderjobs list of the renderer, so you can optimize the objectrendering, ie. sort them via materials, ....."
=====
I don't fully understand...store the (pos, orient) and RenderObject...in what? A renderjob object? Can you explain more? Also, how would the position and orientation in the renderjobs list stay synchronized with the physics simulation?

-Alex

Guest Anonymous Poster
What I mean is, in each frame to render:

Don't directly render each entity's appearance to the screen by calling the RenderObject render function from the entity render function. Instead, push the data (pos, orient, RenderObject, ...) to the renderer, which stores the data in a list/array/render scene/... (this can be a class/struct named RenderEntity/RenderJob/...). Later the renderer can sort the list, e.g. by material, and do the other needed work, such as calculating shadows and lights for the objects.

So what this means: the renderer collects all the objects, lights, etc. that should be used for this frame from the game/entity code and processes that data later. It sorts the lists, calculates what it needs, and does the real work via the 3D API, i.e. pushes the data (meshes, lights, ...) from its internal lists to the API for the actual draw to the screen.

The position and orientation in the render-job list don't need to stay synchronized with the physics simulation, because you push the data needed for the current frame anew each frame and clear it at the frame start. Quake 3 uses this kind of handling.

(You could also create permanent storage of the RenderEntities/RenderJobs in the renderer and update their dynamic data (pos, orient, ...) each frame, but I like the first method (fire and forget) more.)
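A minimal sketch of this fire-and-forget queue; RenderJob, Renderer::Submit, materialId and the other names are illustrative, not from the thread, and Translate/Rotate are the same engine helpers used earlier:

#include <vector>
#include <algorithm>

// Per-frame render job, copied by value and discarded after the frame.
struct RenderJob
{
    vector3       pos;
    vector3       orient;
    int           materialId;  // sort key for batching
    RenderObject* appearance;  // shared mesh/texture data, not owned
};

class Renderer
{
public:
    void BeginFrame() { m_jobs.clear(); } // fire and forget: wipe last frame's jobs

    void Submit(const RenderJob& job) { m_jobs.push_back(job); }

    void EndFrame()
    {
        // Sort by material so render-state changes are minimized.
        std::sort(m_jobs.begin(), m_jobs.end(),
                  [](const RenderJob& a, const RenderJob& b)
                  { return a.materialId < b.materialId; });

        // The real work happens here, via the 3D API.
        for (const RenderJob& job : m_jobs)
        {
            Translate(job.pos);
            Rotate(job.orient);
            job.appearance->Render();
        }
    }

private:
    std::vector<RenderJob> m_jobs;
};

Each Entity would then call something like Submit(...) from its own render function instead of drawing directly.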

I'm really new to graphics programming, which is probably why I don't understand.

"push the data (pos, orient, RenderObject, ...) to the renderer,
who stores the data in a list/array/renderscene/... (can be a class/struct named RenderEntity/RenderJob/...),"

Can you explain what you mean by "push the data"? A code example maybe?

Do you mean that the Render() call shouldn't live in the contained RenderObject, but should move up to the Entity, which collects the RenderObject's mesh + texture info and the RigidBody's position/orientation, and then... does what?


-Alex

Hmm... it's entirely dependent on what you expect your engine to do, but for a small engine it's not unreasonable to couple the render and physics objects. For my final project I created an "Actor" class that kept track of both a physics object and a renderable. In the Actor->Update function I let the physics sim do its thing; in the Actor->Render function I get the position/orientation from the physics object and go from there. It worked fine for me.

That's exactly what I'm doing too.

If you don't want to couple the RenderObject and RigidBody, then keep two pointers in RenderObject, one to the position and one to the angle, and have them point at the real values inside RigidBody. The RenderObject doesn't need to know who's storing them, as you only need to provide a function like:


void RenderObject::SetOriginAngle(Vector3* origin, Vector3* angle)
{
    m_origin = origin; // points at the RigidBody's real position
    m_angle = angle;   // points at the RigidBody's real orientation
}



Then, in the RenderObject's draw function, use the values stored at the pointers. You only need to set up the pointers once when the entity is created and never again:


void Entity::OnCreate()
{
    // One-time setup: the drawable reads the body's position/angle through these pointers.
    m_renderobject.SetOriginAngle(m_rigidbody.GetOriginPtr(), m_rigidbody.GetAnglePtr());
}
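GetOriginPtr/GetAnglePtr aren't shown in the thread; a minimal sketch of what RigidBody would need to expose, with assumed member names:

class RigidBody
{
public:
    // The addresses stay valid for the body's lifetime, so the
    // RenderObject's cached pointers never dangle while the entity exists.
    Vector3* GetOriginPtr() { return &m_origin; }
    Vector3* GetAnglePtr() { return &m_angle; }

private:
    Vector3 m_origin; // updated in place by the physics sim each step
    Vector3 m_angle;
};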

I've settled on something similar to MENTAL's answer...

RigidBody derives from SpatialObject.

SpatialObject contains only the Get/Set functions for Position and Orientation; RigidBody has the rest of the relevant physics functions. The orientation/position of the RenderObject component is set with Drawable::SetPlacement(SpatialObject).

This way physics and rendering aren't totally coupled... but they're still coupled enough to keep the code simple and convenient to use. A sketch of the hierarchy follows.
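A minimal sketch of the hierarchy described; storing the placement as a pointer inside the drawable is an assumption:

// Base class holding only the spatial state shared between systems.
class SpatialObject
{
public:
    vector3 GetPosition() const { return m_position; }
    void SetPosition(vector3 p) { m_position = p; }
    vector3 GetOrientation() const { return m_orientation; }
    void SetOrientation(vector3 o) { m_orientation = o; }

private:
    vector3 m_position;
    vector3 m_orientation;
};

class RigidBody : public SpatialObject
{
    /* velocity, force, torque, mass and the rest of the physics interface */
};

class RenderObject
{
public:
    // The drawable reads its pose from whatever SpatialObject it was placed at.
    void SetPlacement(SpatialObject& s) { m_placement = &s; }

protected:
    SpatialObject* m_placement;
};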



void Entity::SetBody(RigidBody& body)
{
    m_Body = body;
    m_Drawable.SetPlacement(m_Body); // re-point the drawable at the new body
}

void Entity::SetDrawable(RenderObject drawable)
{
    m_Drawable = drawable;
    m_Drawable.SetPlacement(m_Body); // keep the new drawable's pose in sync
}



Thus if any part of the entity gets swapped out, the position/orientation stays synced with drawing.

The downside here is that I can't have the same RenderObject drawn multiple times on the screen at different positions...

Maybe a better idea would have been a simple scene graph, with Transform and Drawable nodes. I'm not sure, because how would the Entity keep track of what it owns in the scene graph?

Also, about sorting my rendering for performance: this is something I'd really like to do, but unfortunately I simply don't know enough right now; I'd have no chance writing it with my current knowledge of graphics. Will my framerate be totally dead if each object renders itself in no particular order? I'm planning on having 30-40 textured meshes (~20,000 triangles each, though I guess that can be reduced) on screen at once.

-Alex

20,000 triangles × 40 meshes = 800,000 triangles per frame on models alone. At 85 frames per second that's 68,000,000 triangles per second, which is baaaad.

Put it this way: the Unreal 3 technology uses about 3,000-12,000 polygons per character mesh (http://www.unrealtechnology.com/flash/technology.shtml) and can display 5-20 of them at once. That's what they're hoping for when the engine is released, and it isn't coming out for nearly two years. Doom 3 only uses about 3,000 polygons per mesh.

I suggest you use a slightly lower level of detail [wink]

With regards to render states and whatnot, ideally each entity should keep track of its model, texture, position, angle, current frame/animation, etc. However, resources like models and textures aren't loaded by the entity itself; a model manager and a texture manager load the files and pass a handle back to the entity. That way, multiple entities can use the same model/texture.

So each entity should have its own RenderObject, but they should all reference common data. That way you can change an entity's model or texture without causing all other entities of the same type to change as well.
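A minimal sketch of the handle idea; TextureManager, the integer handle, and LoadFromDisk are all illustrative names, not from the thread:

#include <map>
#include <string>

typedef int TextureHandle; // hypothetical opaque handle

class TextureManager
{
public:
    // Returns the same handle for the same file, so entities share the data.
    TextureHandle Load(const std::string& filename)
    {
        std::map<std::string, TextureHandle>::iterator it = m_loaded.find(filename);
        if (it != m_loaded.end())
            return it->second; // already loaded: hand back the shared handle

        TextureHandle handle = LoadFromDisk(filename);
        m_loaded[filename] = handle;
        return handle;
    }

private:
    TextureHandle LoadFromDisk(const std::string&)
    {
        // Placeholder: a real engine would read the file and upload it to the GPU.
        return m_nextHandle++;
    }

    std::map<std::string, TextureHandle> m_loaded;
    TextureHandle m_nextHandle = 0;
};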

As for sorting, I'm currently sorting by texture, and then by distance (closest to farthest). That way each texture only has to be bound once per frame, and I still keep overdraw down.

I also render the map first and the models second, as the map has far fewer polygons than the models, resulting in faster rendering times.

As a side note, if you're going to render transparent objects, you need to render them after everything else has been drawn, and they MUST be sorted back-to-front (so the far ones are drawn first); otherwise they won't blend properly when one transparent object is in front of another. A sketch of both sort orders is below.
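A sketch of the two sort orders just described; the DrawItem fields (textureId, depth) are assumed names:

#include <algorithm>
#include <vector>

struct DrawItem
{
    int   textureId; // primary key for the opaque pass
    float depth;     // distance from the camera
};

void SortForDrawing(std::vector<DrawItem>& opaque, std::vector<DrawItem>& transparent)
{
    // Opaque: group by texture, then closest-to-farthest to limit overdraw.
    std::sort(opaque.begin(), opaque.end(),
              [](const DrawItem& a, const DrawItem& b)
              {
                  if (a.textureId != b.textureId) return a.textureId < b.textureId;
                  return a.depth < b.depth; // closest first
              });

    // Transparent: strictly back-to-front so blending composites correctly.
    std::sort(transparent.begin(), transparent.end(),
              [](const DrawItem& a, const DrawItem& b)
              { return a.depth > b.depth; }); // farthest first
}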

I've ranted enough.

If any of it is of any use then don't forget to rate me (yes I know I shouldn't ask but I'm trying to get off 1019!).
