Graphics independent simulation design

I'm currently developing a simulator which I would like to be graphics independent. Although the simulator itself is output independent, my intent is to add multiple front ends for the simulation that display its state in text, 3d graphics, or optionally no output. More specifically, I plan to write a simple glut front end, and then later on I will move to a full graphics SDK, such as Ogre. Currently, the simulation works with no output and text output, and I'm starting on the glut renderer.

First, a brief synopsis of what the simulator design currently looks like: I have several types of actors in the system, each represented by a separate class. A simulation class contains arrays of each actor type, and represents the overall state of the system. The simulation class contains an update method, which advances the simulation state one "tick", by calling an update method for each actor object in the system. An over-simplified example would look like:

class ActorA
{
    void update(const Simulation& simulation, float timestep);

    Point position;
    Vector velocity;
};

class Simulation
{
    void update(float timestep);

    std::vector<ActorA> a_actors;
    std::vector<ActorB> b_actors;
    ...
};

As you can see, the actor classes contain no attributes related to graphics output. Thus, running the simulation without any graphics output is quite simple--I just instantiate a Simulation object, and call the update method inside a loop.

My problem now is that I can't figure out a "clean" method of rendering the simulation state, without storing graphics-related information (model data) in the actor classes. I can think of a few hacks/workarounds I could use for the glut renderer, but I suspect the problem will be even worse when I use a more intrusive graphics SDK, such as Ogre. Based on what I've seen of the API, it looks like each actor will need to be associated with a scene graph node, model data, and possibly more. Can anyone offer some suggestions as to how I can render the simulation, while still keeping the core simulator classes graphics independent?
I'm not exactly sure if this would work, but here's an idea:

class glutActorA : public ActorA
{
    // glut specific stuff
};

class OgreActorA : public ActorA
{
    // Ogre specific stuff
};

class IrrlichtActorA : public ActorA
{
    // Irrlicht specific stuff
};

// etc...
Adding the graphics information at the outermost inheritance layer is a good way to minimize the relationship with the simulation classes, but I've found two problems with such an approach:

1. Each actor class will require another derived class, whereas this duplication could be prevented if the graphics information was stored in the actors' common base class. However, this might be a non-issue if I use a template class to pair each actor with the graphics information.

2. This means my "Simulation" class will no longer be graphics independent, because instead of storing vectors of ActorA, ActorB, etc, the Simulation will have to store vectors of OgreActorA, OgreActorB, etc. This might just be an indicator that my Simulation class is not well defined or cannot be graphics independent, but I rather liked the layer of abstraction it provided.

Basically, what I need is a way to associate each actor with graphics information, without actually storing the information inside the class. Either that or my design is messed up somewhere earlier in the class hierarchy.
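For concreteness, here's roughly the shape of the association I have in mind; the names here are just placeholders, not code I've written:

// Rough sketch of pairing each actor with renderer-side data while keeping
// the actor classes themselves graphics-free. "GlutModel" is a placeholder.
#include <cstddef>
#include <vector>

template <typename ModelType>
struct ActorGraphics
{
    std::size_t actorIndex;   // index into the Simulation's vector for this actor type
    ModelType   model;        // renderer-specific data, stored outside the actor
};

// e.g. a GLUT front end might keep:
// std::vector<ActorGraphics<GlutModel> > aActorGraphics;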
I think you can do something like this:

ActorA* actor = new ActorA();
ActorA* ogreActor = new OgreActorA();
ActorA* glutActor = new glutActorA();

So you could still store ActorA pointers in your simulation class.

You would have methods like:
createActorsAsActors();
createActorsAsOgreActors();
createActorsAsGlutActors();
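As a rough sketch (assuming the simulation holds base-class pointers so the derived types fit):

// Rough sketch only: one creation method per front end. The simulation holds
// ActorA pointers, so it never needs to know which derived type it was given.
// (It would also need to delete them somewhere; omitted here for brevity.)
#include <cstddef>
#include <vector>

class Simulation
{
public:
    void createActorsAsActors(std::size_t count)
    {
        for (std::size_t i = 0; i < count; ++i)
            a_actors.push_back(new ActorA());
    }

    void createActorsAsOgreActors(std::size_t count)
    {
        for (std::size_t i = 0; i < count; ++i)
            a_actors.push_back(new OgreActorA());
    }

private:
    std::vector<ActorA*> a_actors;
};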
Here's what an Actor class in Ogre could look like, in case it helps you figure something out:

class Actor
{
protected:
	Ogre::String name;
	Ogre::Entity*    entity;
	Ogre::SceneNode* sceneNode;

public:
	Ogre::Entity* getEntity() { return entity; }
	Ogre::SceneNode* getSceneNode() { return sceneNode; }

	Actor();
	~Actor();

	virtual bool create(Ogre::String actorName, Ogre::String meshName);
	virtual void destroy();
};


//----
Actor::Actor() : name("Actor0"), entity(NULL), sceneNode(NULL)
{
}

Actor::~Actor()
{
}

bool Actor::create(Ogre::String actorName, Ogre::String meshName)
{
	if (entity) return false;
	this->name = actorName;
	entity = Game::sceneMgr->createEntity(name, meshName);
	sceneNode = Game::sceneMgr->getRootSceneNode()->createChildSceneNode(name + "_Node");
	sceneNode->attachObject(entity);
	sceneNode->setPosition(Ogre::Vector3(0.0f, 0.0f, 0.0f));
	return true;
}

void Actor::destroy()
{
	if (entity)
	{
		Game::sceneMgr->destroyEntity(entity);
		entity = NULL;
	}
}
Probably the best approach for you would be to carefully design an interface for the actors that gives outside entities all the information they need to render the objects on their own. The kind of information exposed by each actor depends on what exactly you want to show, but it probably includes things like position, transform, maybe velocity, and whatever type and state information is needed. The key thing is that Actors define the language that is used to describe them, rather than being dependent on the language of a particular graphics SDK or whatever.

How this information is used is entirely up to the given system. I would recommend exposing strings as type information, as they are easy to work with. For graphics, you would map a type string to a set of models and animations, and use the exposed state information to trigger animations and such. I'm curious how your text output currently works.
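To be concrete, the string-to-model mapping I mean could be as simple as this; "Model" stands in for whatever a given front end actually uses (a GLUT display list, an Ogre mesh, and so on):

// Sketch of the string-keyed lookup on the graphics side; the simulation
// only ever hands out type strings and state, never Model objects.
#include <cstddef>
#include <map>
#include <string>

struct Model; // whatever the front end actually renders

class ModelLibrary
{
public:
    void Register(const std::string& actorType, Model* model)
    {
        models[actorType] = model;
    }

    Model* Lookup(const std::string& actorType) const
    {
        std::map<std::string, Model*>::const_iterator it = models.find(actorType);
        return it != models.end() ? it->second : NULL;
    }

private:
    std::map<std::string, Model*> models; // e.g. "ActorA" -> its model
};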

Edit: Inheritance is almost exactly the wrong solution here. Inheritance is one of the strongest possible couplings between classes, and it's not remotely necessary. Imagine what would happen if you had to change the members of ActorA. Not only would ActorA.cpp need to be recompiled, but so would OgreActorA.cpp, GlutActorA.cpp, TextActorA.cpp, and any other subclasses. Not only would they need to be recompiled, but they would likely have to be changed, because they would be directly using the members inherited from ActorA. That's not even counting inter-actor dependencies at all levels of the inheritance hierarchy, which are almost bound to be there.

Quote: Original post by theOcelot
Probably the best approach for you would be to carefully design an interface for the actors that gives outside entities all the information they need to render the objects on their own. The kind of information exposed by each actor depends on what exactly you want to show, but it probably includes things like position, transform, maybe velocity, and whatever type and state information is needed. The key thing is that Actors define the language that is used to describe them, rather than being dependent on the language of a particular graphics SDK or whatever.

How this information is used is entirely up to the given system. I would recommend exposing strings as type information, as they are easy to work with. For graphics, you would map a type string to a set of models and animations, and use the exposed state information to trigger animations and such. I'm curious how your text output currently works.

As you suggested, each actor's public interface exposes things like the transformation/position, velocity, and a variety of things specific to each actor type. The text output currently functions by intermittently printing out these attributes, as well as success/failure information at the end of the trial.
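In simplified form, it's essentially along these lines (not the actual code; this assumes Point and Vector expose x and y):

// Simplified sketch of the text output: read the attributes the simulation
// already exposes and print them every so many ticks.
#include <cstddef>
#include <cstdio>

void printState(const Simulation& simulation, int tick)
{
    if (tick % 100 != 0)  // only print intermittently
        return;

    for (std::size_t i = 0; i < simulation.a_actors.size(); ++i)
    {
        const ActorA& actor = simulation.a_actors[i];
        std::printf("ActorA %u: pos=(%f, %f) vel=(%f, %f)\n",
                    static_cast<unsigned>(i),
                    actor.position.x, actor.position.y,
                    actor.velocity.x, actor.velocity.y);
    }
}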

However, in order to render the state of the simulation, rather than just displaying actors as points, I would like each one to be associated with a particular model (or scene node for Ogre). One idea that came to mind is that I could simply store a parallel array of models outside the simulation class, for example:

class Simulation
{
    const ActorA& getActorA(int i) const { return a_actors[i]; }
    const ActorB& getActorB(int i) const { return b_actors[i]; }
    ...
    void update(float timestep);

    std::vector<ActorA> a_actors;
    std::vector<ActorB> b_actors;
    ...
};

class GlutSimulation
{
    void RenderSimulation()
    {
        for each ActorA:
            Render(simulation.getActorA(i).getTransform(), a_models[i]);
        ...
    }

    Simulation simulation;
    std::vector<Model*> a_models;    // one model pointer for each ActorA
    std::vector<Model*> b_models;    // etc
    ...
};


The main problem with this is that the Simulation class is no longer free to create or remove actors on its own; otherwise the parallel model arrays won't stay in sync. As a result, the Simulation class becomes sort of like an STL container, where the order of elements is fixed unless externally modified. If I want to add or remove actors, the Simulation class needs to provide addActor and removeActor methods, which would then be called by the GlutSimulation.

On second thought, this doesn't sound like such a bad idea. Although, if some sort of event during the simulation update requires an actor to be created or removed, I'm not really sure how I should flag the event for the GlutSimulation to handle it.
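One possibility I'm considering, very roughly (all names here are placeholders):

// Very rough sketch: the simulation records creations/removals during
// update(), and the front end drains them afterwards to keep its parallel
// model arrays in sync.
#include <cstddef>
#include <vector>

struct ActorEvent
{
    enum Type { Created, Removed };
    Type type;
    std::size_t index;   // which ActorA the event refers to
};

class Simulation
{
public:
    void update(float timestep);  // pushes ActorEvents as a side effect

    // Called by the front end once per tick; returns and clears pending events.
    std::vector<ActorEvent> takePendingEvents()
    {
        std::vector<ActorEvent> events;
        events.swap(pendingEvents);
        return events;
    }

private:
    std::vector<ActorEvent> pendingEvents;
};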
One thing you could do is to have Simulation expose a single list of all Actors, and have the other classes iterate the whole thing every frame, drawing each Actor based on whatever data determines how it should look in its current state. Roughly:
void GraphicalOutput::Render()
{
    for(actor in simulation.all_actors)
    {
        //get a reference to the right model
        //from internal actor-->model information
        current_model = GetModel(actor.type)

        //put a copy of the model in the scene
        //I don't know enough about 3D to write convincing pseudo-code :)
        DrawModel(current_model, actor.position, actor.transform)

        //before this point, you might grab information from the actor
        //and use it to pick an animation pose, and draw that
    }
}


Basically, have a list of all the different kinds of models used and which Actor types need them, and pick which one to use dynamically on a per-actor basis. This alone shouldn't be too expensive, and it would work in a system that completely re-creates the scene every frame (I'm pretty sure that's what happens underneath at some level), but it doesn't cooperate with something like Ogre (apparently), which wants you to keep a given object around from frame to frame. You could have some signaling method for Actors to tell whoever is interested that they're about to go away, but that could get messy.
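If you did go that route, the crude version is just a listener list on the actor (the listener interface here is an invented name, purely for illustration):

// Crude sketch of the "about to go away" signal; IActorListener and its
// callback are made-up, not anything from your code.
#include <cstddef>
#include <vector>

class ActorA; // the simulation's actor type

class IActorListener
{
public:
    virtual ~IActorListener() {}
    virtual void onActorDestroyed(const ActorA& actor) = 0;
};

class ActorA
{
public:
    void addListener(IActorListener* listener) { listeners.push_back(listener); }

    ~ActorA()
    {
        // tell every interested party (e.g. the renderer) that this actor is going away
        for (std::size_t i = 0; i < listeners.size(); ++i)
            listeners[i]->onActorDestroyed(*this);
    }

private:
    std::vector<IActorListener*> listeners;
};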

I'll have more to say tomorrow. The approach I personally use in my game might be more appropriate after all.
This is how I have implemented graphics independence:

renderable.h
class renderer;

class renderable {
public:
	virtual ~renderable() {}
	virtual void render(renderer*) const {}
};


renderer.h
#include <list>
#include <utility>
// (plus the project's own headers for window, renderable, matrix44f, texture and vertex)

class renderer {
public:
	// Constructs the renderer given the window to render to.
	renderer(window*);

	// Virtual destructor.
	virtual ~renderer() {}

	// Renders all of the renderables in the render queue and clears
	// the queue for the next frame.
	void render();

	// Pushes the given renderable on the render queue to be rendered at the
	// given transformation.
	void push_renderable(const renderable*, const matrix44f*);

	// Clears the color and depth buffers.
	virtual void clear() = 0;

	// Texture methods.
	virtual const unsigned upload_texture(const texture*) = 0;

	// Matrix methods.
	virtual void push_matrix() = 0;
	virtual void mult_matrix(const matrix44f&) = 0;
	virtual void pop_matrix() = 0;

	// Vertex buffer methods.
	virtual void begin() = 0;
	virtual void push_vertex(const vertex&) = 0;
	virtual void end() = 0;

protected:
	window* target;

private:
	std::list<std::pair<const renderable*, const matrix44f*>> render_queue;
};


renderer.cpp
#include <algorithm>   // for std::for_each
// (plus renderer.h and the window header)

renderer::renderer(window* target) : target(target) {}

void renderer::render() {
	if (!target)
		return;

	clear();

	auto draw_func = [&](std::pair<const renderable*, const matrix44f*>& r) {
		this->push_matrix();
		this->mult_matrix(*r.second);
		r.first->render(this);
		this->pop_matrix();
	};
	std::for_each(render_queue.begin(), render_queue.end(), draw_func);
	render_queue.clear();

	target->swap_buffers();
}

void renderer::push_renderable(const renderable* object, const matrix44f* transformation) {
	if (!object || !transformation)
		return;
	render_queue.push_back(std::pair<const renderable*, const matrix44f*>(object, transformation));
}


//example renderable
class blah : public renderable {
public:
	void render(renderer* rndr) const {   // const, to match the base class signature
		rndr->begin();
		// push vertex data, bind texture, etc
		rndr->end();
	}
};
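A frame then boils down to queueing renderables and calling render(); hypothetical usage, where the renderer argument would be some concrete subclass that implements the pure virtual methods:

// Hypothetical usage of the classes above; "rndr" would be a concrete
// subclass of renderer (an OpenGL one, say) constructed with a window.
void run_frame(renderer& rndr, const blah& object, const matrix44f& transform)
{
	rndr.push_renderable(&object, &transform);  // queue it with its transform
	rndr.render();                              // clear, draw the queue, swap buffers
}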


Not the best solution, but it is working for me. I have also added materials, so the renderer can sort by material to minimize state changes.
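The material sort itself can be as simple as ordering the queue before the draw loop; a sketch (get_material_id() is an assumed accessor, not part of the code above):

// Sketch of sorting the render queue by material so consecutive draws share
// state; get_material_id() is an assumed accessor on renderable.
#include <utility>

struct by_material
{
	bool operator()(const std::pair<const renderable*, const matrix44f*>& a,
	                const std::pair<const renderable*, const matrix44f*>& b) const
	{
		return a.first->get_material_id() < b.first->get_material_id();
	}
};

// inside renderer::render(), before the draw loop:
// render_queue.sort(by_material());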
c_olin's solution is close to what I do. My names are different, but his terminology is more appropriate here, so I'll borrow it:
//abstract interfaces are all that's seen by
//the "graphics independent" portion of the program
//owned by Actors and renderers
class IRenderable
{
    public:
    virtual ~IRenderable() {}
    virtual void SetTransform(const Transform&) = 0;
    virtual void SetPosition(const Point&) = 0;
    //all the other stuff needed
    //these methods are called by the Actors
    //to update their representation
};

class IRenderer
{
    public:
    virtual ~IRenderer() {}
    //called by Actors to get an IRenderable
    //id_string is a map key to all the info (models, etc) required to render the actor
    //I would actually recommend returning boost::shared_ptr
    virtual IRenderable* CreateRenderable(std::string id_string) = 0;
    //information like where to render is passed into implementation constructors
    virtual void Render() = 0;
    virtual void Update() = 0; //housekeeping
};
//just the relevant portion of an Actor class
class Actor
{
    IRenderable* renderable;
    public:
    Actor(IRenderer* renderer) : renderable(renderer->CreateRenderable("my_id_string")) {}

    void Update()
    {
        //do internal update logic
        //then call methods on renderable to update rendering
        renderable->SetTransform(my_transform);
        renderable->SetPosition(my_position);
    }
};

Then for each front-end, you have a set of implementations for IRenderable and IRenderer. For example:

class GlutRenderer : public IRenderer
{
    some_container<GlutRenderable*> renderables;
    public:
    GlutRenderer(...)
    {
        //figure out where to render
    }

    //C++ will figure out we mean the same function as in IRenderer
    GlutRenderable* CreateRenderable(std::string id_string)
    {
        GlutRenderable* new_renderable = new GlutRenderable(...);
        //load the right model into the renderable
        //based on the string
        //now add it in
        renderables.push_back(new_renderable);
        return new_renderable;
    }

    void Render()
    {
        //render all the renderables
    }

    void Update()
    {
        //here is probably where you want to remove dead renderables
    }
};


You probably get the idea. The main problem is that if you have to change the main interfaces, everything has to change. The efficiency of the scheme rests on the stability of the interfaces. But you probably have a better idea of what information needs to be passed than I did when I started. If you start off with only one or two implementations, it should work fine.

I want to re-iterate my recommendation to use boost::shared_ptr. It's much easier to collect obsolete renderables that way. What I do now is to keep shared_ptrs to the renderables in both Actor and Renderer. Then when I update, I go through the collection looking for unique shared_ptrs and remove them, which causes their contents to be destructed. It's not the most elegant, but it still works great. This can be used for either of the approaches I've described.
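Roughly, that cleanup pass looks like this (the container and names are illustrative, assuming the renderer keeps its renderables in a vector of shared_ptrs):

// Sketch of the cleanup pass: a renderable whose only remaining owner is the
// renderer (its Actor has released it) gets dropped here and destructed.
#include <boost/shared_ptr.hpp>
#include <cstddef>
#include <vector>

class GlutRenderable;

class GlutRenderer /* : public IRenderer */
{
public:
    void Update()
    {
        std::vector<boost::shared_ptr<GlutRenderable> > alive;
        alive.reserve(renderables.size());

        for (std::size_t i = 0; i < renderables.size(); ++i)
        {
            // unique() means nobody but the renderer still holds this one
            if (!renderables[i].unique())
                alive.push_back(renderables[i]);
        }

        renderables.swap(alive);
    }

private:
    std::vector<boost::shared_ptr<GlutRenderable> > renderables;
};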

Alternatively, you could add a KillMe() flag to IRenderable, which is checked by the renderer as it's looking for renderables to remove, but that could be dangerous, as there's nothing to prevent an actor from hanging onto and continuing to use its renderable after it sets the flag, which is just asking for a crash. shared_ptr is really what's called for here.
