# Time-based animation


## Recommended Posts

In order to get a model animating at the right speed, I need to measure the time elapsed since the last rendered frame so I can interpolate by the right amount. I just added pre-render and post-render functions to my ISceneGraphNode object, and a timer to the scene graph:
```cpp
/**
 * All scene graph nodes will be derived from this class.
 */
class ISceneGraphNode : public IMBase
{
protected:

	util::list<ISceneGraphNode*> m_childList;	///< Holds all the child nodes of this node
	scene::ISceneManager* m_sceneManager;		///< Pointer to the scene manager, for rendering and other tasks

public:

	/**
	 * Constructor.
	 * \param smgr A pointer to the scene manager
	 */
	ISceneGraphNode(scene::ISceneManager* smgr)
		: m_sceneManager(smgr)
	{
		// We are not calling smgr->get() because we want the graph to be
		// destroyed when smgr->drop() is called.
	}

	/**
	 * Copy constructor.
	 * \param i A const reference to an object derived from ISceneGraphNode
	 */
	ISceneGraphNode(const ISceneGraphNode& i)
		: m_sceneManager(i.m_sceneManager)
	{
		// We are not calling smgr->get() because we want the graph to be
		// destroyed when smgr->drop() is called.
	}

	/**
	 * Destructor.
	 */
	virtual ~ISceneGraphNode()
	{
		util::list<ISceneGraphNode*>::iterator i = m_childList.begin();

		for( ; i != m_childList.end(); ++i)
			(*i)->drop();	// reduce the ref count of each child node

		m_sceneManager->drop();
	}

	/**
	 * Add a node to the child list.
	 * \param n A pointer to an ISceneGraphNode-derived object
	 */
	virtual void addChild(ISceneGraphNode* n)
	{
		m_childList.push_back( n );	// add n to the list
	}

	/**
	 * Render all the child nodes.
	 */
	virtual void render()
	{
		util::list<ISceneGraphNode*>::iterator i = m_childList.begin();

		for( ; i != m_childList.end(); ++i)
			(*i)->render();	// render each child node
	}

	/**
	 * Things to do before rendering, like updating.
	 * \param delta_time The time (in seconds) since the last rendered frame
	 */
	virtual void preRender(float delta_time)
	{
		util::list<ISceneGraphNode*>::iterator i = m_childList.begin();

		for( ; i != m_childList.end(); ++i)
			(*i)->preRender(delta_time);	// pre-render each child node
	}

	/**
	 * Things to do after rendering.
	 * \param delta_time The time (in seconds) since the last rendered frame
	 */
	virtual void postRender(float delta_time)
	{
		util::list<ISceneGraphNode*>::iterator i = m_childList.begin();

		for( ; i != m_childList.end(); ++i)
			(*i)->postRender(delta_time);	// post-render each child node
	}
};
```


```cpp
/**
 * Render all objects in the scene graph.
 */
void CSceneGraph::render()
{
	util::list<ISceneGraphNode*>::iterator i = m_nodeList.begin();

	for( ; i != m_nodeList.end(); i++){
		(*i)->preRender( m_hiResTimer.getElapsedSeconds() );
		(*i)->render();
		(*i)->postRender( m_hiResTimer.getElapsedSeconds() );
	}
}
```


This was in order to animate an md2 model properly:
```cpp
/**
 * Update the animation.
 * \param delta_time The time since the last update.
 */
void update(float delta_time)
{
	m_modelTime += delta_time;
	if( m_modelTime >= 1.0f / MD2AnimationList[m_animState].fps )
		m_modelTime = 0.0f;

	updateInterpolation( m_modelTime / ( 1.0f / MD2AnimationList[m_animState].fps ) );
}

/**
 * Pre-render things, like updating the animation.
 */
void preRender(float delta_time)
{
	update(delta_time);	// advance the animation

	ISceneGraphNode::preRender(delta_time);
}

/**
 * Render the model using indexed vertex arrays.
 */
void render()
{
	if(m_texture)
		m_texture->apply();	// apply the texture

	// draw the indexed triangle list
	m_sceneManager->getVideoDriver()->drawIndexedTriangleList(
		m_displayVertexList.getPtr(), m_displayVertexList.getSize(),
		m_displayIndexList.getPtr(), m_displayIndexList.getSize() );

	ISceneGraphNode::render();	// render all the children
}
```


How does this all look? Or am I way off on the way it should be?
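
As an aside, the thread never shows how `m_hiResTimer` works. Here is a minimal sketch of what a `getElapsedSeconds()`-style timer could look like with `std::chrono`; the class and method names just mirror the calls above and are assumptions, not the engine's actual code:

```cpp
#include <chrono>

// Hypothetical stand-in for the m_hiResTimer used in CSceneGraph::render():
// each call returns the seconds elapsed since the previous call.
class HiResTimer
{
	std::chrono::steady_clock::time_point m_last;

public:
	HiResTimer() : m_last(std::chrono::steady_clock::now()) {}

	// Seconds since the last call (or since construction on the first call).
	float getElapsedSeconds()
	{
		const auto now = std::chrono::steady_clock::now();
		const std::chrono::duration<float> delta = now - m_last;
		m_last = now;
		return delta.count();
	}
};
```

Note that with this interface, calling `getElapsedSeconds()` once for `preRender()` and again for `postRender()`, as the loop above does, hands each call a different delta; caching a single delta per frame avoids that.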

---
Also, to restrict the rendering to a specific fps, I've just written this:

```cpp
/**
 * Render all objects in the scene graph.
 */
void CSceneGraph::render()
{
	util::list<ISceneGraphNode*>::iterator i = m_nodeList.begin();

	// the time between frames to ensure that it doesn't run above 100 fps
	float time_between_frames = 1.0f / 100.0f;
	float time = 0.0f;

	while( (time += m_hiResTimer.getElapsedSeconds()) < time_between_frames );

	for( ; i != m_nodeList.end(); i++){
		(*i)->preRender( m_hiResTimer.getElapsedSeconds() );
		(*i)->render();
		(*i)->postRender( m_hiResTimer.getElapsedSeconds() );
	}
}
```

It could theoretically be right, but with the divisor set to 100 (intended to cap it at 100 fps), it runs at 80. Then again, that could be because my md2 model isn't animating properly.

---
There are two ways to limit it to a certain fps. Just wasting time in an empty loop is the one I consider pointless. You could use the idea in a somewhat better way if you do something like:

`if (time_passed > physics_update_delay) do_physics();` (or AI, or rendering)

Unless you set all the delays the same you can offset the different things a bit, do less important stuff less frequently and most of all only draw a new frame when the scene was actually updated.

Of course you will still get in trouble when the machine can't keep up. If you use fixed time steps to make calculations easier and more consistent, pending updates can pile up and the app gets further and further behind unless you start dropping updates. If you instead use the actual time that has passed, you're safe from that (but end up with uglier math).

Advantage: non-linear movement can be broken down into small, fixed linear steps. Results will always be the same on all machines that are fast enough; achieving the same with arbitrary time steps requires more complex math. It's also much more comfortable for input recording or networking (ask the poor guys who worked on X-Wing vs. TIE Fighter).

Disadvantage: slower machines can't keep up and require switching to fewer but larger time steps, creating different results than on fast machines. Or they run slower for a while whenever they are drowned in accumulated updates, and it's hard to predict whether the complexity of the updates will ever drop, meaning the app might never catch up again. Basically the result will be that an update is done every single frame (or tick, loop, whatever might be a good name), so it will look like the game is running in slow motion.
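
The fixed-time-step approach described above is often written as an accumulator loop. A rough sketch under those assumptions (`updateGame` and `renderScene` are hypothetical placeholders, not functions from the engine in this thread):

```cpp
#include <algorithm>

// Hypothetical placeholders for the engine's update and render work.
static int g_updates = 0;
static int g_renders = 0;
void updateGame(float /*dt*/) { ++g_updates; }
void renderScene()            { ++g_renders; }

const float DT        = 1.0f / 100.0f;	// fixed simulation step: 100 Hz
const float MAX_ACCUM = 0.25f;		// cap to avoid drowning in catch-up updates

float accumulator = 0.0f;

// One iteration of the outer loop: consume the elapsed wall-clock time
// in fixed DT slices, then render whatever state we ended up with.
void frame(float elapsed_seconds)
{
	accumulator += std::min(elapsed_seconds, MAX_ACCUM);

	while (accumulator >= DT)
	{
		updateGame(DT);		// physics/AI always advance by a constant step
		accumulator -= DT;
	}

	renderScene();
}
```

The `MAX_ACCUM` clamp is the "start dropping updates" escape hatch mentioned above: a machine that falls too far behind simply loses simulation time instead of spiralling into ever more catch-up work.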

---
Changed it a little.

```cpp
/**
 * Render all objects in the scene graph.
 */
void CSceneGraph::render()
{
	util::list<ISceneGraphNode*>::iterator i = m_nodeList.begin();

	// the time between frames to ensure that it doesn't run above 100 fps
	float time_between_frames = 1.0f / 290.0f;
	static float time = 0.0f;

	while( (time += m_hiResTimer.getElapsedSeconds()) < time_between_frames );

	for( ; i != m_nodeList.end(); i++){
		// use these three to get the constant frame rate going
		(*i)->preRender( time );
		(*i)->render();
		(*i)->postRender( time );
	}

	// reset time to 0.0f because we have just drawn a frame
	time = 0.0f;
}
```

Here's my question for the moment (keeping in mind the previous two still hold): I'd like to get rid of the while loop, since this is intended to be a graphics engine. But if I just use an if statement and return when it isn't ready to render yet, control goes back to the main loop, which resets the modelview matrix, and I get so much flickering that I can't see the model at all.

How should I fix this? Should I not be resetting the modelview matrix every iteration of the loop?

---
Are you really sure that's even the problem? If you don't render anything, then it shouldn't matter if you change the matrix, wipe it, reset it, sneeze on it or lock it up with the neighbour's pitbull, as long as it has the right value before you draw anything again. Now, clearing the color buffer on every iteration and only drawing every once in a while WOULD be a really bad idea (unless you at least don't swap the buffers every time... then it would just be like me: useless, but harmless).

Another small thing depends on what you are doing in your pre- and post-render functions. In a perfect world, all preparation would happen in one big block right after sending all the rendering calls for the last frame, to let CPU and GPU work in parallel as much as possible. That is: collecting all visible models should be one block, updating ALL animations of visible objects another, and rendering ALL visible objects a single block. Interleaving all that can easily mean you send a little bit of work to the GPU and it's long done before you get around to sending the next piece, so it spends a lot of time doing nothing.

That's more optimizing than making it work, but it's the kind that requires rearranging quite a bit.

About that 80-instead-of-100 issue, you could try a few things. What are you using to get the time? Did you try double instead of float (just in case), or `getSomeTime() - lastTime > delay`, to avoid potential weirdness with really small numbers?

---
Well, am I really interleaving it? I mean, I only do one call to CSceneGraph::render, but each ISceneGraphNode::preRender calls preRender on its children, etc, and the same for postRender.

Or should I have separate calls to CSceneGraph::preRender and CSceneGraph::postRender?

---
Now that you mention it, I must have lost my orientation somewhere and missed that the scene graph is a container and not just the root node.

---
Try this formula (I got it on this forum but can't remember who posted it):

```pascal
// livefor is the time length of the current animation
// framecount is the number of frames in the animation
// age is current time - the time when the animation started (age := runningtime - born)
calc := livefor div framecount;
calc := age div calc;
spritewanted := calc;
```
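
For anyone not reading Pascal, the same formula in C++ might look like this; variable names follow the snippet above, while the integer time units and the `% framecount` wrap for looping animations are my additions:

```cpp
// livefor:    total length of the animation (e.g. in milliseconds)
// framecount: number of frames in the animation
// age:        current time minus the time the animation started
// Returns the index of the frame that should be shown now.
int frameForAge(int livefor, int framecount, int age)
{
	const int frame_length = livefor / framecount;	// duration of one frame
	return (age / frame_length) % framecount;	// wrap so looping animations keep cycling
}
```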

---
> **Original post by Trienco**
> Now that you mention it. I must have lost my orientation somewhere and missed that the scene graph is a container and not just the root node.

It's an easy mistake to make considering I only gave you a single function and nothing else from the class. [smile]

---
Previously, the md2 animation was going too fast when I was running this code:

```cpp
/**
 * Update the animation.
 * \param delta_time The time since the last update.
 */
void update(float delta_time)
{
	m_modelTime += delta_time;
	if( m_modelTime >= 1.0f / MD2AnimationList[m_animState].fps )
		m_modelTime = 0.0f;

	updateInterpolation( m_modelTime / ( 1.0f / MD2AnimationList[m_animState].fps ) );
}
```

and that was before I had the fps restriction. Was the reason it wasn't working that I wasn't restricting the fps? It looks okay now, but I'm just not sure.
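
One possible culprit in that `update()`: resetting `m_modelTime` to zero discards the overshoot past the frame boundary, so the effective animation speed depends on how often `update()` is called. A hedged sketch of a rate-independent version using `fmod` (the names mirror the snippet above, but passing `fps` in directly is my simplification):

```cpp
#include <cmath>

// Advance the animation clock and return the interpolation factor in
// [0, 1) within the current frame. Wrapping with fmod keeps the time
// left over past the frame boundary instead of throwing it away, so
// the animation speed no longer depends on how often this is called.
float updateAnimation(float& model_time, float delta_time, float fps)
{
	const float frame_duration = 1.0f / fps;
	model_time = std::fmod(model_time + delta_time, frame_duration);
	return model_time / frame_duration;
}
```

With this, capping the render rate and animating correctly become independent concerns.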
