Smooth animation needed

Posted by schragnasher:
This is my current game loop. I tried to recreate the Gaffer timestep idea, but I think I'm missing something, as my movement still has choppiness issues on some computers.
void cGame::Run()
{
    double accumulator = 0;
    double interpolation = 0;
    double dt = 1; // fixed timestep: 1 ms per update

    while (m_IsRunning && m_Window.IsOpen())
    {
        accumulator += m_Timer.GetDeltaTime();

        // Consume the accumulated time in fixed-size steps.
        while (accumulator >= dt)
        {
            std::vector<cGameState*>::iterator it;
            for (it = m_States.begin(); it != m_States.end(); ++it)
            {
                (*it)->Update();
            }
            accumulator -= dt;
        }

        // Fraction of a step left over; used to blend the rendered position.
        interpolation = accumulator / dt;

        m_Window.ClearScreen();
        std::vector<cGameState*>::iterator it;
        for (it = m_States.begin(); it != m_States.end(); ++it)
        {
            (*it)->Draw(interpolation);
        }
        m_Window.SwapBuffer();
    }
}

I thought I covered everything, but the Gaffer article seems to pass dt as well as some other t value to its update function, and I can't seem to understand what that's for. m_Timer.GetDeltaTime() returns milliseconds, in case that matters. I can post more code if it's needed. Thanks!

It's hard to say without seeing the rest of the code (i.e. m_Timer), but generally you want to follow a process like this:

1.) Calculate the time delta of the current frame from the last one.

2.) Use this time delta to drive your animations. Give each frame a length, and subtract the time delta from that length; when it reaches zero or less, move to the next frame (see the sketch after this list). I suppose it's possible that if the length of each frame is too short, it could cause it to "hang" on a frame, although I've never encountered that problem.

3.) Update the "time since app started" variable that you're using to calculate time deltas.
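A minimal sketch of step 2, using a hypothetical Animation struct; all of these names are mine, for illustration only:

struct Animation
{
    int    frameCount;
    double frameLength;   // how long each frame is shown, in ms
    int    currentFrame;
    double timeRemaining; // time left on the current frame

    void Update(double deltaMs)
    {
        timeRemaining -= deltaMs;
        while (timeRemaining <= 0.0) // loop handles deltas longer than one frame
        {
            currentFrame  = (currentFrame + 1) % frameCount; // wrap around
            timeRemaining += frameLength; // carry the overshoot forward
        }
    }
};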

One thing I noticed in your code is that you're calling Update without passing anything about the time, which seems odd. I see you pass something for Draw, but it would seem to me that you'd do the animation's frame logic in an Update call.

What I've done for my game, which appears to work well, is to skip passing time to the update function. Instead, each physics update is quantized (i.e. it represents some fixed amount of time), and I do a physics update any time my accumulator gains enough time. For example, say I target 20 physics updates per second: whenever my accumulator holds more than 1/20th of a second, I do a physics update and subtract 1/20th from the accumulator. I spend the rest of the time drawing and sleeping.

The draw code takes as an argument the amount of time since the last physics update, expressed as a ratio of 1/20th (for example, if it's been 1/40th of a second since the last physics update, the ratio would be 0.5). All of my game objects store their last two positions and use the ratio to interpolate between them, which creates the desired smooth motion.
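A rough sketch of that loop, assuming 20 updates per second; UpdatePhysics, DrawScene, timer, and running are hypothetical names, not code from this thread:

#include <windows.h> // Sleep()

const double TICK_MS = 1000.0 / 20.0; // 20 physics updates per second

double accumulator = 0.0;
while (running)
{
    accumulator += timer.GetDeltaTime(); // milliseconds since last iteration

    // Run as many fixed physics steps as the accumulated time allows.
    while (accumulator >= TICK_MS)
    {
        UpdatePhysics();      // each object saves its previous position here
        accumulator -= TICK_MS;
    }

    // Ratio in [0,1): how far we are between the last two physics states.
    DrawScene(accumulator / TICK_MS);

    Sleep(1); // hand the leftover time back to the OS
}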

Make sense?

Quote:
but I think I'm missing something, as my movement still has choppiness issues on some computers.

You are aware that you run your update once every millisecond? I'm guessing some/most computers can't keep up with that.

Quote:
I spend the rest of the time drawing and sleeping

Doesn't that mean that, if you are unlucky, your input may take up to 1/20th of a second before the game recognizes it? Personally I update the physics with a fixed timestep and the UI with a variable one, along the lines of the sketch below.
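A minimal sketch of that split, with hypothetical physics, ui, and Render names standing in for whatever the original example showed:

const double PHYSICS_STEP_MS = 1000.0 / 50.0; // fixed 50 Hz physics step

double accumulator = 0.0;
while (running)
{
    double frameDelta = timer.GetDeltaTime(); // variable, in milliseconds
    accumulator += frameDelta;

    // Physics advances only in fixed-size steps...
    while (accumulator >= PHYSICS_STEP_MS)
    {
        physics.Update(PHYSICS_STEP_MS);
        accumulator -= PHYSICS_STEP_MS;
    }

    // ...while input/UI is polled every frame with a variable delta,
    // so a key press never waits for the next physics tick to be noticed.
    ui.Update(frameDelta);

    Render(accumulator / PHYSICS_STEP_MS);
}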

Quote:
Original post by schragnasher
I tried to recreate the Gaffer timestep idea, but I think I'm missing something, as my movement still has choppiness issues on some computers.


This is a nice article on game loops that addresses the same problem:

http://dewitters.koonsolo.com/gameloop.html

My timer code:

#include <windows.h> // timeGetTime(); link against winmm.lib

class cTimer
{
public:
    cTimer() {}
    ~cTimer() {}

    void StartTimer()
    {
        m_OldTime = timeGetTime();
    }

    // Returns the elapsed time since the previous call, in milliseconds.
    double GetDeltaTime()
    {
        double NewTime = timeGetTime();
        m_DeltaTime = NewTime - m_OldTime;
        m_OldTime = NewTime;
        return m_DeltaTime;
    }

private:
    double m_OldTime;
    double m_DeltaTime;
};



Posting it in case it helps; I'm still trying to get this stuff to work.

http://dewitters.koonsolo.com/gameloop.html

This is helping. I seem to have an odd anomaly with my laser beams when they loop around the screen, but the animation seems smooth as butter. Thanks.

Now I've got this useless timer, though, lol, since the deWitters loop just grabs the current time rather than a delta time.
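For reference, the deWitters loop works from absolute time rather than deltas, roughly like this (paraphrased from the article from memory, simplified to drop its frame-skip cap):

const int TICKS_PER_SECOND = 25;
const double SKIP_TICKS = 1000.0 / TICKS_PER_SECOND;

double next_game_tick = m_Timer.GetTime(); // absolute time in milliseconds

while (running)
{
    // Catch up: run updates until game time reaches wall-clock time.
    while (m_Timer.GetTime() > next_game_tick)
    {
        UpdateGame();
        next_game_tick += SKIP_TICKS;
    }

    double interpolation =
        (m_Timer.GetTime() + SKIP_TICKS - next_game_tick) / SKIP_TICKS;
    DrawGame(interpolation);
}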

timeGetTime() is not particularly accurate by default. You need to call timeBeginPeriod(1) at startup to improve its accuracy, although I believe this can seriously increase your CPU usage (not something I've ever worried about for a game), and call timeEndPeriod(1) at the end of your program.

Direct3D will set this as well if you request certain behaviour in the present parameters.

Quote:
MSDN
Full-screen mode supports similar usage as windowed mode by supporting D3DPRESENT_INTERVAL_IMMEDIATE regardless of the refresh rate or swap effect. D3DPRESENT_INTERVAL_DEFAULT uses the default system timer resolution whereas the D3DPRESENT_INTERVAL_ONE calls timeBeginPeriod to enhance system timer resolution. This improves the quality of vertical sync, but consumes slightly more processing time. Both parameters attempt to synchronize vertically.


The alternative is to base a timer on QueryPerformanceCounter(), but I believe there are still some issues with this misbehaving on certain multi-CPU systems without fix patches installed.
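For completeness, a minimal sketch of a timer built on QueryPerformanceCounter (my own illustration, not code from this thread):

#include <windows.h>

class cHiResTimer
{
public:
    cHiResTimer()
    {
        QueryPerformanceFrequency(&m_Frequency); // ticks per second, fixed at boot
        QueryPerformanceCounter(&m_OldTime);
    }

    // Elapsed time since the previous call, in milliseconds.
    double GetDeltaTime()
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        double deltaMs = (now.QuadPart - m_OldTime.QuadPart) * 1000.0
                       / m_Frequency.QuadPart;
        m_OldTime = now;
        return deltaMs;
    }

private:
    LARGE_INTEGER m_Frequency;
    LARGE_INTEGER m_OldTime;
};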

OK, so I've been messing around some more and have come to the conclusion that my original loop was just as good as the deWitters one; I get the same jitter, but the Gaffer article was easier to understand.

I've pared it down to make it simple to read and understand, and set the updates to 25 per second.


double accumulator = 0;
const double TICK_TIME = 1000.0 / 25.0; // 25 updates per second, in ms

while (m_IsRunning && m_Window.IsOpen())
{
    accumulator += m_Timer.GetDeltaTime();
    while (accumulator >= TICK_TIME)
    {
        UpdateGame();
        accumulator -= TICK_TIME;
    }
    DrawGame(accumulator / TICK_TIME); // interpolation fraction in [0,1)
}



The timer class looks like this; I've added the timeBeginPeriod(1) call and such to it:

#include <windows.h> // timeGetTime(), timeBeginPeriod(); link against winmm.lib

class cTimer
{
public:
    cTimer() {}
    ~cTimer() { timeEndPeriod(1); } // assumes StartTimer() was called first

    void StartTimer()
    {
        timeBeginPeriod(1); // raise the system timer resolution to 1 ms
        m_OldTime = timeGetTime();
    }

    // Elapsed time since the previous call, in milliseconds.
    double GetDeltaTime()
    {
        double NewTime = timeGetTime();
        m_DeltaTime = NewTime - m_OldTime;
        m_OldTime = NewTime;
        return m_DeltaTime;
    }

    double GetTime() { return timeGetTime(); }

private:
    double m_OldTime;
    double m_DeltaTime;
};


I'm wondering if the jitter I'm seeing comes from incorrect interpolation. Basically, I'm taking the value I get here and adjusting the values when I draw my ship, like so:


void cPlayerShip::Draw(cResourceManager* ResourceManager, double Interpolation)
{
    ResourceManager->DrawTexture(m_ShipImage,
                                 m_X + (m_Vx * Interpolation),
                                 m_Y + (m_Vy * Interpolation),
                                 m_R + (m_Vr * Interpolation));
}



Basically, I add (speed * interpolation) to each value when drawing. Is this correct? Or perhaps I'm just seeing something that isn't there, lol.

Quote:
Original post by schragnasher
Basically, I add (speed * interpolation) to each value when drawing. Is this correct?


Not sure exactly what you mean, but the basic idea is:

Store the previous position from the last update and the new position after the most recent update:


vec2 pos;   // current position after the most recent update
vec2 last;  // position from the update before that
vec2 speed; // velocity, in units per second

void update(float dt) // dt is the fixed logic timestep
{
    last = pos;        // remember where we were...
    pos += speed * dt; // ...before stepping to the new position
}


When rendering, use the following formula to interpolate between the last and current positions based on the interpolation value:


vec2 interpolate(vec2 pos, vec2 last, float blend)
{
    // blend == 0 gives the previous state, blend == 1 the current one.
    return (pos * blend) + (last * (1.0f - blend));
}

void render(float blend)
{
    vec2 p = interpolate(pos, last, blend);
    render_me_at(p);
}


Basically, the blend factor represents how much time is left in the accumulator that couldn't be consumed by another full timestep, so you render that fraction of the way between the previous and current states.

Quote:
Original post by schragnasher
Hmmm, the interpolation is definitely my problem. I'll give it another try. Thanks for the explanation.


You're welcome.

BTW, in my experience float is perfectly okay as a datatype for this sort of thing. I don't think you gain anything visually from using doubles, and there could be some performance tradeoffs.

Oddly enough, I seem to have fixed my main issue without changing my timing code. I had another problem with my window when running the program on Windows Vista; I fixed that by turning off Aero, which also seems to have been causing a lot of the stuttering. My Vista box was the machine I had seen a good deal of stutter on; the XP machines had seemed much smoother. So yeah, it seems it was Aero...

Using floating point variables to store the result of timeGetTime() or other similar functions can have some nasty side effects, especially if they are only floats and not doubles. Note, however, that under D3D9 all floating point maths is effectively done at float precision anyway (unless you specify D3DCREATE_FPU_PRESERVE).

The problem is that floats only have about 24 bits (6-7 decimal digits) of precision. Let's say the computer has been switched on for a day or so, and timeGetTime() returns 100,000,000. The call on the next frame returns 100,000,033 (running at 30 FPS). But near 100,000,000 a float can only represent multiples of 8, so the second value rounds to 100,000,032 and the subtraction gives you 32 rather than 33; with longer uptimes the quantization gets coarser and the delta can collapse to zero entirely!
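A quick demonstration of that rounding (my own example, not from the original post):

#include <cstdio>

int main()
{
    // 33 ms apart, but near 1e8 a float can only represent multiples of 8.
    float oldTime = 100000000.0f; // exactly representable
    float newTime = 100000033.0f; // rounds to 100000032
    std::printf("delta = %f\n", newTime - oldTime); // prints 32, not 33
    return 0;
}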

It gets even worse when the timer wraps around, which for timeGetTime() happens every 49.7 days or so. You get a time delta of around 4 billion milliseconds, which, unless you clamp it, means updating that one frame will take a rather long time with a fixed timestep...

To fix this, always store and subtract absolute time values using an integer; you can then convert the small time delta to floating point safely. I'd also recommend clamping the frame delta to a maximum value of, say, a tenth of a second. That way, pauses for things like loading won't cause weird issues, and the game will still be somewhat playable at really low frame rates (like you might get in debug builds, or when sitting on breakpoints).
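A minimal sketch of that advice; the class and names here are mine, not from the thread:

#include <windows.h>
#include <algorithm>

class cIntTimer
{
public:
    void StartTimer()
    {
        m_OldTime = timeGetTime();
    }

    // Delta in milliseconds, computed in integer space so precision never
    // degrades with uptime.
    double GetDeltaTime()
    {
        DWORD newTime = timeGetTime();
        DWORD delta   = newTime - m_OldTime; // unsigned wrap-around is well defined
        m_OldTime = newTime;

        // Clamp to 100 ms so loading pauses or breakpoints don't explode a step.
        return static_cast<double>(std::min<DWORD>(delta, 100));
    }

private:
    DWORD m_OldTime;
};

Because the subtraction happens on unsigned integers, the 49.7-day wrap-around also falls out correctly without any special casing.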

