Smooth timer-based animation


Hi, I have a demo app (which basically displays a screen full of rotating cubes in OpenGL) that I have converted from frame-based animation to time-based animation. However, I have a problem: whereas before (frame-based) I had smooth animation, now my animation is jerky. The code that does this goes like:

ticks = timeGetTime();
advanceTime = ticks - lastTicks;
lastTicks = ticks;

//then later
rotX += (50 * advanceTime)/1000;
rotY += (50 * advanceTime)/1000;
rotZ += (50 * advanceTime)/1000;

//later
//..render code

This suggests that I'm not getting a constant advanceTime each frame, and that my cubes are rotating by different amounts from frame to frame (hence the jerkiness). However, I have a consistent frame rate, so I don't see how this could be the case. How do I get smooth animation from timer-based animation without this happening?

Mark

Despite its units of milliseconds, timeGetTime() does not return the time to the nearest millisecond. For me, it has a resolution of about 17 ms.

What that means is that if my frame rate is 100 fps, the frame durations as reported by timeGetTime() might look something like this:

0
16
0
16
17
0
16
0
16
17
...

That might be the cause of your jerkiness. Try switching to QueryPerformanceCounter, which has a much better resolution.
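
Something like this minimal sketch would do it (my own illustration - the helper names aren't from any particular library):

#include <windows.h>

static LARGE_INTEGER g_frequency;   // counts per second, fixed for the session
static LARGE_INTEGER g_lastCounter; // counter value at the previous frame

void TimerInit(void)
{
    QueryPerformanceFrequency(&g_frequency);
    QueryPerformanceCounter(&g_lastCounter);
}

// Returns the time elapsed since the previous call, in seconds.
double TimerElapsedSeconds(void)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    double seconds = (double)(now.QuadPart - g_lastCounter.QuadPart)
                   / (double)g_frequency.QuadPart;
    g_lastCounter = now;
    return seconds;
}

Then the rotation update becomes rotY += 50.0 * TimerElapsedSeconds(); with no /1000 needed.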

It's most definitely true that timeGetTime() does not return a high-resolution time... but I don't think that is the problem in this case:

I think you're just moving your objects too far per millisecond:

rotX += (50 * advanceTime)/1000;
rotY += (50 * advanceTime)/1000;
rotZ += (50 * advanceTime)/1000;

consider that 50/1000 radians translates into something like 2.86 degrees per millisecond... that doesn't seem like a lot... but now consider what happens when you get the strange sequence of values that JohnBolton recorded:

if your code decides that 17 ms has passed, whether that is true or not, your objects rotate 48.7 degrees... that sounds like a lot to me for one frame... also the average frame in my applications is rendered in more than 40 milliseconds so your algorithm would have my objects rotating upwards of 114 degrees per frame...

I would try to bring down radians/degrees per millisecond and see if it gets any smoother

First of all, I forgot to mention that I'm calling

timeBeginPeriod(1);

successfully, which sets the minimum timer resolution to 1 ms. See the following interesting article on timers: http://www.geisswerks.com/ryan/FAQS/timing.html
I haven't confirmed this actually works, though (I've sketched a way to check it at the end of this post).

rotX += (50 * advanceTime)/1000;
rotY += (50 * advanceTime)/1000;
rotZ += (50 * advanceTime)/1000;

..anyway, the number in the above code is in degrees, not radians, and I divide by 1000 to get advanceTime in seconds, so effectively I'm saying "rotate 50 degrees per second", which at my frame rate (60 fps) is less than one degree per frame (50/60 ≈ 0.83) - that doesn't seem too high.

All the same, the jerkiness was noticeable even for values as low as 1.

I still might try QueryPerformanceCounter to see if that makes any difference.
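
For what it's worth, here's the quick check I have in mind for confirming the timer resolution (a sketch using timeGetDevCaps - I haven't actually run this in the demo itself):

#include <windows.h>
#include <mmsystem.h>   // link with winmm.lib
#include <stdio.h>

int main(void)
{
    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof(tc)) == TIMERR_NOERROR)
        printf("Timer resolution: min %u ms, max %u ms\n",
               tc.wPeriodMin, tc.wPeriodMax);

    // TIMERR_NOERROR here means the 1 ms request was accepted
    if (timeBeginPeriod(1) == TIMERR_NOERROR)
    {
        //..timing-sensitive code goes here
        timeEndPeriod(1); // always pair with timeBeginPeriod
    }
    return 0;
}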

Mark
After thinking about it a little more, I realized that even if you were rotating by a lot per millisecond, that wouldn't cause jerkiness... in theory it would just make it look like it's spinning fast...

I don't know what the problem is... I've been using timeGetTime() for frame rate independent motion for some time now... and even without using timeBeginPeriod, it seems to work very well on some very diverse machines...

I'd be interested in knowing if switching to QueryPerformanceCounter solves your problem...

maybe there is something else going on in your code that you wouldn't suspect to be the problem

You can always just draw all your frame times as a simple graph (something like the sketch below) and see if you have any strange spikes; if you do, look hard at the numbers to find out what's causing them.

Of course, if you don't do much per frame, then newer cards will easily draw a frame in less than one millisecond, leaving you pretty screwed with timers like that (though at 60 fps that shouldn't happen).
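
A minimal sketch of what I mean (the file name and frame count are arbitrary) - dump one frame time per line and graph the file in any spreadsheet:

#include <windows.h>
#include <mmsystem.h>   // link with winmm.lib
#include <stdio.h>

void LogFrameTimes(void)
{
    FILE *log = fopen("frametimes.csv", "w");
    DWORD lastTicks = timeGetTime();
    int frame;

    if (!log)
        return;

    for (frame = 0; frame < 1000; frame++)
    {
        //..render one frame here
        DWORD now = timeGetTime();
        fprintf(log, "%d,%lu\n", frame, (unsigned long)(now - lastTicks));
        lastTicks = now;
    }
    fclose(log);
}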

Quote:
Try switching to QueryPerformanceCounter, which has a much better resolution.

Perhaps my knowledge of this is very outdated (circa '96), but QueryPerformanceCounter may be a dangerous idea. Not all platforms have support for it. Of course, if you're not distributing anything, it's a moot point =)

I fixed the problem by changing the variables I was using to calculate advanceTime from floats to unsigned ints, since they were only ever in units of 1 ms (no lower), and only converting to seconds (/1000) at the last step, when calculating the amount to rotate.

But I need a sanity check. I only found this out by running the following timer test code, once with unsigned ints and again with floats (with the appropriate cast on timeGetTime()). The results below are a bit bewildering.


typedef struct timestruc
{
    unsigned int currentTicks; // raw timeGetTime() value for this sample
    unsigned int msElapsed;    // milliseconds since the previous sample
} TimerInfo;

TimerInfo timedata[1000];
unsigned int lastTicks = 0;
int i;

memset(timedata, 0, sizeof(timedata));
timeBeginPeriod(1);
for (i = 0; i < 1000; i++)
{
    timedata[i].currentTicks = timeGetTime();
    timedata[i].msElapsed = timedata[i].currentTicks - lastTicks;
    lastTicks = timedata[i].currentTicks;
    Sleep(10);
}
timeEndPeriod(1);
//..print out results of test here


With unsigned ints:
Current Ticks,Advance Time
Time: 760835184,ms: 760835184
Time: 760835194,ms: 10
Time: 760835204,ms: 10
Time: 760835214,ms: 10
Time: 760835224,ms: 10
Time: 760835234,ms: 10
Time: 760835244,ms: 10
Time: 760835254,ms: 10
Time: 760835264,ms: 10
Time: 760835274,ms: 10
Time: 760835284,ms: 10
Time: 760835294,ms: 10
Time: 760835304,ms: 10
Time: 760835314,ms: 10

With floats:
Current Ticks,Advance Time
Time: 762895552.00,ms: 0.00
Time: 762895552.00,ms: 0.00
Time: 762895616.00,ms: 64.00
Time: 762895616.00,ms: 0.00
Time: 762895616.00,ms: 0.00
Time: 762895616.00,ms: 0.00
Time: 762895616.00,ms: 0.00
Time: 762895616.00,ms: 0.00
Time: 762895616.00,ms: 0.00
Time: 762895680.00,ms: 64.00
Time: 762895680.00,ms: 0.00
Time: 762895680.00,ms: 0.00
Time: 762895680.00,ms: 0.00
Time: 762895680.00,ms: 0.00
Time: 762895680.00,ms: 0.00
Time: 762895744.00,ms: 64.00
Time: 762895744.00,ms: 0.00


While I've solved the problem, if anyone could shed some light on this discrepancy, I would be most grateful.

Mark

Can someone clarify the following point?

My frame rate can never go above the refresh rate of the display hardware. I'm running this on a laptop with a display refresh rate of only 60 Hz, so any application I run can never go above 60 fps, no matter how much processing and GPU power I throw at it. Is that right?

Mark


It sounds like your video updating code is waiting for your monitor's vertical sync before updating the screen. I don't know whether you're using SDL+OpenGL or something else, but you could try disabling double buffering if you want to update the screen faster than your monitor's refresh rate.
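
Alternatively - and this is just a sketch, assuming a plain Win32/OpenGL context rather than SDL - you can keep double buffering and turn off the wait for vertical sync via the WGL_EXT_swap_control extension, if the driver exposes it:

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void DisableVSync(void)
{
    // Only valid once an OpenGL rendering context is current
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0); // 0 = swap immediately, 1 = wait for vsync
}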

OK, I think I can explain your problem with floats vs. uints. You were casting the value returned from timeGetTime() to a float straight away, before doing anything else, right? Well, floats are only accurate to about seven significant figures, and the values you were casting were longer (8 or 9 significant figures), so the low digits were getting lost. For instance, the number 760835274 is represented exactly by a uint, but a float stores it as roughly 7.608352E8 - the last couple of digits are gone. In fact, for values between 2^29 and 2^30 (which is where your tick counts sit), adjacent representable floats are exactly 64 apart, which is precisely why your elapsed times came out as either 0 or 64.

Of course, it's not quite as clear-cut as the decimal picture suggests, since the mantissa is stored in binary, not decimal, but you get the idea.
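
A couple of lines of C show it directly (my own quick test, using one of the tick values from your output):

#include <stdio.h>

int main(void)
{
    unsigned int ticks = 760835274;  // value taken from the test output above
    float f = (float)ticks;          // rounds to the nearest representable float

    // Between 2^29 and 2^30 a float's 23-bit mantissa spaces
    // representable values 2^(29-23) = 64 apart
    printf("%u as float: %.0f\n", ticks, f); // prints 760835264
    return 0;
}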

Quote:
Original post by MarkyMark
Can someone clarify the following point?

My frame rate can never go above the refresh rate of the display hardware. I'm running this on a laptop with a display refresh rate of only 60 Hz, so any application I run can never go above 60 fps, no matter how much processing and GPU power I throw at it. Is that right?

Mark

If you wait for the vertical refresh, then you're capped at 60 Hz. If you disregard the vertical refresh, you will render faster than 60 Hz, but you will get tearing.

Your internal frame rate is not dependent on your monitor, only on your main loop - which can run at many times your refresh rate. You might be confusing your frame rate with the frames the monitor actually displays (assuming you don't skip any frames while rendering).

Hope that helped.

