Time-Based movement is jittery...


Recommended Posts

If your problem is indeed caused by the integrator, then I just recently finished reading this article (pdf) by David Baraff. It describes the Euler method, then the problems with it, and how to extend it naturally into something much better: the Runge-Kutta method.
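
For reference, here's a minimal sketch of the two integrators the article compares (hypothetical code, not from the article - it assumes a 1-D state x whose derivative is given by some function f):

```cpp
#include <functional>

// One explicit Euler step: cheap, but the error shrinks only linearly with dt.
double EulerStep(double x, double dt, const std::function<double(double)>& f)
{
    return x + dt * f(x);
}

// One classic fourth-order Runge-Kutta (RK4) step: four derivative
// evaluations per step, but the error shrinks as dt^4 instead of dt.
double Rk4Step(double x, double dt, const std::function<double(double)>& f)
{
    const double k1 = f(x);
    const double k2 = f(x + 0.5 * dt * k1);
    const double k3 = f(x + 0.5 * dt * k2);
    const double k4 = f(x + dt * k3);
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
}
```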

Is your jitteriness such that things don't want to sit still, sorta like they've got a bunch of energy and are just trying to hold it in, like little kids with too much sugar? If so, then your integrator could be a problem. But if the problem is that objects that should be moving smoothly (a cannon ball flying through the air, for example) look like they're moving kind of jerkily, stuttering along the way, then as haphazardlynamed suggested, you might have problems keeping your physics synced with your drawing.

I know that there should be more physics steps than rendering steps, but how can you tune the sizes of the time steps to prevent such a problem?

I've heard that people use interpolation to generate the state of an object for rendering when it falls between two physics steps, but if there are a lot more physics steps, why is that needed?

(I'm just adding my question to this topic.)

Quote:
Original post by voguemaster
I know that there should be more physics steps than rendering steps...


If you desync your physics from rendering, then there may be more render updates than physics updates, or the other way around. For example, your physics may be stable enough to run at 30Hz, yet your rendering may be running at well over 100FPS.

Edit: To add this rather obvious point: if your physics updates only 30 times a second even though you render 120FPS, it will _look_ like your program is only running at 30FPS because the display will only change every 4th render. Using interpolation you get silky smooth movement in this situation. Even in less extreme cases interpolation should help to get smoother looking movement (e.g. if you're in a situation where you alternate between 1 and 2 physics updates per render update).
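
Something like this (a sketch with hypothetical helper names - State, PhysicsStep, SecondsSinceLastFrame, Lerp and Render aren't from this thread):

```cpp
const double kPhysicsDt = 1.0 / 30.0;  // fixed 30Hz physics timestep
double accumulator = 0.0;
State previous = current;              // the two most recent physics states

while (running)
{
    accumulator += SecondsSinceLastFrame();  // real time since the last render

    // Consume the accumulated real time in fixed-size physics steps.
    while (accumulator >= kPhysicsDt)
    {
        previous = current;
        current = PhysicsStep(current, kPhysicsDt);
        accumulator -= kPhysicsDt;
    }

    // How far render time has advanced into the current physics step:
    // 0 = we just stepped, approaching 1 = we're about to step again.
    const double alpha = accumulator / kPhysicsDt;
    Render(Lerp(previous, current, alpha));
}
```

With this, a 120FPS render of 30Hz physics draws three interpolated states between each pair of physics updates instead of repeating the same state four times.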

[Edited by - MrRowl on June 29, 2006 8:14:48 AM]

What you say is true about the desynchronization of the physics and the rendering. However, as I understand it (from books such as Physics for Game Developers and others), the physics frame rate should ideally be higher than the rendering frame rate.

In THAT case, there should be no problem, as long as the number of physics steps per render step is a whole number.

Correct me if I'm wrong.

BTW, how does that lend itself to collision detection, where you have to change the step size? (For example, when trying to find the first contact time by the bisection method.)

Quote:
Original post by voguemaster
What you say is true about the desynchronization of the physics and the rendering. However, as I understand it (from books such as Physics for Game Developers and others), the physics frame rate should ideally be higher than the rendering frame rate.


It's not that either one is ideally higher than the other:

1. Your render frame rate should be as high as possible (possibly clamped to the screen refresh rate). In addition, it might be preferable for the frame rate not to fluctuate - i.e. a steady 30FPS might actually be nicer than quickly alternating between 30 and 60FPS (assuming a console game on a 60Hz display). However, a Quake 3 player might want 120FPS on a high-refresh-rate monitor.

2. In a _really_ ideal world your physics algorithms and implementation will be perfectly time-step independent - so that the net result of one timestep of 0.01 sec is _exactly_ the same as two timesteps, one 0.003 sec and the other 0.007 sec. If you had such an algorithm/implementation, then perhaps the best thing to do is just plug in the (variable) render-frame time each update and let it do its work.

However, you don't have such an algorithm/implementation :)

In practice both the algorithm and the implementation will depend on the timestep. You should work on reducing this dependency as much as possible - i.e. friction and damping should not just be implemented as vel = vel * 0.99 each update; instead you'd implement something with an (exponential) time-step dependency.
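
For example, damping that is exponential in the timestep behaves the same no matter how the step is sliced (a sketch; kDampingRate is a hypothetical tuning constant):

```cpp
#include <cmath>

// vel *= 0.99 damps more per second at 120Hz than at 30Hz. This version
// decays by exp(-rate * dt), so two 1/120 sec steps damp exactly as much
// as one 1/60 sec step.
void ApplyDamping(float& vel, float dt)
{
    const float kDampingRate = 0.6f;  // hypothetical constant, units of 1/sec
    vel *= std::exp(-kDampingRate * dt);
}
```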

So - if your algorithm/implementation is sensitive to the timestep, it makes sense to always pass in the same timestep each update. This makes reproducing problems _much_ easier, and it also means that once you've tuned your spring constants (etc.) you know they're not going to go unstable when the render frame rate changes (e.g. when you set a breakpoint and start debugging :).

For physics you want to take as big a timestep as you can (because generally one big step is cheaper than two little ones) - bearing in mind that simulation quality tends to decrease with bigger timesteps, and in the worst case your simulation may become unstable if the timestep is too big.

It actually takes quite a bit of work to make a physics engine work with big timesteps - typically a first implementation might work well at 120Hz or above, after some tuning it might work reasonably at 60Hz, and after much love, tweaking and testing it will work down at 30Hz, even for ragdolls etc. It also depends on the problem you're solving - if your physics engine only has to deal with 3 inelastic spheres at a time, it may run at 20Hz. However, the same engine may require much smaller timesteps to handle piles of ragdolls and elastic objects.

So - this is why the desynchronisation itself is the point, not which rate is higher. If you have a great physics engine, not many objects, and a fast renderer, there need only be one physics update every few display updates (so interpolation is especially important). If you've got a slow physics engine, or many objects that are fast to render, then it's the other way around.

(P4GD is a pretty poor book, btw)

Quote:

In THAT case, there should be no problem, as long as the number of physics steps per render step is a whole number.


If you can limit the render frame time to a certain set of values (e.g. multiples of the screen refresh period) that match the physics update timestep, that's true (well, unless, for example, physics is 30Hz and rendering is 60Hz, because you'd want interpolation then to stop objects "freezing" every other frame). However, quite often you can't.

One problem with relying on this is that console games will want to be able to run at a fixed 50 or 60Hz (or 25/30Hz if they can't keep up). If all the physics is synced to this, it means all the parameters need to be tuned to work reliably at two timesteps, which is a pain. Also, the methods used to save/replay games (e.g. for car-racing games) may have problems when you switch between the two frequencies. I think Burnout 2 wouldn't let a profile set up at 50Hz be used at (or changed to) 60Hz - maybe for this reason.

The worst situation is when you're looking at an object that is moving across your field of vision. If you don't use interpolation and you don't have a constant number of physics steps per render step then you see the movement "jitter" as the fixed-step physics does the best it can to match real-time.

Actually it's kind of complicated, because you need to decide exactly how far to integrate your physics before you actually call the physics integrate function - so that the physics engine finishes just _beyond_ the real time at which the upcoming frame gets displayed (letting you interpolate between this state and the previous one). But you don't know that time yet! It depends on how long (in real time) the physics update takes, plus all the render processing that must follow it...
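
One common approximation (a sketch, not necessarily what any given engine does) is to guess that the upcoming frame will take about as long as the previous one, and integrate until the physics time passes that guess:

```cpp
// Hypothetical variables: physicsTime is how far the simulation has been
// integrated; now and lastFrameDuration come from the real-time clock.
const double predictedDisplayTime = now + lastFrameDuration;
while (physicsTime < predictedDisplayTime)
{
    previous = current;
    current = PhysicsStep(current, kPhysicsDt);
    physicsTime += kPhysicsDt;
}
// physicsTime now sits just beyond the predicted display time, so the
// renderer can interpolate between 'previous' and 'current'.
```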

Quote:

BTW, how does that lend itself to collision detection, where you have to change the step size? (For example, when trying to find the first contact time by the bisection method.)


There is no (time) step size in collision detection itself. Either you're doing overlap tests (i.e. collision detection at either the beginning or the end of your physics step, depending on your algorithm), or you're doing a swept test, and any bisection etc. is just internal - you're sweeping over two positions (start and end), not over time... (Yes, it's sort of over time, but that fact isn't necessary for the collision detection!)

If you mean for collision response - i.e. using a physics engine that backtracks to the time of first collision (like in the Baraff notes etc... but unlike most physics engines in use...) - well, you could do a number of things. But generally your physics engine will work OK with timesteps that are _less_ than a critical value, so if you have a render update that includes a collision, things won't blow up if you do it in two small steps (i.e. integrate up to the collision, apply impulses, then integrate the remainder of the physics step).
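
In code the two-small-steps idea might look like this (hypothetical helpers - FindFirstContact, Integrate and ApplyImpulses aren't from this thread):

```cpp
// Advance one physics step of length dt, splitting it at the first contact.
void StepWithBacktrack(State& s, float dt)
{
    float tHit;  // time of first contact within [0, dt], if any
    if (FindFirstContact(s, dt, &tHit))  // e.g. found by bisection
    {
        Integrate(s, tHit);       // advance to the moment of contact
        ApplyImpulses(s);         // collision response at the contact
        Integrate(s, dt - tHit);  // finish the remainder of the step
    }
    else
    {
        Integrate(s, dt);         // no contact: one plain step
    }
}
```

Both sub-steps are smaller than dt, so as long as dt itself is below the critical value, neither will blow up.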


