deterministic physics replay

6 comments, last by raigan 14 years ago
Hello,

For my game I need to deterministically replay a physics simulation by reapplying the previously recorded scene inputs. Until now I was using NVIDIA PhysX, but I found out that this engine is not deterministic, so I need to change my physics engine. I'm oscillating between Newton and Bullet (both are supposed to be deterministic). My simulation uses lots of joints, so I'm interested in which one is more stable. Can you recommend which one to choose? Is there a better physics engine out there which would suit my needs? Thank you.
2+2=5, for big values of 2!
There are some engine internals, specific to each engine, that could make an engine nondeterministic. However, I believe that on a given computer (e.g., motherboard/processor configuration) you should be able to guarantee a deterministic result...on that computer configuration.

The way to do this is to run your simulation with fixed time steps, rather than a variable time step that adapts to the frame rate. The idea is that you do the physics update using, say, a time step of 0.015 seconds regardless of frame time. If the frame time is less than that, do not update physics! If the frame time is more than that, run enough fixed-time-step physics updates to catch physics up to the frame time...so the physics does adapt, just not continuously in sync with frame time.

There are a few tricks to implementing this well. For example, if your physics update takes too long, it will tend to increase the frame time, which will make you need to run more physics updates, which can get you into a case where your physics runs forever and the frame never completes. You have to deal with that via balancing, e.g., making sure the physics update always takes a small fraction of the total fixed time step to actually run through to completion. There are some other threads here that talk about the issue.
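Here's a rough sketch of that kind of fixed-time-step loop in C++ (stepPhysics and renderFrame are just stand-ins for whatever your engine provides, and the step size and cap are arbitrary):

```cpp
#include <chrono>

// Stubs standing in for your engine's physics step and renderer.
void stepPhysics(double dt) { /* advance rigid bodies, joints, ... by dt */ }
void renderFrame()          { /* draw the current physics state */ }

int main()
{
    using clock = std::chrono::steady_clock;

    const double fixedDt  = 0.015; // physics step in seconds, as above
    const int    maxSteps = 5;     // cap so one slow frame can't trigger a physics "spiral"

    double accumulator = 0.0;
    auto   previous    = clock::now();

    for (int frame = 0; frame < 1000; ++frame)   // stand-in for the real game loop
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run zero or more fixed-size steps to catch physics up with real time.
        int steps = 0;
        while (accumulator >= fixedDt && steps < maxSteps)
        {
            stepPhysics(fixedDt);   // always the same dt -> reproducible on replay
            accumulator -= fixedDt;
            ++steps;
        }

        renderFrame();
    }
}
```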

The nondeterminism issue can appear with ANY physics engine when it comes to different computers/CPUs. If you want to take inputs recorded on one computer, say an Intel chipset/CPU, and replay the physics using another computer with an AMD chipset/CPU, it's pretty much guaranteed you'll get different results. Intel CPUs, for example, do floating point math using 80-bit precision on chip, then downconvert to single or double precision before storing results to memory. But I believe AMD's CPUs do 64-bit computations on chip. And the different CPUs could do rounding/normalization differently (maybe...not sure what the IEEE standards say about this)...if they do, a time step value or force input value that is a representable floating point number on one CPU might be rounded to a different representable number on the other CPU...without you noticing. This kind of subtle CPU stuff can cause the results to be slightly different even if you try to play back with the same inputs.
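If you do want to pin down the x87 behaviour on Windows, one partial option (MSVC-specific, and only meaningful for 32-bit x86 builds where x87 code is generated; it does nothing for SSE math) is to set the floating point control word before simulating:

```cpp
// MSVC-specific: force the x87 FPU to 53-bit (double) precision and
// round-to-nearest before simulating. Note this does not affect SSE math.
#include <float.h>

void configureFpuForReplay()
{
    unsigned int current = 0;
    _controlfp_s(&current, _PC_53,   _MCW_PC); // precision control: 53-bit mantissa
    _controlfp_s(&current, _RC_NEAR, _MCW_RC); // rounding control: round to nearest
}
```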

So, give fixed time step physics updates a try on one computer, and see what happens!
Graham Rhodes Moderator, Math & Physics forum @ gamedev.net
Hi,

I am using fixed time steps, I stopped multithreading, and I recreate the scene each time I run the simulation, but I still encounter those issues. I think PhysX is using a random number generator internally and there is no way to reset it. I have checked the NVIDIA forums and, from what I can tell, the problem is known and there is no workaround for this issue in the current version.

I'm only interested in the simulation being deterministic on the same machine.
2+2=5, for big values of 2!
Here are some other things to watch out for when trying to make a replay deterministic:

#1 - Use fixed-point math, not floating point. Different computers can give (slightly) different results for the same floating-point math. This is actually a bigger problem than you'd think. I worked on the commercial version of the Flash game "Line Rider", and that game is ALL about deterministic physics. Using floating-point math, the playback varied from computer to computer, even with fixed frame rate processing.
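As a rough illustration only, a minimal 16.16 fixed-point type might look something like this (a real one needs overflow handling, division, sqrt, and so on):

```cpp
// Minimal 16.16 fixed-point type: integer math only, so results match exactly
// on every CPU. A real implementation needs overflow checks, division, sqrt, etc.
#include <cstdint>

struct Fixed
{
    int32_t raw; // stored value is (real value * 65536)

    static Fixed fromFloat(float f) { return { static_cast<int32_t>(f * 65536.0f) }; }
    float toFloat() const           { return raw / 65536.0f; }

    Fixed operator+(Fixed o) const  { return { raw + o.raw }; }
    Fixed operator-(Fixed o) const  { return { raw - o.raw }; }
    Fixed operator*(Fixed o) const
    {
        // Widen to 64 bits so the intermediate product can't overflow.
        return { static_cast<int32_t>((static_cast<int64_t>(raw) * o.raw) >> 16) };
    }
};
```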

#2 - Watch your use of rand(). If you use rand(), make sure you seed it (srand) with the same value at the beginning each time, so you get the same sequence of random numbers.
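For example (the same idea applies to whatever PRNG you actually use; the seed value here is arbitrary):

```cpp
#include <cstdlib>

const unsigned int kReplaySeed = 12345u; // store the seed alongside the recorded inputs

void beginReplay()
{
    srand(kReplaySeed); // same seed -> same rand() sequence on every playback
}
```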

#3 - If you have a saved file format, make sure you are saving and loading in the same order. For instance, one thing to watch out for would be having a linked list of collidable objects where, when saving, you loop from front to back writing each one out, but when you read it back, you put each new node at the FRONT of the list. This reverses the list each time you save/load, which can cause issues with your determinism because it can change the order in which you test objects for collisions.
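A tiny sketch of that pitfall, assuming a std::list of object IDs as the collidable-object list:

```cpp
// Save front-to-back, and APPEND on load. Using push_front() when loading
// would reverse the list on every save/load cycle and change the order in
// which objects get collision-tested.
#include <cstdint>
#include <fstream>
#include <list>

void saveObjects(const std::list<uint32_t>& objects, std::ofstream& out)
{
    for (uint32_t id : objects)
        out.write(reinterpret_cast<const char*>(&id), sizeof(id));
}

void loadObjects(std::list<uint32_t>& objects, std::ifstream& in)
{
    uint32_t id = 0;
    while (in.read(reinterpret_cast<char*>(&id), sizeof(id)))
        objects.push_back(id); // push_back keeps the original order
}
```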

Something else you might consider...

If you are only doing replays, you might think about just recording the relevant data every couple frames and when showing the playback data, just interpolate between the recorded data each frame.

This is what we did in Line Rider for Silverlight for the playback feature. I think we only stored information every 8 frames, and for the frames in between we used a LERP on the data, and it looked totally fine! (:
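A rough sketch of that kind of playback, assuming a snapshot every 8 frames and plain linear interpolation of positions (orientations would use a quaternion slerp instead):

```cpp
// Record a snapshot every 8 simulation frames, then reconstruct any frame by
// linearly interpolating between the two surrounding snapshots.
#include <vector>

struct Vec2 { float x, y; };

const int kSnapshotInterval = 8;     // one stored sample per 8 simulation frames
std::vector<Vec2> snapshots;         // recorded positions of one object

Vec2 lerp(Vec2 a, Vec2 b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

Vec2 positionAtFrame(int frame)
{
    int   i = frame / kSnapshotInterval;
    float t = static_cast<float>(frame % kSnapshotInterval) / kSnapshotInterval;
    if (i + 1 >= static_cast<int>(snapshots.size()))
        return snapshots.back();     // past the last sample: clamp to it
    return lerp(snapshots[i], snapshots[i + 1], t);
}
```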
I wouldn't trust a physics engine for determinism unless it explicitly says so. And it is something very hard to achieve, especially across multiple platforms (or even generations of CPUs).

If your physics is only for eye candy, then non-determinism would not be a problem.

- Say you move a tank across a battlefield in an RTS.
- The tank movement is actually very simple, deterministic terrain physics.
- The animation of the tank, the treads on the rough terrain, the recoil from the cannon, the springs, the shell ejection, and so on, are simulated using a physics engine, with a specific rigid body controller that would move the tank to its simulated position.

If the simulation diverges after replay / network traffic, it will not affect the 'gameplay'.

So basically, particle systems, character animation, eye candy, simulated destructions, player ragdolls, ... as long as they do not influence the gameplay, can be used non-deterministically.

Having a deterministic game engine is obviously very convenient for minimising replay storage and network traffic, but it's quite hard to pull off, and hard to debug. However, with the proper tools and debug helpers, it's not so bad, specifically because you can always re-run the replay from the start up to the point where it diverges from the actual simulation (detecting divergence is an art in itself).

Everything is better with Metal.

One thing I've done before is construct a replay animation. It seems like it would take too much memory, but I made it work using a little compression trick.

For the animation you will need to be able to evaluate the position and orientation of several objects at every moment in time during the replay. Thus you will need to record these positions and orientations. You may also need to store the animation frame a particular character is on, or other stuff like that.

You don't need to store a value for every frame, you just need to be able to determine a value at every frame.

Only store data on particular frames, and linearly interpolate, or use cubic splines to get the frames in-between. That's a simple idea, but how do you decide which frames are keyframes, and which frames are not during the initial run of the simulation?

Here is a simple trick. Always store the current state as a keyframe. After storing the current state as a keyframe, take a look at the last 3 keyframes. If you removed the second-to-last keyframe, how would the interpolated value compare to the real value? If the error is below a pre-defined threshold, then you get rid of the second-to-last keyframe.

It is possible to store your keyframes sequentially in an array, since when you are writing the array, you are only ever manipulating the last and second to last entries. This makes reading the keyframes much quicker.
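Here's a rough sketch of that culling step, for positions only and assuming keyframe times are strictly increasing:

```cpp
// Always append the new state as a keyframe, then drop the second-to-last
// keyframe if interpolating across it reproduces it within a tolerance.
// Only position is shown; rotation and animation frame would be checked too.
#include <cmath>
#include <vector>

struct Key { float time; float x, y; };

std::vector<Key> keys;
const float kTolerance = 0.01f; // max allowed error, in world units (tune to taste)

void addKeyframe(const Key& k)  // assumes keyframe times are strictly increasing
{
    keys.push_back(k);
    if (keys.size() < 3)
        return;

    const Key& a = keys[keys.size() - 3];
    const Key& b = keys[keys.size() - 2]; // candidate for removal
    const Key& c = keys[keys.size() - 1];

    // Interpolate a -> c at b's time and compare against the real value b.
    float t  = (b.time - a.time) / (c.time - a.time);
    float ix = a.x + (c.x - a.x) * t;
    float iy = a.y + (c.y - a.y) * t;

    if (std::fabs(ix - b.x) < kTolerance && std::fabs(iy - b.y) < kTolerance)
        keys.erase(keys.end() - 2); // b is redundant: playback without it looks the same
}
```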

I can envision ways in which this scheme might fail; however, when I did it, I was able to get an excellent replay with no visible error. My compression ratio was around 10%, but in some cases it was much lower.

I think I was replaying about 10-15 objects, some of them animated, some of them updated by physics. The keyframe structure stored the replay frame time, the position, the rotation quat, and the animation frame.
Unfortunately, the simulation replay is a core game mechanic and cannot be implemented with keyframes. So can we please get back to the original question?
I see no reason that this cannot be achieved on the same machine. From what I know, both Newton and Bullet should be capable of that.
2+2=5, for big values of 2!
Quote:Original post by Dizzy_exe
Unfortunately, the simulation replay is a core game mechanic and cannot be implemented with keyframes. So can we please get back to the original question?


Jon Blow gave a presentation at GDC 2010 explaining how he implemented playback/rewind in Braid; he used a keyframe approach rather than trying to get a deterministic simulation happening. This let him store an HOUR of 60fps keyframes in 40 MB of memory; that works out to only a couple hundred bytes per frame on average, so I wouldn't dismiss it out of hand.

http://www.gamedev.net/columns/events/gdc2010/article.asp?id=1808

https://www.cmpevents.com/GD10/a.asp?option=C&V=11&SessID=10427

