Replay feature in games

11 comments, last by zedz 16 years, 8 months ago
Hi, first of all, I'm writing an RTS game. I didn't realize until today that many games have a replay feature, and most of them (car-racing games, RTS, etc.) can save it in a relatively small file. So... I have no idea how to implement it, besides saving frame by frame and making a video, but that's not the implementation they use. Could someone point me in the right direction? There's not much info on Google, so I'm counting on your ideas/suggestions... Thanks

Dark Sylinc
These games rely on the deterministic behaviour of computers.

That is, if you feed your game the same inputs, from the same base state, it should replay exactly the same sequence of events.

If the input data is small (and it should be), and your game can be deterministic, then it's a good solution.

Now, you have to identify what constitutes an input. Obviously, the controller inputs (or more precisely, the abstracted player inputs, which usually translate into commands); also the frame time, if you do not use a fixed time step. And usually, the pseudo-random number generator seeds constitute inputs. If the game is networked, the game packets should also be logged.
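To make that concrete, here is a minimal, hypothetical sketch of such an input log (the `ReplayLog` class and the command format are my own invention, not anything from the posts above): record only the seed and the per-frame commands, and the whole replay serialises to a tiny file.

```python
import json

# Hypothetical sketch of a replay log: every non-deterministic input is
# recorded, so the simulation can be fed exactly the same data on playback.
class ReplayLog:
    def __init__(self, rng_seed):
        self.rng_seed = rng_seed   # the PRNG seed is itself an input
        self.entries = []          # (frame, command) pairs

    def record(self, frame, command):
        # 'command' is the abstracted player input,
        # e.g. ["move", unit_id, x, y]
        self.entries.append((frame, command))

    def save(self):
        return json.dumps({"seed": self.rng_seed, "entries": self.entries})

    @staticmethod
    def load(data):
        d = json.loads(data)
        log = ReplayLog(d["seed"])
        log.entries = [tuple(e) for e in d["entries"]]
        return log
```

Because only commands are stored, a long match is a few kilobytes, versus megabytes for frame-by-frame video.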

Then you have to make sure your game is deterministic, and this is where it's hard. It's also especially hard to debug. Most of the problems are physics related (order of collisions), and if you use multi-threading, you will enter a world of pain.

The second solution is to do an incremental save every frame: serialise your game state and store the delta from the previous stored state. And that is a LOT of work, believe me!
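The delta idea can be sketched in a few lines, assuming (hypothetically) that the game state serialises to a flat dictionary; a real engine would need a much richer diff format, which is where the "LOT of work" comes in.

```python
# Hypothetical sketch of the delta-save approach: serialise the state each
# frame and store only the entries that changed since the last snapshot.
def state_delta(prev, curr):
    """Return the entries of curr that differ from prev (including new keys)."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def apply_delta(state, delta):
    """Rebuild the next state by applying a stored delta to a known state."""
    new_state = dict(state)
    new_state.update(delta)
    return new_state
```

Playback then just replays the chain of deltas from the initial state, with no determinism requirement at all.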

Everything is better with Metal.

Thanks, you've cleared my doubts.

Quote:Original post by oliii
Then you have to make sure your game is deterministic, and this is where it's hard. It's also especially hard to debug. Most of the problems are physics related (order of collisions), and if you use multi-threading, you will enter a world of pain.


Yeah, I'd already thought that implementing multi-threading in the AI would be painful for exactly those reasons: synchronisation and the order of "what happened first" :)

Quote:Original post by oliii
Second solution is to use an incremental save every frame, and serialise your game state, and storing the delta from the previous stored state. And that is a LOT of work, believe me!


That's what I'd thought, and that's why I posted this thread.

Many thanks for the quick replAy
Dark Sylinc

Isn't it awfully hard to sync input if non-fixed time steps are used?

Say that in the recording run an input is handled by a frame at time t = 1000. In the playback that frame is delayed a little for some reason and runs at t = 1050. All objects will have traveled a little longer than in the recording session and the playback will not be synced. Is there a workaround for this?
I see 2 possible solutions:

1) The input is run at t=1050, but we use t=1000 as the parameter; not all cases can be solved with this.

2) Suppose the previous input ran at t=500 (and to make it short, in the replay t was also 500), so we should compute u = (1000-500) / (1050-500), then use "u" to fix sync problems. To make it easier, we should keep the replay processing thread (I'm calling it a thread, but it doesn't necessarily mean multithreading) one or two frames ahead of the render thread. That way we can fix objects that travelled a little longer, because we already know how long it took to compute (t1-t0=50) before it is shown on screen. Sounds easy... but surely not easy to implement.
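Option 2 can be written out as a tiny worked example (the function name `replay_scale` is hypothetical; the numbers are the ones from the post):

```python
# Hypothetical sketch of the time-scaling idea in option 2. An input was
# recorded at t=1000, the previous one at t=500; during playback the frame
# handling it arrives at t=1050 instead. The factor u rescales each
# object's integration step so it doesn't travel farther than it did in
# the recording.
def replay_scale(rec_prev, rec_input, play_prev, play_now):
    return (rec_input - rec_prev) / (play_now - play_prev)

u = replay_scale(rec_prev=500.0, rec_input=1000.0,
                 play_prev=500.0, play_now=1050.0)
# u = 500 / 550, a bit under 1: playback frames ran longer than the
# recording, so every step in that interval must be shrunk by u
```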

Dark Sylinc
The replay feature is the one thing that makes me unsure whether I should use non-fixed time steps or not. I had an idea for a compromise of sorts: using a fixed frame rate, say 20 fps. The game could do all calculations from a "virtual" clock (not the system clock directly). The virtual clock would run at about the same rate as the system clock, with the difference that it syncs with the frame rate each frame.

So for 20 fps, t(frame) = frame * 1.0/20
The clock would then be interpolated between these "checkpoints" for smooth movement. This might throw off predictions and result in jitter, but it's just an idea; I haven't tested it yet.

I may very well be missing the whole point of non-fixed steps but go easy on me, it's half past 4 in the morning here :)
Actually, such sync is a problem even with AVI files. They have a fixed fps too, usually around 23-30 fps. If the sound starts going faster, the video is fast-forwarded or frames are simply dropped. If the sound goes slower, the video stays still or the sound skips to the new time. On fast machines like today's it isn't a common problem, but it still happens and has to be dealt with.
So I think we should all implement it too, shouldn't we?

Dark Sylinc
Ah! I almost forgot: it is better to use timeSinceLastFrame instead of the fps rate, since the fps rate is accurate to one second, while timeSinceLastFrame is accurate to one frame. Trust me on this. I've tried syncing my game with the fps rate, and on startup (or when paused in the debugger, for example) things looked really weird, since it takes some time (1 to 2 seconds) for the fps rate to normalize after a huge slowdown. timeSinceLastFrame is much better and preferred.
Nono, don't misunderstand me, I didn't suggest using a per-second measurement (sort of) :)

Just run the physics 20 times per second, which leaves dt = 0.05 s. Every time the physics is run, just set the internal clock to the number of elapsed frames * 0.05. That way the internal clock is exactly the same at a given frame each time you run the program. Between physics loops, just render as many times as possible.
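That scheme can be sketched as a classic fixed-timestep loop (names like `run` and the 1-D position/velocity state are hypothetical stand-ins for a real game loop):

```python
# Hypothetical sketch of the fixed-step scheme described above: physics
# advances in exact 0.05 s steps on a "virtual" clock, and rendering
# interpolates between the last two physics states for smooth motion.
DT = 1.0 / 20  # fixed physics step (20 Hz)

def run(real_elapsed, pos=0.0, velocity=1.0):
    accumulator = real_elapsed  # real time not yet consumed by physics
    frames = 0
    prev_pos = pos
    while accumulator >= DT:
        prev_pos = pos
        pos += velocity * DT        # deterministic physics update
        frames += 1
        accumulator -= DT
    virtual_time = frames * DT      # identical on every run -> replayable
    alpha = accumulator / DT        # fraction of a step left over
    rendered = prev_pos + (pos - prev_pos) * alpha  # render interpolation
    return virtual_time, pos, rendered
```

Because the virtual clock is `frames * DT` rather than the wall clock, the simulation is bit-identical on playback, while the `alpha` interpolation keeps the rendering smooth at any real frame rate.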

I did use timeSinceLastFrame to calculate rendering positions before by doing
object.oldpos + object.velocity * timeSinceLastFrame

However, that required me to update oldpos each frame; using absolute time instead, the calculation would be:
object.oldpos + object.velocity * (currentTime - timeAtLastObjectUpdate)
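Side by side, the two formulas above look like this (hypothetical function names; both give the same position when `oldpos` is kept up to date each frame, but the absolute-time form needs no per-frame bookkeeping):

```python
# Hypothetical sketch comparing the two render-position formulas above.
def render_pos_relative(oldpos, velocity, time_since_last_frame):
    # Requires updating oldpos every rendered frame.
    return oldpos + velocity * time_since_last_frame

def render_pos_absolute(pos_at_update, velocity, current_time, time_at_update):
    # Extrapolates from the last physics update; no per-frame bookkeeping.
    return pos_at_update + velocity * (current_time - time_at_update)
```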
That's a problem with variable timestep, sure.

Either decouple your update/render, so you can interpolate the render to give a consistent apparent frame rate and still run the update with the recorded time step; or, on the other hand, just sod it and use the recorded timestep as if it were The Truth. In any case, if you replay the exact sequence of events, the frame time (recorded and real) should be exactly the same, right?

Well, it might not be, but it will simplify your life. Any tiny deviation from the inputs will lead to divergence, and that amplifies very quickly (after a couple of seconds, things will already start to look very wrong).

Everything is better with Metal.

This topic is closed to new replies.
