Game Timing, I/O and how to sort it out for a realtime game.

Hello all. Firstly, let me apologise if this information is covered somewhere else; I have searched various forums etc. and come up with various stuff, but I would like to open a general discussion. If there are any developers out there who could give me some tips on how the pros do it, I would be very grateful!

My background is EE and I am used to programming in embedded environments, along with a bit of Win32, OpenGL, and various other things. Having sat down and decided to write a simple realtime 3D game (possibly with the option to make it multiplayer over TCP/IP; WinXP/VC++6 is my dev platform, with SDL and OpenGL), I am a little stuck at the first hurdle: the overall 'game loop', timing considerations (for constant game timing independent of framerate) and input gathering (mouse and keyboard). So a simple game loop might be:

while(1){
DrawFrame()
GetMouseAndKeyboardState()
GetAccurateTime()
UpdateGameWorldBasedOnTime()
}

Now my initial problem was getting a decent high-resolution timer on Windows. (SDL only has the GetTicks() function, which is in milliseconds and I believe only accurate to about 10 ms anyway - no good!) After a little research it seems the QueryPerformanceCounter() functions in Win32 are the way to go. (Looks like I will have to encapsulate the timing code and find an alternative for each platform, Linux/Win/etc. - bummer.)

Ok, so I thought great, I can base my game world on real time and all is good. Until I realised that just sampling the keyboard once every game-loop iteration is really not such a good way to do things. For example, on a machine that can render 25 fps, the keyboard/mouse is sampled 25 times per second; on a faster machine the sampling rate could be e.g. 100 times per second. Obviously the guy with the faster machine has an advantage, and not just in framerate! So my question is: how do we deal with this? What is the best way?
On an embedded system I would probably have an interrupt function tied to key presses and buffer up the key-press information with the accurate time of the press, so the game update function then has all the data it needs. What do we do on big bad multitasking OSes? Some thoughts:

1. Set up some kind of callback thread to sample at, say, 100 Hz, and package up the key/mouse data with time data so that the UpdateGameWorld() function can deal with key presses 'correctly'. (Remembering that callback functions are not very accurately timed either, but hopefully ~100 Hz is adequate...)
2. Just ignore the problem?
3. Rely on the SDL message loop somehow; keypresses are buffered, but with no accurate timing information?

Am I missing something obvious here, though? (I am not a games programmer - yet!) Any thoughts and insight greatly appreciated!! Perhaps I could even write an article about it once solved...

James.
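The interrupt-style buffering described above can be approximated on a desktop OS by queuing timestamped events from the message loop and consuming them during the world update. A minimal sketch (all names here, InputEvent, QueueInput, ConsumeInput, are hypothetical illustrations, not part of SDL or Win32):

```cpp
#include <deque>

// Hypothetical timestamped input event, mimicking the embedded-style
// interrupt buffering described above: each press/release is recorded
// together with the time (in seconds) at which it occurred.
struct InputEvent {
    int    key;      // key code
    bool   pressed;  // true = press, false = release
    double time;     // timestamp from a high-resolution clock
};

static std::deque<InputEvent> g_inputQueue;

// Called from the OS message pump / SDL event loop as events arrive.
void QueueInput(int key, bool pressed, double time) {
    g_inputQueue.push_back(InputEvent{key, pressed, time});
}

// Called once per game-world update: consumes every event whose
// timestamp falls before the end of the current simulation step, so
// the update function can apply each press at the time it happened.
// Returns the number of events consumed.
int ConsumeInput(double stepEnd) {
    int consumed = 0;
    while (!g_inputQueue.empty() && g_inputQueue.front().time < stepEnd) {
        // A real game would apply g_inputQueue.front() to the world here.
        g_inputQueue.pop_front();
        ++consumed;
    }
    return consumed;
}
```

This way the sampling rate of the world update no longer matters: the update sees every press with the time it occurred, regardless of framerate.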

Yeah, locking input to your visual frame rate is bad.

Why does your game run at only 25fps? Do you limit it to that? Or are your draw methods just really slow ;-)

Have you tried using threads? A separate graphics thread (or separate input thread) might smooth things out.

Quote:
Original post by jra1980
For example, on a machine that can render 25 fps, the keyboard/mouse is sampled 25 times per second; on a faster machine the sampling rate could be e.g. 100 times per second. Obviously the guy with the faster machine has an advantage, and not just in framerate!

Well, what good would it be if you sample for input more often than you redraw the screen?
Usually, the input from the player comes as a response to what you see on the screen. You see an enemy, you fire your gun. If you don't see the enemy, because the framerate is low, well, you're not going to fire your gun, and so, does it matter whether you sample input every frame, or 4 times per frame?

If you can't see what's up ahead in a racing game, then does it matter whether the game registers that you try to turn in between frames?

Just a thought. (I'm aware that locking the game logic to your framerate is often a bad idea, but I can't see the big issue with input in particular)


Quote:

Well, what good would it be if you sample for input more often than you redraw the screen?
Usually, the input from the player comes as a response to what you see on the screen. You see an enemy, you fire your gun. If you don't see the enemy, because the framerate is low, well, you're not going to fire your gun, and so, does it matter whether you sample input every frame, or 4 times per frame?


My argument against this is: imagine that two players on two different machines (one fast, one slow) are playing against each other in some FPS (first-person 3D shooter). They are facing each other and both, *at the same instant in time*, fire at each other (press the fire button). Now the guy with the slower machine, with the lower keyboard sampling rate, is at a disadvantage: it is more likely that the player with the faster machine will have his keyboard sampled first, and hence fire first.
Also, the human brain is amazingly good at predicting where things are and what is going on, even if your framerate goes a bit choppy in a game (or is low). Therefore I argue that the game world should update correctly regardless: keyboard/mouse sampling should be fast enough on any machine, and the game-world update should take account of *when* keypresses occurred to a reasonable accuracy.

Quote:

Why does your game run at only 25fps? Do you limit it to that? Or are your draw methods just really slow ;-)


As mentioned, I work with embedded systems mostly; 25 fps on a phone is a reasonable framerate! ;-) It was really just for illustration...

As far as using separate threads goes, it is possible I guess, but it seems more complicated than necessary? I could be completely wrong here, of course. It would be good to know how e.g. Quake3/Doom3 or newer FPS-type games solve this problem. I guess I could download the Q3 code and have a look, but it would probably take a long time to figure out...! :)

James.

Guest Anonymous Poster
Quote:

GetMouseAndKeyboardState()...


Actually, you don't have to check the keyboard and mouse device states on every iteration of the loop; what you do instead is set up two event handlers (for the keyboard, for example). One event is triggered when any key is pressed and tells you which key was pressed; the other is triggered when any key is released and tells you which key was released.

You keep a boolean variable for every key you are interested in using in your game: in the key_pressed handler you set the variable to true, and in the key_released handler you set it to false.

So in your game loop all you check is whether those key variables are currently true or false, to know the current state of those keys. Note that the key_pressed event is only fired when the user first presses the key; if he keeps holding the key down, the event IS NOT CALLED AGAIN until he releases the key (generating the key_released event) and presses it again.

In other words, you just record when the key started being pressed and when it stopped being pressed; there is no need to keep checking the device in between. If you want to know how long the key was held, just store the time at which both events happen and compare.
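A minimal sketch of the press/release bookkeeping described above (KeyTracker and the key codes are made up for illustration, not an SDL or Win32 type):

```cpp
#include <map>

// Per-key state driven purely by press/release events, as described
// above: no polling of the device in between.
struct KeyTracker {
    std::map<int, bool>   down;       // current state of each key
    std::map<int, double> pressTime;  // when each key last went down

    void OnKeyPressed(int key, double now) {
        down[key] = true;
        pressTime[key] = now;   // remember when the hold started
    }

    // Returns how long the key was held: store the time at both
    // events and compare, exactly as suggested above.
    double OnKeyReleased(int key, double now) {
        down[key] = false;
        return now - pressTime[key];
    }

    bool IsDown(int key) const {
        std::map<int, bool>::const_iterator it = down.find(key);
        return it != down.end() && it->second;
    }
};
```

The game loop then only ever reads IsDown(); the event handlers do all the writing.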


good luck,


If you're developing under "WinXP/VC++6" anyway, you may want to use DirectInput. It can buffer events as they arrive, so you can process them later (once per game loop, in this case). Every event has a timestamp associated with it.

Quote:
From DX SDK
The dwTimeStamp member contains the system time, in milliseconds, at which the event took place. This is equivalent to the value that would have been returned by the Microsoft Win32 GetTickCount function, but at a higher resolution.


I guess "higher resolution" means down to a single millisecond.

Quote:
Original post by jra1980
My argument against this is: Imagine that 2 players on 2 different machines (one fast one slow) are playing against each other in some FPS (First Person 3D shooter). They both are facing each other and both *at the same instant in time* fire at each other (press the fire button).

How can they do that? The guy with the fast framerate will see his opponent first. The guy with the slower framerate won't even be able to see the enemy when he gets shot. So what difference does it make?
You're talking about a purely hypothetical situation, which is only really relevant if there's a psychic link between the players (or, of course, if they can see each other's monitors).

But in the normal case, a player won't be able to do anything useful until the screen updates anyway. You could sample input 200 times per frame, but what good would it do? The player wouldn't have anything new to act on until the screen updates, so he wouldn't actually provide any new useful input.

As long as the screen isn't updating, how can the player react? And if the player can't react, how does it matter whether you read input from the player?

Now, just to be clear, it's usually a very good idea to use some kind of fixed timestep, so the game logic is decoupled from framerate; I'm not arguing against that. It solves a hell of a lot of synchronization issues, it lets you run at any framerate without affecting the gameplay in any way, and so on. Great stuff...
And if you do this, the obvious way to handle input is to sample every timestep, so this does pretty much solve the "problem".

I'm just arguing that how frequently you sample input isn't the important issue. The player can't meaningfully provide input until the game gives him some output to act on. That is, until the game updates the screen.
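The fixed-timestep decoupling mentioned above is commonly implemented with an accumulator; a minimal sketch (the 10 ms step is an arbitrary choice, and the commented-out UpdateGameWorld is a hypothetical stand-in for the game logic):

```cpp
// Classic fixed-timestep loop: render as fast as the machine allows,
// but advance the game logic in constant DT-sized steps so the
// simulation is identical on fast and slow machines.
const double DT = 0.01;  // 10 ms per logic step = 100 updates/sec

struct Simulation {
    double accumulator;  // real time not yet consumed by logic steps
    long   steps;        // total logic steps taken so far

    Simulation() : accumulator(0.0), steps(0) {}

    // frameTime = real time (seconds) elapsed since the previous frame.
    void Advance(double frameTime) {
        accumulator += frameTime;
        while (accumulator >= DT) {
            // UpdateGameWorld(DT) would go here, always with the same DT.
            accumulator -= DT;
            ++steps;
        }
    }
};
```

A 35 ms frame yields three logic steps with 5 ms carried over to the next frame, so fast and slow machines step the world through identical states.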

"Loop" is a bad term. It's often a sort of finite state machine, and asynchronous on top of that.

Obviously a stable number of updates per second is important. 100 UPS should be enough. If you are doing flight simulation, 1000 UPS would be close to a real airplane. (IIRC you can't have that on current Windows, without errors, without hammering the system or starving other devices.)

I think the most comfortable approach is the most asynchronous type of input processing (with a possibly double/triple-buffered state array).

The number of graphics-card updates per second ranges from 14-24 (30) for a strategy game to 30-60 FPS for something with more movement on the screen. It should be somewhat tailored to the refresh rate of the monitor (or you could use triple buffering).

Sleeping is important, to save notebook battery and to prevent unnecessary CPU heat.

Note that you should know how your timer works. If QueryPerformanceCounter gave you ONLY elapsed cycles since the last call, the CPU could be slowed down and your application would be shredded to pieces. (Not to mention that multicore/multi-CPU computers can have independent counter values on each core/CPU and provide wildly varying results.)


Re Spoonbender:
Obviously, if someone pressed left, left, up, it should be different from just up. And processing all key events is a bit silly; you are not doing office application development.
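A sketch of encapsulating the timer as the original post suggests. On Win32 the same interface could wrap QueryPerformanceCounter/QueryPerformanceFrequency (keeping absolute counts and dividing differences by the frequency); shown here on top of std::chrono::steady_clock, which modern compilers provide, though VC++6 predates it:

```cpp
#include <chrono>

// Cross-platform wrapper around a monotonic high-resolution clock.
// steady_clock, unlike the wall clock, never jumps backwards, which
// avoids the "wildly varying results" problem mentioned above.
class GameTimer {
    std::chrono::steady_clock::time_point last;
public:
    GameTimer() : last(std::chrono::steady_clock::now()) {}

    // Seconds elapsed since the previous Tick() (or construction).
    double Tick() {
        std::chrono::steady_clock::time_point now =
            std::chrono::steady_clock::now();
        std::chrono::duration<double> dt = now - last;
        last = now;
        return dt.count();
    }
};
```

The game loop calls Tick() once per frame and feeds the result to the world update, keeping all platform-specific timing behind one small class.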

Quote:
Original post by Tac-Tics
Why does your game run at only 25fps? Do you limit it to that? Or are your draw methods just really slow ;-)


He didn't say his program runs at 25fps... He said,
Quote:
"For example on a machine that can render 25fps".


I would honestly lock it at 30-60 updates a second. I forget what Doom 3 does, but this levels the playing field for the most part. How many people need more than 30 key presses registered in a second anyway? I consider myself a fast typist (70+ wpm on plain text), but there's no way I can do that in a game :)

This way you also set everything up nicely for networking: no speed hacking etc., from my understanding.

Quote:
Original post by Mike2343
Quote:
Original post by Tac-Tics
Why does your game run at only 25fps? Do you limit it to that? Or are your draw methods just really slow ;-)


He didn't say his program runs at 25fps... He said,
Quote:
"For example on a machine that can render 25fps".


I would honestly lock it at 30-60 updates a second. I forget what Doom 3 does, but this levels the playing field for the most part. How many people need more than 30 key presses registered in a second anyway? I consider myself a fast typist (70+ wpm on plain text), but there's no way I can do that in a game :)

This way you also set everything up nicely for networking: no speed hacking etc., from my understanding.

You must not be an FPS player. Speed and accuracy are everything, and I'll be damned if my mouse and keyboard aren't updating as fast as they possibly can. Past a certain level, I'd like to be sure it's my skill keeping me from defeating my enemies, not my lack of a supercomputer to play my games on. Sometimes extrapolation isn't good enough [smile]. Sure, there's a level past which you can't (reasonably) go with regard to checking input, but at the very least guarantee a stable checking rate, without being locked to anything else.

You might want to check out GLFW instead of SDL; it uses QueryPerformanceCounter on platforms where it is available, or the best alternative elsewhere.

Usually (at least in GLFW), input is buffered through callback functions which are triggered every time any input is received. In that callback function you could check the current time and use that as your timestamp.

Ideally your game logic should be based on time, not frames. Everything from movement to collision to animation should be time-based. This gives those running faster computers no advantage in movement speed/firing rate/etc. If you ever played Half-Life/CS on a slower computer, you would have realised that the game logic updated in the same way on a computer running at 1 FPS as on a machine running at 100 FPS. If I moved my mouse 2 cm with a sensitivity of x, both would rotate the view equally.

If I moved forward holding the W key, both would move at the same speed, though the one on the slower computer would feel like he was teleporting forward due to choppy movement. This is because even though the main loop runs slower, it knows that between the two frames the W key was not released (no "W release" event was generated), so it takes the time elapsed between the two frames and multiplies it by the movement speed.

player.position += TimeElapsed * moveSpeed;

On the faster computer it is the same, but the elapsed time is far smaller, so the player moves in smaller increments, making the movement appear smooth.
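The Half-Life example above can be checked directly: simulating the same stretch of wall-clock time at different framerates should land the player in the same place. A sketch (SimulateMovement and the speed value are illustrative):

```cpp
// Time-based movement as described above: the player advances by
// TimeElapsed * moveSpeed each frame, so the total distance depends
// only on wall-clock time, not on how many frames were rendered.
double SimulateMovement(int frames, double totalSeconds, double moveSpeed) {
    double position = 0.0;
    double timeElapsed = totalSeconds / frames;   // per-frame delta
    for (int i = 0; i < frames; ++i)
        position += timeElapsed * moveSpeed;      // player.position += TimeElapsed * moveSpeed;
    return position;
}
```

Two frames of one second each and two hundred frames of 10 ms each cover the same distance; only the smoothness differs.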

I hope this has been of help.


Spoonbender:
Quote:

How can they do that? The guy with the fast framerate will see his opponent first. The guy with the slower framerate won't even be able to see the enemy when he gets shot. So what difference does it make?
You're talking about a purely hypothetical situation, which is only really relevant if there's a psychic link between the players (or, of course, if they can see each other's monitors).


Not true; perhaps both players see each other, but only realise they are on opposite teams at the exact same moment.
Also, my other argument is that the brain can do a good job of interpolating (predicting) between frames too, so your argument doesn't really hold.

I am not arguing about whether coupling sampling to the framerate is bad (it certainly is for multiplayer FPS-type games, though probably fine for other single-player games); I'm asking what the best/most elegant/usual way around the problem is.

Quote:
Ideally your game logic should be based on time, not frames. Everything from movement to collision to animation should be time-based. This gives those running faster computers no advantage in movement speed/firing rate/etc. If you ever played Half-Life/CS on a slower computer, you would have realised that the game logic updated in the same way on a computer running at 1 FPS as on a machine running at 100 FPS. If I moved my mouse 2 cm with a sensitivity of x, both would rotate the view equally.


I understand this, hence in my original post I have:

while(1){
DrawFrame()
GetMouseAndKeyboardState()
GetAccurateTime()
UpdateGameWorldBasedOnTime()
}

GetAccurateTime() and UpdateGameWorldBasedOnTime() are supposed to illustrate this.

Quote:
You might want to check out GLFW instead of SDL, it uses QueryPerformanceCounter on platforms where it is available or the best alternative.


Looks interesting, thanks.

Guest Anonymous Poster
This is a weird discussion, so I will point out some things.

First: locking input to the frame rate is just normal. You have to determine a time at which you capture the input state, and then work with that data in your UpdateGameWorldBasedOnTime() function. Let's say you update your input twice a frame, or in a different thread, or whatever: there will be no difference. Consider a player's object. It is updated and drawn because the "arrow up" button of player 1 has been pressed. Then you receive an input event that says player 1 released the button. You will not draw that object again in this frame just because of the state change. So effectively you use the key state as it was before drawing started, no matter how often it changes.

Secondly, every input device has a buffer, e.g. the keyboard. In Win32 you have the GetAsyncKeyState() function, which tells you which buttons have been pressed/released since the last call.

There is just one case where it is nice to have a separate input thread: mouse movement, so your cursor moves smoothly even when the game lags.

Another thing: the "who shoots faster" question.
1) Let's say you have a multiplayer game. All players have the same framerate when they play on one machine; on different machines, you use networking. In this case the server is faster in most cases, even at 25 fps.
2) Both players are clients on a dedicated server; the ping is the same. Both joined the server, at different times, and both press the button. Who shoots first? The one whose game loop is nearer to its SendAllTheInputStuffToTheServer() function!

The player with the lower frame rate is always slower, because his game renders the enemy at bigger intervals. And by the way, maybe his game loop is right before the UpdateGameWorldBasedOnTime() function when the enemy comes out of cover, while the other player has just skipped rendering that player's model because it was hidden.

Long story short: it's not necessary to receive input more than once a frame.

Some other thoughts:
QueryPerformanceCounter should be used when available.
Multitasking could be useful when client and server run on the same machine.
The server should update more often than 25 times per second; that's mostly no problem, since nothing is rendered in that loop.

I always update my input status arrays right before I do my object modifications. That is, I use DX for input and I have a set of variables that describe the status of my inputs. Once per frame I update those variables, and right after that update I do everything that depends on input, such as movement of the player and camera. Then I update everything else, such as enemies that only depend on where the player is, or animations of buildings and particle systems. Then I render everything, either before or after all of this in the code; there isn't any real difference, because the order is the same every frame.

Since I'm developing with Win32, I use the performance timer and base all of my movement on time, so I never lock my framerate. On this platform I don't think there is ever a need to. For cell phones, that is probably another story...
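The per-frame ordering described above might be sketched like this (all function names are hypothetical placeholders; here they just record the call order so the structure is explicit):

```cpp
#include <string>
#include <vector>

// Records the call order; the four functions are placeholder stubs
// standing in for real input, game-logic, and rendering code.
static std::vector<std::string> g_callOrder;

static void UpdateInputState()      { g_callOrder.push_back("input");  }
static void UpdatePlayerAndCamera() { g_callOrder.push_back("player"); }
static void UpdateWorld()           { g_callOrder.push_back("world");  }
static void Render()                { g_callOrder.push_back("render"); }

// One frame, in the order described above: refresh the input status
// variables first, then the things that depend on them, then the rest.
void RunFrame() {
    UpdateInputState();       // sample input once, right before its consumers
    UpdatePlayerAndCamera();  // movement that depends on fresh input
    UpdateWorld();            // enemies, animations, particle systems
    Render();                 // drawing; before or after makes no difference
}
```

The point is simply that input is sampled once per frame, immediately before the code that consumes it.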

Although it's a little tougher to do in VC++ 6 (compared to Visual C#, mind you), wouldn't you be better off having a timer that triggers a redraw callback every so many milliseconds, and letting the keyboard/mouse raise events? Then, I think, all you really need to worry about is that your redraw doesn't outlive your delay, and you throttle the rate of input with a timestamp flag or something. Or what's been suggested above; curse my late posting...


My problem with the game-loop technique is twofold. Historically, I've been frustrated with "missing" and "hyper" keystrokes. If you're not hitting the key at the precise instant GetKeyStatus(KEY_ID) or whatever is called, tough, it's gone. Or you get the situation where it reads fast enough to register 5000 keystrokes, setting off that beeping in Windows and ignoring input for the next 3 minutes. That's unless you want the game to stall completely on a blocking console prompt (cin, getchar, scanf, pick your poison), which is what an awful lot of game programming books in the early 90s used in their code.

More recently, the way I was working with AI (admittedly an approach developed using POL for UO) is difficult to shove into a loop. If you consider an AI more as a constantly running thread and less as a series of steps in a state machine, this becomes obvious. Your NPC's behaviour is no longer tied to specific implementation details of the engine, interacting only through a small subset of events/functions, and is therefore more natural to write and follow. I found it complicated to combine a loop with anything else and keep it manageable, so I'm currently giving the threaded/event model a try.


Of course, take all this with a grain of salt, as I've been doing most of my Windows projects in C# for the last 2 years. If the project I've been slowly building manages to work well (not a given; I still have lots ahead of me, including SDL.NET), you'll probably see the source code here...
