# Keyboard I/O Considerations in Game Development

General and Gameplay Programming

This article concerns the many games on the market that use the keyboard as the primary controller — for example, racing games played without a joystick or wheel.

Before going on, a few points need to be covered: this article assumes a basic knowledge of I/O terms such as polling, and a basic familiarity with operating systems.

Computer games live far longer than console games, and certainly longer than their programmers expect them to. There are dedicated groups of players who are not ready to give up their passion for Doom and other games of that generation.

Although this longevity is a good thing, the downside is that old games do not get updated to match current technology. New hardware and operating systems make computers faster by the month, shrinking the time taken by a single instruction or system call. This is the root cause of the problem I'll describe and address here.

Although game programmers have realized the importance of synchronizing graphics to a time scale, keyboard I/O is generally considered too trivial for that kind of attention. Most games that do not use the DirectX family of APIs poll the keyboard with a system call such as GetAsyncKeyState(). Many games are also multithreaded, so a typical game would have, among others, a rendering thread and an input-polling thread.
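To illustrate why a naive polling loop is speed-dependent, here is a minimal C sketch. The function `poll_key_stub` is a hypothetical stand-in for a platform call like GetAsyncKeyState(); the names are mine, not from any real game.

```c
/* Hypothetical stand-in for a platform call such as GetAsyncKeyState();
   here it simply reports the key as held down every time it is asked. */
static int poll_key_stub(void)
{
    return 1;
}

/* A naive per-frame input loop: each successful poll applies one unit of
   acceleration, so the result depends entirely on how many polls the
   machine manages to squeeze into one frame. */
int accelerate_for_one_frame(int polls_this_frame)
{
    int acceleration = 0;
    for (int i = 0; i < polls_this_frame; i++) {
        if (poll_key_stub())
            acceleration += 1;
    }
    return acceleration;
}
```

A machine that fits 10 polls into a frame applies half the acceleration of one that fits in 20 — exactly the disparity described below.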

The problem is that as computers get faster, the polling rate of the I/O thread rises with them. If a game in 2000 could obtain 10 keyboard inputs per frame, by 2004 it might obtain 20, given faster processors and advances in operating systems. Now if that polling determines the acceleration of a car, the player's car simply accelerates faster on a faster processor, whereas the AI opponents, whose behavior is bounded by fixed values, respond at the same rate they did when the game was designed.

The real problem shows up when a game originally written in, say, 2000 is run in multiplayer mode, with one player on a 500MHz processor and the other on a 1GHz machine. Since weapon fire is controlled by each player's own machine, in a deathmatch the 1GHz machine has faster polling and that player can dish out more bullets per second, giving her an unfair advantage. Some may doubt this problem exists, but the source of this article was a Quake III deathmatch in which players on faster machines always seemed to win more easily. In fact, when shooting head to head, the one on the slower machine will always lose. Try it!

To solve this problem, I propose a simple conceptual solution that can be implemented in a platform-independent way.

Let us define two threads Render and Poll and one semaphore sem.

The pseudo code for Render would look like:

```
void Render(...)
{
    // Called whenever the window receives a repaint message
    Signal(sem);
    ... do rendering ...
}
```


The pseudo code for Poll would look like:

```
void Poll(...)
{
    integer counter = 0;
    while (1)
    {
        GetKeyboardInput();
        EnterCriticalSection();
        ... update the shared state (shared by Render & Poll) ...
        LeaveCriticalSection();
        counter = counter + 1;
        if counter > MAX_INPUTS_PER_FRAME
        then
            counter = 0;
            Wait(sem);
        endif
    }
}
```


For those not familiar with semaphores, the functions Wait and Signal do the following:

Wait(sem) – blocks the calling thread until another thread calls Signal(sem).
Signal(sem) – wakes a thread blocked in Wait(sem).

What does this code do? The polling thread runs freely until it has accepted the maximum number of inputs allowed per frame. At that point the counter is reset and Wait(sem) blocks the thread, so further keyboard input is ignored. The Poll thread is unblocked only when the next call to Render runs Signal(sem). Repeating this cycle caps the number of inputs processed per rendered frame, regardless of processor speed.
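The counter-and-semaphore logic above can be exercised without real threads. In the C sketch below (the struct and function names are my own), the `waiting` flag stands in for the Poll thread being blocked in Wait(sem), and `throttle_on_render` plays the part of the Signal(sem) call at the top of Render.

```c
enum { MAX_INPUTS_PER_FRAME = 4 };  /* illustrative value */

typedef struct {
    int counter;  /* inputs accepted since the last frame */
    int waiting;  /* stands in for the Poll thread blocking on Wait(sem) */
} InputThrottle;

/* Mirrors one iteration of the Poll loop: returns 1 if the input was
   accepted, 0 if the thread would currently be blocked in Wait(sem). */
int throttle_on_input(InputThrottle *t)
{
    if (t->waiting)
        return 0;                /* blocked in Wait(sem): input ignored */
    t->counter++;
    if (t->counter > MAX_INPUTS_PER_FRAME) {
        t->counter = 0;
        t->waiting = 1;          /* Wait(sem) */
    }
    return 1;
}

/* Mirrors Signal(sem) at the top of Render: wakes the Poll loop. */
void throttle_on_render(InputThrottle *t)
{
    t->waiting = 0;
}
```

One detail worth noting: because the counter is checked after the input has already been processed, MAX_INPUTS_PER_FRAME + 1 inputs actually get through each frame, faithfully matching the pseudo code above; testing with >= instead of > would make the cap exact.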

This approach is conservative in that it makes the faster machine under-perform. A more radical approach would be to scale the input count of the slower machine up to that of the faster one. This can be implemented in the following way:

1. Obtain the maximum inputs per frame of the faster user.
2. Obtain the maximum inputs per frame of the slower user.
3. Set a threshold level for the slower user after which scaling begins. This is done to avoid single taps being scaled unnecessarily.
4. Once the threshold is crossed, scale the inputs by a linear or exponential function of the distance from the maximum.

In concept, this scales the slower machine up to the level of the faster one.
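A linear version of steps 3 and 4 might look like the following sketch. All names, and the choice of a linear rather than exponential map, are my own; the article leaves the exact function open. Inputs at or below the threshold pass through untouched, while the range above it up to the slower machine's maximum is stretched to reach the faster machine's.

```c
/* Hypothetical linear scaling of the slower player's per-frame input
   count. At or below `threshold` (step 3), single taps pass through
   unscaled; above it, the range (threshold, slow_max] is mapped
   linearly onto (threshold, fast_max] (step 4). */
int scale_inputs(int inputs, int threshold, int slow_max, int fast_max)
{
    if (inputs <= threshold)
        return inputs;
    return threshold + (inputs - threshold) * (fast_max - threshold)
                     / (slow_max - threshold);
}
```

With a threshold of 2, a slower-machine maximum of 10 and a faster-machine maximum of 20, a full-frame burst of 10 inputs scales to 20, while a single tap stays a single tap.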

Although professional game developers may have fixed this issue long ago, it still appears in some of the games I come across. Even where it no longer occurs, it remains an important consideration in the game development cycle, one that budding game programmers should sit up and take notice of. Leveling the playing field matters: the days when games sold on pretty graphics alone are gone, and gameplay is now the main issue.

Authored By Rakesh Iyer
Second Year Computer Engineering Student from India.
