Multithreading for Games

Hi everyone!

Is multithreading necessary for real-time 3D games? I've always just called update and paint in a loop, without threading.

So my question: do I need to (or should I) use multithreading, or is it better without it?

Thanks in advance!
At the very minimum, multithreading is essential for fast resource loading, as otherwise you're stuck with linear processing, which has no chance of utilizing the already slow disk at full capacity. Once you get to the territory where your scenes require massive per-task processing (physics, multiple occlusion queries, complex sound processing, etc), multithreading becomes the only viable option for maintaining a solid framerate at high quality. Generally speaking, the number of threads you have should be equal to the number of processors that are available, each one should be able to process any job (with certain limitations, eg the draw pipeline being bound to the specific thread that owns the context), and your architecture should be conceptually parallel (state/job-based tasks where each task can be processed by any thread). This isn't that difficult to do, actually - all you need is a boolean ("job ready yet?") response system (no mutexes, critical sections etc.!) backed by a main loop for job distribution.
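To make that concrete, here's a bare-bones sketch of the flag-based idea in C++. The task contents and the toy "work" are made up; I'm using std::atomic<bool> for the completion flag so the write is safely visible to the polling thread without a mutex:

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct Task {
    int               input  = 0;
    int               result = 0;        // written only by the worker that owns this task
    std::atomic<bool> done{false};       // "job ready yet?" flag polled by the main loop
};

void worker(Task* task) {
    task->result = task->input * 2;                        // pretend this is real work
    task->done.store(true, std::memory_order_release);     // publish completion
}

int main() {
    std::vector<Task> tasks(4);
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < tasks.size(); ++i) {
        tasks[i].input = static_cast<int>(i);
        workers.emplace_back(worker, &tasks[i]);
    }

    // Main loop: poll the flags; a finished task would be handed
    // to the next processing phase here.
    std::size_t finished = 0;
    while (finished < tasks.size()) {
        finished = 0;
        for (const Task& t : tasks)
            if (t.done.load(std::memory_order_acquire))
                ++finished;
        std::this_thread::yield();
    }

    for (std::thread& w : workers) w.join();
    std::printf("all %zu tasks complete\n", tasks.size());
}

A real job system would pull tasks from a shared queue instead of spawning a thread per task, but the ownership rule and the flag handshake are the same idea.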

As I noted above, even for simpler games, if you do any respectable amount of disk access you should, at the very minimum, have a disk access thread that processes all disk-read jobs and queues each completed read for another thread to process in parallel while it moves on to the next disk read.

A flag-based response system is based on the concept that the one thread AND ONLY THAT THREAD responsible for processing a task within the scope of a particular job can write into the memory allocated for that task (parallel reads can still occur). Once the update completes, the thread sets the completion flag for the task. In the meantime, the main thread keeps polling all pending tasks for completion (by reading a boolean flag) and passes each task on to the next processing phase once the previous one completes. Consider three linearly dependent jobs that are distinct but cannot be executed out of order:

TASK
job1: read file from disk (thread1)
job2: create alpha channel or whatever (thread2)
job3: copy texture to GPU (draw thread)

All of these steps can be parallelized across multiple load operations. The real benefit kicks in once you realize that job1 can do much faster reads, since all reads are more likely to be sequential (other applications are less likely to interrupt the disk heads) and you're making use of HDD access (the slowest link in the sequence) 90+% of the time, effectively winning back the time spent on job2 and job3.
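As a rough illustration of how the main loop might drive those three stages: the stage names mirror the list above, while the actual disk read, alpha-channel and GPU-upload calls are placeholders, since those depend on your engine:

#include <atomic>
#include <cstdio>
#include <vector>

enum class Stage { ReadDisk, BuildAlpha, UploadToGpu, Finished };

struct LoadItem {
    Stage             stage = Stage::ReadDisk;
    std::atomic<bool> done{false};   // set by whichever thread ran the current stage
};

// Called by the main loop each time around: when a stage reports completion,
// clear the flag, bump the stage, and (in a real engine) queue the next stage
// on the appropriate thread.
void advance(std::vector<LoadItem>& items) {
    for (LoadItem& item : items) {
        if (item.stage == Stage::Finished) continue;
        if (!item.done.load(std::memory_order_acquire)) continue;   // still in flight

        item.done.store(false, std::memory_order_relaxed);
        switch (item.stage) {
            case Stage::ReadDisk:    item.stage = Stage::BuildAlpha;  break;  // hand to a worker thread
            case Stage::BuildAlpha:  item.stage = Stage::UploadToGpu; break;  // hand to the draw thread
            case Stage::UploadToGpu: item.stage = Stage::Finished;
                                     std::printf("texture loaded\n"); break;
            default: break;
        }
    }
}

int main() {
    std::vector<LoadItem> items(1);
    items[0].done.store(true);   // pretend the disk read just finished
    advance(items);              // the item moves on to the alpha-channel stage
}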

Any further parallelization really depends on your game and the complexity you need. Overall I think a task-based system is a simple and elegant solution that will scale automatically the more cores/threads the system has while still having the ability to fall back to a single core/thread if only one is available.

So my question: do I need to (or should I) use multithreading, or is it better without it?

Have you run into a performance wall with single-core, where you simply can't make your game run at interactive frame-rates?

If so, then you need to multi-thread your game. If not, then it might be fun as an academic exercise, but it's far from essential.

Correct multi-threading is hard, and it's easy to unintentionally introduce difficult-to-track-down bugs, so don't do it unless you feel there is a distinct benefit to be had.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


As I noted above, even for simpler games, if you do any respectable amount of disk access you should, at the very minimum, have a disk access thread that processes all disk-read jobs and queues each completed read for another thread to process in parallel while it moves on to the next disk read.
I'd say "for simpler games" there's no need for multithreading at all. Unless perhaps on Atoms, but even those are going to run "simpler games" at interactive rates, albeit with some sweat.

Have you run into a performance wall with single-core, where you simply can't make your game run at interactive frame-rates?

If so, then you need to multi-thread your game.
Seconded. Given today's processors, I find it hard to believe a beginner can saturate them. Of course, one can always do things wrong in the first place.

Previously "Krohm"

Generally speaking, the number of threads you have should be equal to the number of processors that are available

Actually, you may want more threads than cores. Consider a single-core CPU: it can still benefit from having multiple threads, especially if each thread is likely to block on something. If you have a single core and 4 threads:

- game logic & render thread : constantly runs, 'interrupted' by occasional work in the other threads
- sound thread : will periodically do a 'spurt' of work processing audio every so often. most of the time it's waiting
- network thread : will spend most of its time blocked, waiting on a socket
- file thread: will spend most of its time either waiting for the game to ask to load something, or blocked on a slow disk operation

On a modern OS, when a thread is blocked on something, such as waiting for the disk to read something, the kernel will set that thread aside and run another thread. The above design already benefits a little bit from having a multi-core processor, but isn't ideal. Since most of the work is still happening in the game logic/render thread, it could be further broken out like this:

-Render thread (just 1. OGL doesn't like threads, DX might not either)
-file thread (just 1; don't thrash the disk)
-network thread (1 per socket, but the client will probably only have 1 socket open anyway)
-sound thread (just 1)
-game logic thread (1 or many)

There is a bit of a difference now. Before, the game logic & render thread alternated between advancing the game state and rendering. You could just lock the game state and take turns processing the next game state and rendering. At first glance, that doesn't seem any better than having them share a thread, except that with multiple cores you could alternate between having 4 cores process 4 game state threads and then 1 core render, then 4 cores process game state, then 1 core render, and so on. This starts to reap the benefits of having multiple cores; but now what are the other cores doing while you are rendering? Probably not much. This is the point where you have to do one of two things: 1) break the game state into pieces, so you can start rendering one part of the game state while you begin to compute the next part, or 2) double-buffer the game state (or at least the parts that affect rendering), so that you can have 3 cores processing the NEXT game state while the 4th core renders the PREVIOUS game state.
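If it helps, here's a stripped-down sketch of option 2, the double-buffered game state. GameState and its contents are placeholders, and the "render thread" is just the main thread here: the update thread writes the NEXT state into one buffer while the renderer reads the PREVIOUS state from the other, and the buffers are swapped once both sides finish the frame:

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct GameState {
    std::vector<float> objectPositions;   // whatever the renderer needs to see
};

struct DoubleBufferedState {
    GameState buffers[2];
    int       writeIndex = 0;             // the update side fills this buffer

    GameState&       writeBuffer()      { return buffers[writeIndex]; }
    const GameState& readBuffer() const { return buffers[1 - writeIndex]; }

    // Called once update and render have both finished the frame, so no
    // other thread is touching either buffer at this point.
    void swap() { writeIndex = 1 - writeIndex; }
};

void updateGame(GameState& next)       { next.objectPositions.assign(3, 1.0f); }   // simulate the NEXT frame
void renderGame(const GameState& prev) { std::printf("rendering %zu objects\n", prev.objectPositions.size()); }   // draw the PREVIOUS frame

int main() {
    DoubleBufferedState state;
    for (int frame = 0; frame < 3; ++frame) {
        std::thread update(updateGame, std::ref(state.writeBuffer()));
        renderGame(state.readBuffer());   // reads the old buffer while the update writes the new one
        update.join();                    // wait for the update before swapping
        state.swap();
    }
}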

But, given the scope of most hobby projects, just breaking out super-slow things (like disk access) and the most bottlenecked calculations (physics etc) into separate threads will give you the most benefit with the least effort (and bugs), which is more important than trying to achieve 100% CPU temperature.

@OP - if you're just a hobbyist learning how to make 3D games, I would not recommend trying to make a multi-threaded one first. It adds a lot more things to learn, and probably isn't necessary unless you really, really need 100% CPU utilisation from a quad-core...

Actually you may want more threads than cores. Consider a single-core cpu. It can still benefit from having multiple threads, especially if each thread is likely to block on something. If you have a single core, and 4 threads:
- game logic & render thread : constantly runs, 'interrupted' by occasional work in the other threads
- sound thread : will periodically do a 'spurt' of work processing audio every so often. most of the time it's waiting
- network thread : will spend most of its time blocked, waiting on a socket
- file thread: will spend most of its time either waiting for the game to ask to load something, or blocked on a slow disk operation
Generally speaking, your threads should not be likely to block on things. You should use non-blocking file-system and networking APIs, which removes the need for them to have their own threads (at a low level, these things are implemented asynchronously, using efficient methods such as DMA. The blocking APIs are a wrapper around operations that are natively asynchronous, so if you then wrap the blocking APIs with threads to make them asynchronous again, you've added two redundant layers of inefficiency that cancel each other out for zero benefit).
Also you usually don't create your sound thread yourself; it's usually created internally by your sound library and controlled via a non-blocking API.
So this just leaves the logic/rendering threads for you to create, which should simply be your "one-thread-per-core general job/task processing" threads, which can run any logic/rendering/etc job (with rendering-submission jobs restricted to a specific thread if required).
On a modern OS, when a thread is blocked on something, such as waiting for the disk to read something, the kernel will set that thread aside and run another thread.
On Windows, it also (by default) won't wake that blocked thread up for at least 15ms, even if it only needs to block for 0.5ms, which isn't very good for a real-time system; hence you should avoid blocking.
At the very minimum, multithreading is essential for fast resource loading...you have any respectable amount of disk access, you, at the very minimum, should have a disk access thread that processes all disk-read jobs and queues completed read jobs for another thread
As above, just use the OS's native asynchronous file APIs. You only need a background loading thread if you've got long-running post-load CPU work to do, such as LZMA decompression or XML parsing.
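For example, on Windows one way to do this is the overlapped file API. A rough sketch (error handling trimmed, the file name is made up, and completion is polled rather than waited on):

#include <windows.h>
#include <cstdio>

int main() {
    HANDLE file = CreateFileA("textures/grass.dds", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    static char buffer[64 * 1024];
    OVERLAPPED overlapped = {};
    overlapped.hEvent = CreateEventA(nullptr, TRUE, FALSE, nullptr);

    // Kick off the read; it returns immediately and completes in the background.
    if (!ReadFile(file, buffer, sizeof(buffer), nullptr, &overlapped) &&
        GetLastError() != ERROR_IO_PENDING) {
        return 1;
    }

    // Poll for completion (FALSE = don't block) and keep doing frame work meanwhile.
    DWORD bytesRead = 0;
    while (!GetOverlappedResult(file, &overlapped, &bytesRead, FALSE)) {
        if (GetLastError() != ERROR_IO_INCOMPLETE) return 1;   // a real error
        // ...do other work for this frame...
    }
    std::printf("read %lu bytes\n", static_cast<unsigned long>(bytesRead));

    CloseHandle(overlapped.hEvent);
    CloseHandle(file);
}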
On Windows, it also (by default) won't wake that blocked thread up for at least 15ms, even if it only needs to block for 0.5ms, which isn't very good for a real-time system; hence you should avoid blocking.


This is really more for my sake than anything, but I was under the impression that this tidbit only held for explicit calls to Sleep(), and things like WaitForSingleObject() and friends can potentially be more responsive?
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.
Besides everything that has been said, there are two things I want to add.
1. Even with a single-core CPU you are already using a multi-core system, namely CPU + GPU. You can exploit this by filling up the GPU's command queue and then doing other tasks on the CPU. As long as you don't exhaust this buffer (the GPU keeps running even while the CPU idles), you don't need any further multicore support (besides resource processing/loading); there's a rough sketch of what I mean at the end of this post.

2. When you want to add multicore support, you should design for it from the start. It is incredibly hard to add multicore support later; some design decisions can easily steal the show (after writing thousands of lines of game logic code in a scripting language like Lua, it is somewhat demotivating to see that Lua doesn't support multiple cores within a single VM).
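Regarding point 1, the frame ordering I mean looks roughly like this; all the function names are placeholders for whatever your engine/API actually provides:

#include <cstdio>

void submitDrawCalls() { std::puts("draw calls queued"); }   // fill the GPU's command queue as early as possible
void updateGameLogic() { /* CPU-side work overlaps with the GPU draining its queue */ }
void presentFrame()    { std::puts("present"); }             // swap buffers; this is where you'd end up waiting if the GPU falls behind

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        submitDrawCalls();
        updateGameLogic();
        presentFrame();
    }
}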
This is really more for my sake than anything, but I was under the impression that this tidbit only held for explicit calls to Sleep(), and things like WaitForSingleObject() and friends can potentially be more responsive?
It doesn't matter how you transition a thread into a sleep state - once in that state, the kernel's scheduler won't come around and poke it back into a ready state until the next scheduling cycle, which defaults to 15ms on Windows IIRC.
I actually should've said "up to 15ms", because if you go to sleep right before the next tick, you'll get woken up quickly (i.e. somewhere between 0ms and 15ms). However, if there are too many threads, a ready thread might not be chosen to execute in any given 15ms tick; other threads can cause starvation. After a thread has starved for 3-5 seconds, the kernel will give it a priority boost to ensure it gets to run for at least one tick. So any time you block (with default settings), you're sleeping for a best case of 0-15ms and a worst case of ~4 seconds.

On a side note, Sleep(0) is allowed to immediately return instead of sleeping for a tick (which is usually bad), if thread priorities allow for it.
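If you want to see that for yourself, a quick test like this (std::chrono timing; the numbers depend on your machine and the current timer resolution) shows how long a 1ms sleep actually takes:

#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    for (int i = 0; i < 5; ++i) {
        auto before = clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));   // ask for 1ms
        std::chrono::duration<double, std::milli> elapsed = clock::now() - before;
        std::printf("asked for 1ms, slept %.2fms\n", elapsed.count());
    }
}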

It doesn't matter how you transition a thread into a sleep state - once in that state, the kernel's scheduler won't come around and poke it back into a ready state until the next scheduling cycle, which defaults to 15ms on Windows IIRC.

Hmmm.. I use Sleep(1) in my multicore support code. After reading more about this topic, it seems that you can use timeBeginPeriod/timeEndPeriod to increase the timer resolution. As far as I understand, this will affect the whole OS (the OS will take the highest requested resolution). Therefore I would try to encapsulate the sleep like this:

timeBeginPeriod(1);
Sleep(1);
timeEndPeriod(1);

Is this a bad idea (too much overhead, slowing down the OS)?

