multithreading

Started by
16 comments, last by Emmanuel Deloget 15 years, 9 months ago
Where exactly do you use threads in game engines? I mean, I read an article about the 3dsmax SDK that says a good renderer should support multithreading. In practice, if you use a scene manager, isn't it supposed to build a vertex list (created from primitives and other nodes) depending on the API and technique you want to use, and send that list to the pipeline in one call? User interface interaction is also part of this drawing call. So how come you need multiple threads for a renderer?
I believe they say that because the newer CPUs with multiple cores need multithreading in order to be fully taken advantage of.
That's how the development in CPU technology seems to be going - less focus on making them go faster, and more focus on making them run multiple things in parallel. It will probably not be too long before 8- and 16-core CPUs are mainstream. (Resulting in "a bucketload of threads" as the standard programming practice.)
while (tired) DrinkCoffee();
The rendering pipeline is still strictly single-threaded. Even if DX11 or the "what was supposed to be GL3.0" supported multithreading, you wouldn't get anything more out of your GPU, because the GPU itself will serialize the operations anyhow (it could actually slow things down).

Utilizing multiple cores, on the other hand, is another issue. But that's more useful for non-gfx stuff like AI, game logic, physics, etc.

Synchronizing it smartly, however, is not simple (so that you don't end up spending more time synchronizing the whole thing than you'd have spent by staying single-threaded :D)
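For example (a rough sketch -- the function names and the physics/AI split are invented, and I'm using C++11's std::thread only because it's short; the interesting part is where the join, i.e. the synchronization point, sits):

#include <thread>

struct World { /* complete game state for one frame */ };
struct AIDecisions { /* decisions the AI worker writes out */ };

// Hypothetical per-frame work; names and granularity are made up.
void UpdatePhysics(World& next, const World& prev, float dt) { /* ... */ }
void UpdateAI(AIDecisions& out, const World& prev)           { /* ... */ }
void RenderFrame(const World& prev)                          { /* ... */ }

void GameFrame(World& next, const World& prev, AIDecisions& ai, float dt)
{
    // Both workers read the immutable previous frame and write only to
    // their own output, so they need no locks between each other.
    std::thread physicsJob(UpdatePhysics, std::ref(next), std::cref(prev), dt);
    std::thread aiJob(UpdateAI, std::ref(ai), std::cref(prev));

    // Meanwhile the main thread submits the previous frame to the GPU;
    // the rendering pipeline itself stays single-threaded.
    RenderFrame(prev);

    // The synchronization point: nothing touches 'next' or 'ai' until
    // both workers have finished.
    physicsJob.join();
    aiJob.join();
}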
In my operating system, there is no graphics card support -- it just updates VGA memory directly. I can benefit from multicore. The cores have to stay out of each other's way, so I made a layer for each core, and when a core is ready to update VGA memory, it merges the layers using a mask that tells what's been updated. Cached memory is a bitch -- I still haven't mastered how to transfer stuff. There's an instruction, "write back and invalidate cache", but it doesn't do exactly what I thought. Uncached memory is just too slow -- can't use it. Fortunately, tiny graphics glitches aren't likely to do much harm.
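Roughly the idea (simplified for illustration -- the per-pixel dirty mask, the mode 13h-style resolution and the names are invented stand-ins):

#include <cstdint>
#include <cstring>

// Mode 13h-style framebuffer: 320x200, one byte per pixel.
const int kWidth  = 320;
const int kHeight = 200;
const int kPixels = kWidth * kHeight;

struct CoreLayer {
    std::uint8_t pixels[kPixels];   // this core's private drawing layer
    std::uint8_t dirty[kPixels];    // 1 = this core wrote the pixel
};

// Merge one core's layer into VGA memory, then clear its dirty mask.
void MergeLayer(volatile std::uint8_t* vga, CoreLayer& layer)
{
    for (int i = 0; i < kPixels; ++i) {
        if (layer.dirty[i])
            vga[i] = layer.pixels[i];
    }
    std::memset(layer.dirty, 0, sizeof(layer.dirty));
}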

Making something multithreaded hardly makes it multicore!
Quote:Original post by polymorphed
It will probably not be too long before 8- and 16-core CPUs are mainstream. (Resulting in "a bucketload of threads" as the standard programming practice.)


Intel's next series of CPU architectures comes in 2, 4, 6 and 8 core flavours, each of which supports hyper-threading to allow 4, 8, 12 and 16 threads 'active' at once.
Quote:Original post by phantom
Intel's next series of CPU architectures comes in 2, 4, 6 and 8 core flavours, each of which supports hyper-threading to allow 4, 8, 12 and 16 threads 'active' at once.


And improved SIMD instruction sets (AVX) are also planned, for even more (albeit somewhat easier to handle) parallel goodness. Much fun(?) lies ahead.

There are several things to do in a modern graphics engine that don't involve submitting data to the GFX card. Second, most high-end renderers are software based, since hardware isn't 100% accurate.
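To illustrate the software-renderer point: pixels are independent of each other, so a software rasterizer or ray tracer splits across cores almost for free. A minimal sketch (the ShadePixel function and the band-per-thread split are made up for illustration):

#include <cstdint>
#include <thread>
#include <vector>

// Stand-in for the real per-pixel work (ray casting, shading, ...).
std::uint32_t ShadePixel(int x, int y) { return std::uint32_t(x ^ y); }

// Render a horizontal band of the image; bands touch disjoint memory,
// so each one can run on its own core without locking.
void RenderRows(std::uint32_t* image, int width, int y0, int y1)
{
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = ShadePixel(x, y);
}

void RenderImage(std::uint32_t* image, int width, int height, int numThreads)
{
    std::vector<std::thread> workers;
    const int rowsPerThread = (height + numThreads - 1) / numThreads;

    for (int t = 0; t < numThreads; ++t) {
        const int y0 = t * rowsPerThread;
        const int y1 = (y0 + rowsPerThread < height) ? y0 + rowsPerThread : height;
        if (y0 < y1)
            workers.emplace_back(RenderRows, image, width, y0, y1);
    }
    for (auto& w : workers)
        w.join();
}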
I believe we are looking at more than 3-5 years before multithreaded programming in games becomes more mature. Most programs today are still not good at utilizing even two cores to their full potential. I have seen a few that do quite a good job on multiple cores, but they are not games.

By then I think we're looking at 16 cores as the norm. But I think it depends on how well they manage to lower the power consumption. Or we might only have 8-16 cores for some time to come.

No no no no! :)
Quote:Original post by polymorphed
I believe they say that because the newer CPUs with multiple cores need multithreading in order to be fully taken advantage of.
While it has never been more true, it turns out that it has been a really long time [in computer years, at least] since even a single core processor could be properly utilized without use of threading.

Anyway, with respect to graphics explicitly, there is a lot more to graphics than a series of render calls. There is updating of animation/particles/etc, object culling, sorting of objects to get better performance out of your draw calls, etc. Lots to do.

Object culling is a trivially parallelized process as soon as you are hitting multiple branches of whatever sort of spatial partitioning tree you are using. So is sorting, for that matter. Updating animation and particles can be done as a series of jobs, frequently in parallel with the object culling. So yes, there is a lot of room for parallelism.
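As a sketch of the culling case (the tree layout, the Intersects test and the names are placeholders, and a real engine would push these through a job system rather than spawning raw threads every frame):

#include <cstddef>
#include <thread>
#include <vector>

struct Frustum { /* view-frustum planes */ };

struct Node {
    std::vector<Node*> children;
    std::vector<int>   objects;   // indices of objects stored at this node
    // bounding volume, etc.
};

// Placeholder bounding-volume test.
bool Intersects(const Frustum&, const Node&) { return true; }

// Sequential culling of one subtree into a per-thread result list.
void CullSubtree(const Frustum& f, const Node& n, std::vector<int>& visible)
{
    if (!Intersects(f, n))
        return;
    visible.insert(visible.end(), n.objects.begin(), n.objects.end());
    for (const Node* child : n.children)
        CullSubtree(f, *child, visible);
}

// Parallel version: one worker per top-level branch, each with its own
// output vector, merged once every worker has joined.
std::vector<int> CullParallel(const Frustum& f, const Node& root)
{
    std::vector<std::vector<int>> partial(root.children.size());
    std::vector<std::thread> workers;

    for (std::size_t i = 0; i < root.children.size(); ++i)
        workers.emplace_back(CullSubtree, std::cref(f),
                             std::cref(*root.children[i]), std::ref(partial[i]));

    std::vector<int> visible(root.objects.begin(), root.objects.end());
    for (auto& w : workers)
        w.join();
    for (const auto& p : partial)
        visible.insert(visible.end(), p.begin(), p.end());
    return visible;
}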

Still though, your efforts are better focused elsewhere. Graphics can be done sequentially without much of a penalty. Parallelizing physics will earn you more in terms of performance. Parallelizing AI will allow you to actually do some interesting stuff without grinding your game to a halt [by the way, AI is a great multithreading target, and should really be one of the first two things you pick to focus on, the other being file/network IO].
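And for the file IO case, the simplest possible sketch (std::async and the "level1.dat" name are just for illustration -- a real engine would keep a dedicated loader thread with a request queue):

#include <chrono>
#include <fstream>
#include <future>
#include <iterator>
#include <string>
#include <vector>

// Blocking file load; run it off the main thread so the game loop
// never stalls on the disk.
std::vector<char> LoadFile(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}

int main()
{
    // Kick off the load; the future is polled once per frame.
    auto pending = std::async(std::launch::async, LoadFile, std::string("level1.dat"));

    bool loaded = false;
    while (!loaded) {
        // ... run the rest of the frame: input, AI, physics, rendering ...

        if (pending.wait_for(std::chrono::milliseconds(0)) == std::future_status::ready) {
            std::vector<char> data = pending.get();
            loaded = true;   // 'data' now holds the whole file
        }
    }
}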

On the note of future architectures, machines are only going to get more cores. We are pretty much butting up against the GHz limit, and can earn a lot more in terms of performance by adding more weak processors than by making the processors that are already there stronger.
Quote:Original post by MichaelT
I believe we are looking at more than 3-5 years before multithreaded programming in games becomes more mature. Most programs today are still not good at utilizing even two cores to their full potential. I have seen a few that do quite a good job on multiple cores, but they are not games.


But you do not want 100% utilization.

Let's say your physics simulation utilizes your CPU at 100% (4 cores).

Then you put this on an 8-core machine, and you get 8x50%.

But here's the real problem: if you *need* 4x100%, then the simulation will not run on anything with 2 cores. And obviously you cannot just lower the fidelity, e.g. by changing the time-step from 100Hz to 50Hz.

With real time, 100% utilization becomes a binary condition. The simulation either runs in real-time, or it doesn't. If anything, the greatest benefit multi-threading can bring is lowering the utilization.

The only thing that can fill the remaining resources is fluff. More particles, for example, but definitely nothing that affects the state of simulation.

Alternatively, you have fixed hardware (consoles). There you do not need any magic techniques, since you can hard-code the system layout and lose a lot of the fat that comes from an infinitely scalable design.

But as far as real-time goes, the system either is powerful enough to do the work, or it isn't. For this purpose, 100% utilization is a moot point as far as useful work goes.

This topic is closed to new replies.
