Responsive UI in compute-heavy games


Say, you're working on a game with a complex simulation or lots of AI. Often in such games, the UI stalls while the game catches up. I see this a lot in loading screens too; the game (including the spinner or whatever other "now loading" hint the game provides) freezes while the application loads some file, presumably doing something CPU-bound.

Why is that, and is there a way to avoid it? Being a server engineer, I don't work much with UI-driven applications, but I know that in your bog-standard desktop application, the UI runs on its own thread and business logic runs on another (or several), with input being dispatched via events or some similar messaging construct. Can this threading model be applied to games like the above?

I've done a few simple single-threaded games like side-scrollers and text-based stuff in the past, but I'm fairly ignorant when it comes to the threading models of bigger affairs.


Often in such games, the UI stalls while the game catches up. [...]
Why is that, and is there a way to avoid it?

Because it's the easiest to implement, and often is "good enough".

I know that in your bog-standard desktop application, the UI runs on its own thread and business logic runs on another (or several), with input being dispatched via events or some similar messaging construct.

As far as I'm aware, even with desktop applications, this isn't the case by default. You can make desktop applications multithreaded, but I don't believe they are by default.

I'm pretty sure both Win32 and Qt are single-threaded unless you (as the programmer) choose to spawn more threads, but I might be mistaken about that.

Can this threading model be applied to games like the above?

Yep.

Actually, GUIs in games are usually a lot simpler than desktop GUIs. Where desktop GUIs cache a lot of their rendering to avoid as much redrawing as possible, games usually just redraw everything every frame and only cache computationally expensive things like text rendering, because games usually have smaller (and simpler) widget hierarchies.
This isn't always the case, though. Take MMOs or larger RPGs, for example: they have complex widget hierarchies that may need more caching than the average game.
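
Something like the sketch below is the usual shape of that text-caching trick. It's a rough illustration only; the "engine" calls (RasterizeText, DrawTexture) are made-up stand-ins, not any particular library's API:

```cpp
// Redraw the whole UI every frame, but cache the expensive text
// rasterization. The "engine" side here is a stand-in for illustration.
#include <cstdio>
#include <string>

struct Texture { std::string baked; };  // stand-in for a GPU texture

Texture* RasterizeText(const std::string& s) {     // expensive in a real engine
    std::printf("rasterizing '%s'\n", s.c_str());  // only prints on a cache miss
    return new Texture{s};
}
void DrawTexture(const Texture* t) { (void)t; /* blit it somewhere */ }

class Label {
    std::string text_;
    Texture*    cached_ = nullptr;  // survives across frames
public:
    void SetText(const std::string& s) {
        if (s == text_) return;              // unchanged: keep the cache
        text_ = s;
        delete cached_; cached_ = nullptr;   // invalidate only on change
    }
    void Draw() {                            // called every single frame
        if (!cached_) cached_ = RasterizeText(text_);  // pay the cost once
        DrawTexture(cached_);
    }
    ~Label() { delete cached_; }
};

int main() {
    Label fps;
    for (int frame = 0; frame < 3; ++frame) {
        fps.SetText("FPS: 60");  // same string, so it rasterizes only once
        fps.Draw();
    }
}
```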

If needed, you can multithread your UI in games as well. But (not being in the game industry myself) I'd guess you'll usually keep your UI on the same thread as the rest of your rendering, and instead spawn threads (or tasks running on worker threads) for whatever computationally expensive work you expect to slow down the main thread. Slow AI, pathfinding, and level geometry streaming can be handled on separate threads, instead of the (relatively) fast UI code.
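
To make that concrete, here's a minimal sketch of the pattern (ComputeAITurn and the numbers are made up): kick the slow work off with std::async, then poll the future each frame so the main loop never blocks on it.

```cpp
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

int ComputeAITurn() {  // stand-in for slow AI / pathfinding work
    std::this_thread::sleep_for(std::chrono::milliseconds(300));
    return 42;         // e.g. the chosen move
}

int main() {
    // Kick the heavy work off the main thread.
    std::future<int> aiResult = std::async(std::launch::async, ComputeAITurn);

    // The main loop keeps pumping input and drawing the UI meanwhile.
    for (;;) {
        // PollInput(); DrawSpinner();  <- would go here in a real game
        if (aiResult.wait_for(std::chrono::milliseconds(0))
                == std::future_status::ready) {
            std::printf("AI chose move %d\n", aiResult.get());
            break;     // apply the result back on the main thread
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~1 frame
    }
}
```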

D3D9 (and OpenGL without a lot of driver-specific hackery) forces you to perform almost all of your graphics resource management on a single thread, which must also be the one that created your window and the one that polls Windows for input messages. Creating too many large resources per frame will block this thread and ruin your graphical framerate :(

D3D11 partially fixed this: now any thread can perform graphics resource management. However, the D3D back-end still runs on a single thread and may act as a bottleneck. If this back-end thread stalls, then your "main" thread (the one performing backbuffer flips, etc.) may block until it catches up... So you still have to be careful how much graphics work you do per frame on a loading screen, even from background threads.

D3D12/Vulkan should finally fix this issue.

There are also non-graphics-related issues. If, for whatever reason, the game is doing deserialization of large complex/compressed file formats on the "main thread", then that will impact the loading screen. Some game engines actually do a lot of wasteful deserialization during loading...


D3D9 (and OpenGL without a lot of driver-specific hackery) forces you to perform almost all of your graphics resource management on a single thread, which must also be the one that created your window and the one that polls Windows for input messages. Creating too many large resources per frame will block this thread and ruin your graphical framerate

I'm less interested in graphics stalls here than in CPU-oriented stalls, like how Civilization 5 locks up while the game is processing AI turns. Certainly, if your rendering is bottlenecked, you're kind of screwed.


There are also non-graphics-related issues. If, for whatever reason, the game is doing deserialization of large complex/compressed file formats on the "main thread", then that will impact the loading screen. Some game engines actually do a lot of wasteful deserialization during loading...

This has always confused me too. Why on earth would you ever want to load on the main thread? Certainly, if you properly pack/format your resources, you'll be able to do everything in asynchronous IO land anyway, but it seems to me that a responsive UI should be something sacred when you're writing any consumer-oriented software.

The simple answer is that everyone still sucks at writing multithreaded software. Games only took it seriously 10 years ago, when Sony/MS released multicore systems. Universities still largely suck at teaching it -- introducing the shared-memory paradigm and then stopping, as if that's the big picture. The number of junior engineers who answer "Uh, use a mutex?" when asked how to make multithreaded game loops is embarrassing :D The message-passing paradigm should be the default choice, but it's nowhere near common enough in the programming zeitgeist.
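
For anyone wondering what that looks like in practice, message passing can be as simple as a blocking queue between the threads. A minimal sketch (the names are illustrative; there's still a mutex inside the channel, but the threads never poke at each other's data directly):

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

template <typename T>
class Channel {
    std::mutex              m_;
    std::condition_variable cv_;
    std::queue<T>           q_;
public:
    void Send(T msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    T Receive() {  // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&]{ return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
};

int main() {
    Channel<std::string> toUI;

    std::thread sim([&]{          // "game logic" thread: owns its own state
        toUI.Send("turn 1 done");
        toUI.Send("turn 2 done");
        toUI.Send("quit");
    });

    for (;;) {                    // "UI" thread: reacts to messages only
        std::string msg = toUI.Receive();
        if (msg == "quit") break;
        std::printf("UI update: %s\n", msg.c_str());
    }
    sim.join();
}
```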

Why on earth would you ever want to load on the main thread?
...
Certainly, if your rendering is bottlenecked, you're kind of screwed.

I mentioned one reason above -- D3D/GL have traditionally forced you to load graphical resources on the main thread. I wasn't talking about the GPU at all above, just about how the API forces CPU-side work to be done on certain threads.

Traditionally, games and game engines have all been single-threaded too. You'd think that 10 years after Sony/MS forced multicore onto us we'd have adapted... but a damn lot of games are still written with a single-threaded mindset. Shitty threading support in extension languages like Lua doesn't help... My last company wrote all their gameplay code in Lua, which meant it was stuck on one thread, relying on the engine to magically push the heavy lifting onto all the other cores somehow.

Most engines now are based around the model of there being a singular "main thread", accompanied by N "worker" threads whose logic is basically "while(1) { Job j; if( g_jobs->pop(j) ) j.Run(); }". The main thread then tries to break up any parallel workloads into jobs and hands them off to the workers. Some things are easy to port to this model, but anything dealing with very large and complex data structures (with a lot of synchronization, or random write locations) is a lot harder. Loading a game world and populating it with disparate entity types might fall into that latter category for a lot of games/engines.
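
Fleshing that worker loop out into something runnable, as a sketch (using std::function as the Job type; the queue itself is illustrative -- real engines use fancier, often lock-free, versions):

```cpp
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Job = std::function<void()>;

class JobQueue {
    std::mutex              m_;
    std::condition_variable cv_;
    std::queue<Job>         q_;
    bool                    done_ = false;
public:
    void Push(Job j) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(j)); }
        cv_.notify_one();
    }
    bool Pop(Job& j) {  // blocks; false means "drained and shutting down"
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&]{ return done_ || !q_.empty(); });
        if (q_.empty()) return false;
        j = std::move(q_.front()); q_.pop();
        return true;
    }
    void Shutdown() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
    }
};

int main() {
    JobQueue g_jobs;
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 2;  // hardware_concurrency() may report "unknown"

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([&]{        // the quoted loop, fleshed out
            Job j;
            while (g_jobs.Pop(j)) j();   // run jobs until shutdown
        });

    for (int i = 0; i < 8; ++i)          // main thread hands out the work
        g_jobs.Push([i]{ std::printf("job %d done\n", i); });

    g_jobs.Shutdown();                   // workers drain the queue, then exit
    for (auto& w : workers) w.join();
}
```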

Yeah, the actual part of streaming the bytes from disk into RAM should almost universally be asynchronous by now, but many engines still have massive amounts of deserialization work to do after loading the data, which may have stupid amounts of dependencies across multiple game systems. In our engine, we try to deal with this by keeping the on-disk and in-RAM resource formats identical where possible, removing the need for most deserialization tasks, and breaking any remaining instantiation/deserialization work into clear phases so that dependencies between threads can be easily resolved and scheduled.
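
A toy example of the "identical formats" idea (MeshHeader and its layout are invented for illustration): the blob you streamed from disk is the runtime data, so "deserialization" is just pointing into it. A real engine would likely memory-map the file and mind strict-aliasing/alignment rules; this is just the shape of the trick:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

#pragma pack(push, 1)
struct MeshHeader {        // written to disk in exactly this layout
    uint32_t magic;        // e.g. 'MESH'
    uint32_t vertexCount;  // vertex floats follow immediately in the file
};
#pragma pack(pop)

struct MeshView {                 // non-owning view into the loaded blob
    const MeshHeader* header;
    const float*      positions;  // 3 floats per vertex
};

MeshView InterpretMesh(const std::vector<uint8_t>& blob) {
    // No parsing pass: just point into the buffer we streamed from disk.
    MeshView v;
    v.header    = reinterpret_cast<const MeshHeader*>(blob.data());
    v.positions = reinterpret_cast<const float*>(blob.data() + sizeof(MeshHeader));
    return v;
}

int main() {
    // Fabricate a one-vertex "file" in memory to show the zero-parse load.
    std::vector<uint8_t> blob(sizeof(MeshHeader) + 3 * sizeof(float), 0);
    auto* h = reinterpret_cast<MeshHeader*>(blob.data());
    h->magic = 0x4D455348u;
    h->vertexCount = 1;

    MeshView m = InterpretMesh(blob);
    std::printf("verts: %u\n", (unsigned)m.header->vertexCount);
}
```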

There's also a lot of different ways to design a main loop for a game. Often the simulation loop and the rendering loop are tied together somehow -- e.g. they're the same loop. The simplest loop runs one simulation step, followed by one render step. If either of them go over their time budget, the framerate suffers.

Different games/genres will have different types of loops. E.g. in my current game, simulation is partially decoupled from rendering, but a long simulation frame will still block the next graphics frame... This usually isn't an issue for me, as simulation is so fast that I often run 2+ simulation updates at a time, followed by one rendering update! However, this breaks down on loading screens if any one asset takes 10+ms to deserialize... causing that frame to go over budget.
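
For reference, that "run 2+ simulation updates, then render once" shape is the classic fixed-timestep accumulator. A rough sketch, with Simulate/Render as empty stand-ins:

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

void Simulate(double dt) { (void)dt; /* advance game state by dt seconds */ }
void Render()            { /* draw the current state */ }

int main() {
    const double step = 1.0 / 60.0;  // fixed simulation timestep
    double accumulator = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 10; ++frame) {  // bounded just for the example
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run as many fixed steps as real time demands: possibly 0, often 2+.
        // Note one slow Simulate() still delays Render() -- the blocking
        // described above -- unless simulation moves to its own thread.
        while (accumulator >= step) {
            Simulate(step);
            accumulator -= step;
        }
        Render();
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // fake vsync
    }
}
```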

[stuff]

Ah, I see. Thanks for the information. I hadn't considered the aspect of loading graphical resources being stuck on the rendering thread. It was baffling me, because if I had to do something like run a few hundred megabytes of data through zlib and into RAM, I'd do it all on another thread and just pass a pointer to the memory back.
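
Roughly what I had in mind, as a sketch (DecompressAll is a placeholder; the actual zlib inflate calls would go inside it):

```cpp
#include <cstdint>
#include <cstdio>
#include <future>
#include <vector>

std::vector<uint8_t> DecompressAll(std::vector<uint8_t> compressed) {
    std::vector<uint8_t> out;
    // ... run `compressed` through zlib here, appending to `out` ...
    out.resize(compressed.size() * 2);  // placeholder for the inflated data
    return out;                         // moved, not copied, back to the caller
}

int main() {
    std::vector<uint8_t> fileBytes(8 * 1024 * 1024);  // pretend: read via async IO

    // Main thread stays free to pump input and draw the loading screen.
    auto pending = std::async(std::launch::async, DecompressAll,
                              std::move(fileBytes));

    // ... per frame: render spinner, check pending.wait_for(0ms) ...

    std::vector<uint8_t> data = pending.get();  // "pass the memory back"
    std::printf("decompressed %zu bytes\n", data.size());
}
```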

The model you describe makes much more sense, and it's a bit uplifting since I've been treating games as these weird special snowflakes because of their interactive nature and the eccentricities of the graphics APIs. Knowing that most engines do it pretty much the same way as 90% of multithreaded software out there cures me of my analysis paralysis. :)

D3D9 (and OpenGL without a lot of driver-specific hackery) forces you to perform almost all of your graphics resource management on a single thread, which must also be the one that created your window and the one that polls Windows for input messages.


That's not true for the input message part.
I've been using a separate thread for input and another for OpenGL for 20 years without issues, in an editor application on Windows.


The simple answer is that everyone still sucks at writing multithreaded software.

This is me. Anybody got any advice to improve? Any articles, tutorials, blog posts?

And more on topic: I was also interested in multithreaded architectures for responsive UIs. Any articles or tutorials on that?

-potential energy is easily made kinetic-

Anybody got any advice to improve?


Practice makes perfect!

Threading is a beast, and there's lots of documentation and papers on it. Most of it is from the perspective of server-side development and scaling a server for many requests, though, and has to be repurposed for gamedev...

It's a shame that BeOS isn't around any more. I learned C++ on that OS, which forced you to multithread everything. It was a great way to learn, and to make all (or a lot of) the possible mistakes.

Definition of a man-year: 730 people trying to finish the project before lunch

