

Game thread synchronization (display & game logic)



#1 floatingwoods   Members   -  Reputation: 292


Posted 27 May 2013 - 06:16 AM

I am wondering how others handle this situation:

 

You have an application (e.g. a game) that computes a state (the game logic) and also displays it. From what I read, it seems that the game logic usually runs in a different thread than the display. Which brings me to the question:

 

How is the display synchronized with the game logic? If there is no synchronization, we can have the following situations:

 

- the game is stepped forward once, displayed, stepped forward twice, displayed, etc. --> the display will appear shaky!

- while the game is being stepped forward, a frame is displayed --> the display can appear "strange" (e.g. a bullet can appear to hit the game character, but since the character's state was not yet updated, it will actually not hit it)

- A game character can be removed from the scene during the game logic calculation. If the display happens at that time, there might be a crash.

 

We can synchronize the 2 threads to some extent by locking resources (the last example above can be handled by deferring object destruction). But in order to avoid all of the above-mentioned problems, one would have to run the 2 threads in alternation (or similar, e.g. step the game twice, render, step the game twice, render, etc.). Doing so makes the use of 2 threads uninteresting, since a single thread would run at (more or less) the same speed and none of the resource-locking synchronization would have to be taken care of: a single thread would be easier and give the same result. No?

 

I guess that the game state must be duplicated in some way (e.g. all positions would be stored as "current" and "forDisplay", and at the end of a game step "current" would be copied to "forDisplay", so that the rendering thread would be able to run concurrently).
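Something along those lines, as a minimal sketch with made-up names: each object keeps a "current" transform written by the logic thread and a "forDisplay" copy that is published at the end of each step for the render thread.

#include <mutex>
#include <vector>

// Hypothetical illustration of the "current"/"forDisplay" idea above.
struct Vec3 { float x = 0.f, y = 0.f, z = 0.f; };

struct ObjectState {
    Vec3 current;     // written by the game-logic thread only
    Vec3 forDisplay;  // read by the render thread only
};

class World {
public:
    // Called from the game-logic thread.
    void stepLogic(float dt) {
        for (auto& o : objects_)
            o.current.x += dt;                 // placeholder for real game logic
        std::lock_guard<std::mutex> lock(publishMutex_);
        for (auto& o : objects_)
            o.forDisplay = o.current;          // publish once per step
    }

    // Called from the render thread.
    void render() {
        std::lock_guard<std::mutex> lock(publishMutex_);
        for (const auto& o : objects_) {
            (void)o;                           // draw using o.forDisplay only
        }
    }

private:
    std::vector<ObjectState> objects_;
    std::mutex publishMutex_;
};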

 

And what happens if the game logic needs to use some OpenGL commands? E.g. to render to an FBO and do some simple image processing on that? Then the 2 threads would again need to be synchronized in order to correctly switch the OpenGL contexts!

 

Just curious how things are done usually ;)

 

 




#2 Hodgman   Moderators   -  Reputation: 31785


Posted 27 May 2013 - 07:18 AM

From what I read, it seems that the game logic usually runs in a different thread than the display.

In my experience, it doesn't make much sense to dedicate a whole thread to one small task, such as communicating with the graphics API.

In the engines I've used, threading is not done in this way; their usage of threads is rotated 90 degrees to this design ;) There is one thread for each CPU core, and every thread contributes to task #1, then they all contribute to task #2, and so on... Usually this is achieved via a shared queue of "jobs" that need to be executed. The threads simply consume work from this queue and add collections of work back into it.
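For illustration only (this is not from any particular engine): a bare-bones job queue in C++, where every worker thread pulls whatever work is available, so all cores help with whichever task is currently being processed.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal shared job queue: all workers consume from the same queue,
// and jobs may push more jobs back into it.
class JobQueue {
public:
    void push(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

    void runWorkers(unsigned count) {
        for (unsigned i = 0; i < count; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }

    void stop() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

private:
    void workerLoop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                if (stopping_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // run outside the lock so other workers can grab jobs
        }
    }

    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::vector<std::thread> workers_;
    bool stopping_ = false;
};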

 

- while the game is being stepped forward, a frame is displayed --> the display can appear "strange" (e.g. a bullet can appear to hit the game character, but since the character's state was not yet updated, it will actually not hit it)
- A game character can be removed from the scene during the game logic calculation. If the display happens at that time, there might be a crash.

These things should obviously not happen -- they're symptoms from two threads both using the same data set at the same time, which is a race condition!

 

Seeing that your update and render threads are solving completely different problems, they don't even need to share much state, because the data required by each is different -- render functions don't need "hitpoints" and update functions don't need "triangle counts". There's nothing wrong with having an "NPC instance" with a position member used by the update thread, which owns a "model instance" that also owns a duplicated position member used by the render thread -- two different problems are best solved with two different data layouts. Don't try to represent everything in one big ball of spaghetti and then have two completely different processes try to weave their way through it.

The update thread should produce a big blob of data that is consumed by the render thread, containing just the information required for rendering. The update thread should not have access to any data that is only used for rendering, and the render thread should not have access to any data that is only used for updating.
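Purely as an illustration (the names are made up), that "blob" could be a flat packet that the update side fills and hands over once per frame, containing only what rendering needs:

#include <cstdint>
#include <vector>

// Hypothetical "frame packet": everything the render thread needs for one
// frame, and nothing else (no hitpoints, no AI state).
struct RenderItem {
    uint32_t meshId;
    float    position[3];
    float    orientation[4];   // quaternion
    float    scale[3];
};

struct FramePacket {
    uint64_t                frameNumber;
    std::vector<RenderItem> items;   // filled by update, consumed by render
};

// Update thread: builds a fresh FramePacket from gameplay state each frame.
// Render thread: draws from the packet it was handed, never from gameplay state.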

And what happens if the game logic needs to use some OpenGL commands?

Then it should ask the render thread to issue those commands, in the same way that it asks it to issue all the other rendering GL commands. There shouldn't be any real difference in this use case vs 'normal' rendering.

- the game is stepped forward once, displayed, stepped forward twice, displayed, etc. --> the display will appear shaky!

Often your rendering tasks are designed to run at some fixed display rate, e.g. 30Hz, 60Hz, etc. If so, you've got a time-based target for your updates -- a 60Hz game should try to advance the simulation by 16.6ms worth of 'ticks' before each render. If you're using vsync, then you can make a pretty accurate guess as to when each image will be displayed to the user (1/refresh-rate seconds after the last one), so you want to advance the simulation that far into the future, reliably, to avoid jitters.
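A minimal single-threaded sketch of that fixed-tick idea (along the lines of the "Fix your timestep" article cited in the next reply); update and render are placeholders for your own functions:

#include <chrono>

// Fixed-timestep loop: run whole simulation ticks until we have caught up
// with real time, then render once.
void gameLoop(void (*update)(double), void (*render)())
{
    using clock = std::chrono::steady_clock;
    const double tick = 1.0 / 60.0;          // 60 Hz simulation
    double accumulator = 0.0;
    auto previous = clock::now();

    for (;;) {                               // exit condition omitted for brevity
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= tick) {        // may run 0, 1 or several ticks
            update(tick);
            accumulator -= tick;
        }
        render();
    }
}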

This is one reason why I see absolutely no point in putting update/render on their own threads and leaving it up to the OS to make sure each one runs for an appropriate amount of time... You can determine how many updates are optimal for each render, and then perform them serially -- N cores performing your updates, and then N cores performing your rendering.



#3 Matias Goldberg   Crossbones+   -  Reputation: 3695


Posted 27 May 2013 - 08:29 PM

Your analysis is correct. However, you're exaggerating how bad it is.

the game is stepped forward once, displayed, stepped forward twice, displayed, etc. --> the display will appear shaky!

If you read the preferred way of updating the simulation and rendering in "Fix your timestep", you'll see that even in single-threaded scenarios, if rendering takes too long, the physics will start updating more often than the rendering.
In other words, this is a problem that appears in single-threaded programs as well. It's not shaking, it's frame skipping.
 
However, I agree that without proper care the update order can be pretty chaotic, and then it will indeed look like it's shaking; that part is exclusively a multithreading problem. Let's look at it in more detail:
 
 
First, Rendering only needs 4 elements from Logic; if you need more, you should rethink the design (see the sketch right after this list):

  • Transformation state of every object: Position, Quaternion and scale. That's 40 bytes (64 bytes if you choose a matrix 4x4 representation)
  • The playback state of the animation (if animation needs to be sync'ed from Logic). That's anywhere from 0 to 32 bytes
  • A list of Entities created in a frame
  • A list of Entities destroyed in a frame
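For concreteness, a rough sketch of what those four elements might look like as a per-frame packet (field names are made up, not a prescribed layout):

#include <cstdint>
#include <vector>

// Rough sketch of the per-frame data Logic hands to Rendering.
struct ObjectTransform {
    uint32_t objectId;
    float    position[3];     // 12 bytes
    float    orientation[4];  // 16 bytes (quaternion)
    float    scale[3];        // 12 bytes -> 40 bytes of payload
};

struct AnimationState {
    uint32_t objectId;
    uint16_t clipId;
    float    playbackTime;    // exactly what to sync depends on the game
};

struct LogicToRenderFrame {
    std::vector<ObjectTransform> transforms;
    std::vector<AnimationState>  animations;
    std::vector<uint32_t>        createdEntities;
    std::vector<uint32_t>        destroyedEntities;
};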

Second, forget the idea that you need to render exactly what the simulation has. If your game can avoid that restriction (99% chance it can), you can relax the synchronization.
 
Third, locks aren't expensive, lock contention is.
 
Now, creation can be handled without invasive locks: Logic builds a list of created entities and, at the end of the frame, locks a lightweight mutex, updates Graphics' list and releases the lock. Chances are the Graphics thread wasn't accessing that list, because it has a lot else to do. At the end of Graphics' update... it locks, clones the list, and releases the lock.
In both cases it takes almost no time to work inside the locked resource, and it is a tiny fraction of all the work each thread has to do, so lock contention is extremely low. (Furthermore, you can avoid mutexes entirely using a preallocated space and interlocked instructions, and only lock if the preallocated space gets full, but I won't go there.)
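A sketch of that hand-off (hypothetical names): Logic briefly locks to publish its pending list, Graphics briefly locks to swap the shared list out, and both do the real work outside the lock.

#include <mutex>
#include <vector>

struct EntityCreationInfo { int id; /* mesh, material, ... */ };

std::mutex                      creationMutex;
std::vector<EntityCreationInfo> sharedCreated;   // touched only under the lock

// Logic thread, end of frame: publish this frame's creations.
void publishCreated(std::vector<EntityCreationInfo>& pendingThisFrame)
{
    std::lock_guard<std::mutex> lock(creationMutex);
    sharedCreated.insert(sharedCreated.end(),
                         pendingThisFrame.begin(), pendingThisFrame.end());
    pendingThisFrame.clear();
}

// Graphics thread, end of its update: grab the list, work outside the lock.
std::vector<EntityCreationInfo> consumeCreated()
{
    std::vector<EntityCreationInfo> local;
    {
        std::lock_guard<std::mutex> lock(creationMutex);
        local.swap(sharedCreated);   // cheap: just swaps internal pointers
    }
    return local;                    // create render-side objects from this
}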
 
There's a catch here, and remember my second piece of advice: you don't care that you're rendering exactly what is in the simulation. Suppose frame A is simulated and created 3 objects, but Graphics was too fast and had already looked into the list. It then loops again and renders frame A, but without those 3 new objects. Do you really care? Those 3 will get added in the next frame. It's a 16ms difference, and not a big one, because the user doesn't even know those 3 objects should have been there.
 
The same happens when destroying objects. Note that a pointer shouldn't be deleted until Graphics has marked that object as "I know you killed the foe, I'm done rendering it", so that you're sure neither thread is still using the pointer. Only then can you delete the pointer. In other words, you've retired the object from the scene, but delayed deleting the pointer.
Otherwise, as you say, a crash will happen.
So in this case, an object may be rendered one more frame than it should. Big deal (sarcasm).
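One way to express that "retire now, delete later" rule, as a sketch with made-up names: Logic only marks the object as retired, Graphics acknowledges it once it has stopped rendering it, and only then is the object actually deleted.

#include <atomic>
#include <memory>
#include <vector>

struct Entity {
    std::atomic<bool> retired{false};      // set by the Logic thread
    std::atomic<bool> renderDone{false};   // set by the Graphics thread
};

// Logic thread: retire the entity, but do NOT delete it yet.
void retireEntity(Entity& e) { e.retired.store(true); }

// Graphics thread: skip retired entities and acknowledge them.
void renderOrAcknowledge(Entity& e)
{
    if (e.retired.load()) { e.renderDone.store(true); return; }
    // ... draw e ...
}

// Logic thread, some later frame: safe to free once Graphics acknowledged.
void collectGarbage(std::vector<std::unique_ptr<Entity>>& entities)
{
    for (auto it = entities.begin(); it != entities.end(); ) {
        if ((*it)->retired.load() && (*it)->renderDone.load())
            it = entities.erase(it);       // actually destroys the Entity
        else
            ++it;
    }
}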
 
Now we're left with updating position & animation data. You have two choices:

  • You really don't care about consistency. Read transformations without any locking at all and don't worry about race conditions. The chance that Logic is updating a transform at the same time Graphics is reading it is minimal (you should be copying the position from your physics engine into a separate copy, all inside the Logic thread, and then reading that copy from the Graphics thread). If memory is aligned, you won't get NaNs or other garbage. But you may get very rare inconsistent states (it's a race condition, after all), for example a position very far from where it actually should be... but it only lasts for a frame! The chance of this happening often is extremely low, because cloning that transform is very fast even for thousands of objects. So, just a flickered frame. Mass Effect 3 is a bad example where this flickering gets really noticeable: they must be reading positions from the physics engine's data directly instead of cloning them into a list, or they use a memory representation other than a std::vector or plain old array (thus increasing cache misses and time spent iterating), which increases the chance of reading data in an invalid state. (That's an example of an acclaimed AAA game doing this and royally screwing it up.)
  • You do care about consistency. Use a lightweight mutex when copying the physics transform to another place inside the Logic thread, and do the same from the Graphics thread. In other words, it's the same as above, but with locks. Lock contention is again very low.

I've tried both, and #1 really works OK if done properly (don't take my word for it, try it yourself! It's easy to switch between the two, just disable/re-enable the mutexes).
Note that #1 isn't a holy grail of scalability, because it can still slow down your loop a lot due to cache-line sharing forcing flushes too often (which only happens if both threads access the same data at the same time and one of them writes to it).
 
The same happens with animation, but it's a bit more complex because you really don't want time going backwards in some cases (e.g. a particle effect spawned at a given keyframe could spawn twice); I won't go into detail. Getting that one right and scalable is actually hard (but again, solutions rely on the assumption that lock contention will be minimal).

Remember, you don't care that you're rendering the exact thing, but 99% of the time you will, and when it screws up it often goes unnoticed and fixes itself in the next frame.
 
And remember, synchronizing points 1 to 4 should only be a tiny fraction of what your threads do. The Logic thread spends most of its time integrating physics, a lesser part updating the logic side, and only then syncing.
Graphics spends most of its time culling, updating derived transforms of complex node setups, sorting the render queue, and sending commands to the GPU; only then does it sync.
 
Note that if you read transform state directly from the physics engine's data, you'll have terrible cache miss rates or will have to put a mutex around the physics integration step, and that does have a lot of lock contention.
 
All of this works if there are at least two cores. If the threads struggle for CPU time, then the "quirkiness" when rendering becomes awfully noticeable. Personally, I just switch to a single-threaded loop when I detect single-core machines. If you've designed your system well, creating both loops shouldn't take any effort at all; just a couple of lines.
 
And last but not least, there's the case where you really do care about consistency, and you absolutely care that you're rendering exactly what is in the simulation.
In that case you can only resort to using a barrier at the end of the frame: clone the state from Logic to Graphics, and continue. If both threads take a similar amount of time to finish the frame, multithreading will really improve your game's performance; otherwise one thread will stall and must wait for the other one to reach the barrier, and hence the difference between the single-threaded version of your game and the multithreaded one will be minimal.
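A sketch of that strict variant, assuming C++20's std::barrier (which post-dates this thread): the barrier's completion function clones the state while both threads are parked at the rendezvous, then both continue with the next frame.

#include <barrier>
#include <vector>

struct GameState { std::vector<float> positions; };

GameState logicState;    // written by the logic thread
GameState renderState;   // read by the render thread

// The completion function runs once per frame, while both threads are parked.
std::barrier frameBarrier(2, []() noexcept {
    renderState = logicState;            // clone Logic -> Graphics between frames
});

void logicThread(const bool& running)
{
    while (running) {
        // ... simulate one frame into logicState ...
        frameBarrier.arrive_and_wait();  // wait for render, state is cloned here
    }
}

void renderThread(const bool& running)
{
    while (running) {
        // ... draw the previously cloned frame from renderState ...
        frameBarrier.arrive_and_wait();  // wait for logic, state is cloned here
    }
}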
 
 
You asked how this is dealt with, and there is no simple answer, because multithreading can be quite complex. There are numerous ways to deal with it.
A game may use locks for adding & removing objects and updating transforms; another game may not use locks for transforms. Another engine may use interlocked functions to add & remove objects without mutexes.
Another game may just use barriers. Another game may not use this update/render split model at all and rely on tasks instead*. There are a lot of ways to address the problem, and it boils down to trading "visual correctness" vs scalability.

 

*Edit: And it can be like Hodgman described (all cores contribute to task #1, then to task #2, etc) or they may issue commands using older data and process independently (i.e. process physics in one task, process AI independently in another task using results from a previous frame, etc)


Edited by Matias Goldberg, 27 May 2013 - 08:33 PM.


#4 floatingwoods   Members   -  Reputation: 292


Posted 28 May 2013 - 06:33 AM

Hodgman and Matias,

 

Thanks a lot for the very clear and exhaustive explanations. The links you mentioned were also helpful.

To give a little bit more background: I get a lot of inspiration from the gamedev forums, but I am more in the field of simulation. There, it matters if something gets rendered wrongly, or if a frame gets skipped for no specific reason. Interpolation between two states could work, but it might also lead to unrealistic renderings and confusion, especially when stepping through generated videos later on (usually there is one frame per simulation step, which helps with debugging certain set-ups). Finally, the simulation (or game logic) uses OpenGL functionality in order to generate virtual images, operate on them (e.g. image processing) and create an output. The time at which the "internal" or FBO rendering occurs depends on the simulation loop and how it is programmed. So there I get another heavy restriction regarding multithreading: the rendering thread and the "game logic" thread can both generate OpenGL commands and need to switch contexts every time. In that case, locking (or rather blocking) the other thread is the only option.

Given the many constraints and limitations, I concluded that having an additional thread in charge of rendering would not give me much of a speed increase, but would drastically complicate the architecture.

My application basically uses one single thread (of course it also uses worker threads for specific tasks) that handles the "game logic" and the visualization. But I wanted to evaluate the benefits of splitting the task into 2 different threads, and maybe even offering the 2 alternatives.

 

Again thanks for the insightful replies!



#5 Matias Goldberg   Crossbones+   -  Reputation: 3695


Posted 28 May 2013 - 12:51 PM

I see you intend to issue OpenGL calls from multiple threads. As you said, this is a very bad idea, and I personally avoid it due to lots of issues in the past, unless you keep a 100% independent GL context for each thread.

Otherwise, switching contexts is so error-prone (and driver bugs may appear, to be honest) that any performance gain you hope to get from going multithreaded is going to be nullified, or worse; just leave it single-threaded.

 

If your logic needs to issue rendering calls, you haven't abstracted rendering from logic enough. If you're on a tight schedule, well, OK; but if you've got the time, rethink how the systems relate: whenever logic needs something from OpenGL, it requests it from the render thread and periodically checks whether the results have arrived.

 

From what you describe, your project appears to involve a lot of image processing on what has already been rendered (am I right?). In that case, since your game logic is rendering and cannot be decoupled from it, you should go single-threaded and, for multithreaded approaches, rely on a method more like what Hodgman described (map a buffer from GPU to CPU, then issue N threads to work on the received image, and wait for all of them to finish).
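A rough sketch of that pattern, using a plain glReadPixels readback instead of a mapped PBO for brevity (and assuming a current GL context on the calling thread): the single GL thread reads the image back, then fans the pixel processing out to worker threads and waits for them before touching GL again.

#include <algorithm>
#include <thread>
#include <vector>
#include <GL/gl.h>

// Read the framebuffer back on the (single) GL thread, then let N worker
// threads process disjoint row ranges of the image in parallel.
void processRenderedImage(int width, int height, unsigned workerCount)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    std::vector<std::thread> workers;
    int rowsPerWorker = (height + static_cast<int>(workerCount) - 1)
                        / static_cast<int>(workerCount);
    for (unsigned w = 0; w < workerCount; ++w) {
        int firstRow = static_cast<int>(w) * rowsPerWorker;
        int lastRow  = std::min(height, firstRow + rowsPerWorker);
        workers.emplace_back([&, firstRow, lastRow] {
            for (int y = firstRow; y < lastRow; ++y) {
                unsigned char* row = &pixels[static_cast<size_t>(y) * width * 4];
                (void)row;   // ... image processing on this row, no GL calls here ...
            }
        });
    }
    for (auto& t : workers) t.join();   // wait before issuing further GL commands
}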


Edited by Matias Goldberg, 28 May 2013 - 12:52 PM.







