Your analysis is correct. However, you're exaggerating how bad it is.

> The game is stepped forward, displayed, stepped forward twice, displayed, etc. --> the display will appear shaking!
If you read the preferred way of updating the simulation and rendering in "Fix Your Timestep", you'll see that even in single-threaded scenarios, if rendering takes too long, the physics will start updating more often than the rendering.
In other words, this is a problem that appears in single-threaded programs as well. It's not shaking; it's frame skipping.
However, I agree that without proper care the update order can be pretty chaotic, and then it will indeed look like it's shaking; that part is exclusively a multithreading problem. Let's look at it in more detail:
First, Rendering only needs 4 pieces of data from Logic; if you need more, you should rethink the design:
- The transformation state of every object: position, quaternion and scale. That's 40 bytes (64 bytes if you choose a 4x4 matrix representation).
- The playback state of the animation (if animation needs to be synced from Logic). That's anywhere from 0 to 32 bytes.
- A list of entities created in the frame.
- A list of entities destroyed in the frame.
Second, forget the idea that you must render exactly what the simulation has. If your game can drop that restriction (99% chance it can), you can relax the synchronization.
Third, locks aren't expensive; lock contention is.
Now, creation can be handled without invasive locks: Logic builds a list of entities created during the frame and, at the end of the frame, locks a lightweight mutex, updates Graphics' list, and releases the lock. Chances are the Graphics thread wasn't accessing that list anyway, because it has a lot else to do. At the end of Graphics' update, it locks, clones the list, and releases the lock.
In both cases, almost no time is spent inside the locked region, and that work is a tiny fraction of everything both threads have to do, so lock contention is extremely low. (Furthermore, you can avoid mutexes entirely by using preallocated space and interlocked instructions, and only lock if the preallocated space fills up; but I won't go there.)
There's a catch here, and remember my second piece of advice: you don't care that you're rendering exactly what is in the simulation. Suppose frame A is simulated and creates 3 objects, but Graphics was too fast and already looked at the list. It loops again and renders frame A, but without those 3 new objects. Do you really care? Those 3 will get added in the next frame. It's a 16 ms difference, and not a noticeable one, because the user doesn't even know those 3 objects should have been there.
The same happens when destroying objects. Note that a pointer shouldn't be deleted until Graphics has marked the object as "I know you killed the foe, I'm done rendering it", so that you're sure neither thread is still using the pointer. Only then can you delete it. In other words, you retire the object from the scene but delay deleting the pointer.
Otherwise, as you say, a crash will happen.
So in this case an object may be rendered one more frame than it should. Big deal (sarcasm).
Now we're left with updating position & animation data. You have two choices:
- You really don't care about consistency. Read transformations without any locking at all and don't worry about race conditions. The chance that Logic is updating a transform at the exact moment Graphics reads it is minimal (you should be copying the positions from your physics engine into a separate copy, all inside the Logic thread, then reading that copy from the Graphics thread). If memory is aligned, you won't get NaNs or garbage, but you may get very rare bad states (it's a race condition, after all), for example a position very far from where it actually should be... but it only lasts for a frame! The chances of this happening often are extremely low, because cloning transforms is very fast even for thousands of objects. So at worst, a flickered frame. Mass Effect 3 is a notorious example of this flickering becoming really noticeable. They must be reading positions from the physics engine's data directly instead of cloning into a list, or they use a memory representation other than a std::vector or plain old array (thus increasing cache misses and time spent iterating), which increases the chance of reading data in an invalid state. (I'm giving you an example of an acclaimed AAA game doing this and royally screwing it up.)
- You do care about consistency. Use a lightweight mutex when copying the physics transforms to another place inside the Logic thread, and do the same from the Graphics thread. In other words, it's the same as above but with locks. Lock contention is again very low.
I've tried both, and #1 really works fine if done properly (don't take my word for it, try it yourself! It's easy to switch between the two: just disable/re-enable the mutexes).
Note that #1 isn't a holy grail of scalability, because it can still slow your loop down a lot due to cache-line sharing forcing flushes too often (which only happens when both threads access the same data at the same time and one of them writes to it).
The same applies to animation, but it's a bit more complex because in some cases you really don't want time going backwards (e.g. a particle effect spawned at a given keyframe could spawn twice); I won't go into detail. Getting that one right and scalable is actually hard (but again, the solutions rely on the assumption that lock contention will be minimal).
Remember, you don't care that you're rendering the exact same thing; but 99% of the time you will be, and when it does screw up, it usually goes unnoticed and fixes itself in the next frame.
And remember, synchronizing points 1 to 4 should be only a tiny fraction of what your threads do. The Logic thread spends most of its time integrating physics, a lesser part updating the logic side, and only then syncing.
The Graphics thread spends most of its time culling, updating derived transforms of complex node setups, sorting the render queue, and sending commands to the GPU; and only then syncing.
Note that if you read transform state directly from the physics engine's data, you'll either get terrible cache miss rates or have to put a mutex around the physics integration step, and that one does have a lot of lock contention.
All of this works if there are at least two cores. If the threads fight over CPU time, the quirkiness when rendering becomes awfully noticeable. Personally, I just switch to a single-threaded loop when I detect a single-core machine. If you've designed your system well, providing both loops shouldn't take any effort at all; just a couple of lines.
And last but not least there's the case where you really care about consistency, and you absolutely care that you're rendering exactly what is in the simulation.
In that case you can only resort to a barrier: at the end of the frame, both threads meet, the state is cloned from Logic to Graphics, and both continue. If both threads take a similar amount of time to finish a frame, multithreading will really improve your game's performance; otherwise one thread will stall waiting for the other to reach the barrier, and the difference between the single-threaded and the multithreaded version of your game will be minimal.
You asked how this is dealt with, and there is no simple answer, because multithreading can be quite complex. There are numerous ways to deal with it.
A game may use locks for adding & removing objects and updating transforms; another game may not use locks for transforms; another engine may use interlocked functions to add & remove objects without mutexes.
Another game may just use barriers. Another may not use this render-split model at all and rely on tasks instead*. There are a lot of ways to address the problem, and it boils down to trading visual correctness against scalability.
*Edit: and it can be like Hodgman described (all cores contribute to task #1, then to task #2, etc.), or the engine may issue commands using older data and process independently (e.g. process physics in one task, and process AI independently in another task using results from a previous frame, etc.).