Atomically Update Vector3D In Win32

Hi. This is my first post, but I've been following the forums for some time. I'm currently making the transition from XNA to C++ (various libs). I have successfully ported several projects, but my reason for making the change was the performance benefits. I've been reading a number of posts/tutorials/presentations on data-oriented design (notably Mike Acton's posts). This led to a small performance boost in a single-threaded environment, but now I want to multi-thread my task scheduler. I was hoping to make use of a lock-free approach (similar to the gateway approach outlined here: http://macton.smugmug.com/gallery/8611752_9SU2a/1/568079120_gzhk8#!i=568079120&k=gzhk8) but I'm struggling.

Specifically, I am currently working on my physics system. I have one task that updates forces (simple vec3 PODs) by applying drag based on the current velocity. Likewise I have a collision resolution task. How can I atomically update the Vector3Ds on x86? I was considering storing a queue of requested changes in the gateway, but this still wouldn't avoid one thread reading a vector while the gateway was midway through applying an update. Is the gateway approach unsuitable for non-PS3 games?
I don't know of any way to update three 32-bit values atomically. You could, however, store a pointer to the Vector3D and then change the pointer atomically.
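
A minimal sketch of that pointer-swap idea, using the Win32 interlocked intrinsics (std::atomic<Vector3D*> would do the same job in C++11); the two-buffer scheme and names are illustrative:

    #include <windows.h>

    struct Vector3D { float x, y, z; };

    // Readers always dereference g_current and see a complete vector;
    // the writer fills the spare buffer, then publishes it in one atomic swap.
    Vector3D g_buffers[2] = {};
    Vector3D* volatile g_current = &g_buffers[0];

    void PublishNewValue(const Vector3D& v)
    {
        // Write into whichever buffer is not currently published.
        Vector3D* spare = (g_current == &g_buffers[0]) ? &g_buffers[1]
                                                       : &g_buffers[0];
        *spare = v;
        // Swing the pointer atomically; readers pick up the new vector.
        InterlockedExchangePointer((PVOID volatile*)&g_current, spare);
    }

Note the caveat: with only two buffers, a reader still dereferencing the old pointer while the writer reuses that buffer will race, which is exactly the problem tagged pointers (or more buffers) address.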
Thanks Evil Steve. I'd thought of that before, but my original approach still had a race condition (I was using one array of data and two arrays of pointers: one of active but read-only data, the other write-only but mutable). I may be able to overcome it using tagged pointers, though. I'll try it out and post some code shortly.
[quote]Specifically, I am currently working on my physics system. I have one task that updates forces (simple vec3 PODs) by applying drag based on the current velocity. Likewise I have a collision resolution task. How can I atomically update the Vector3Ds on x86? I was considering storing a queue of requested changes in the gateway, but this still wouldn't avoid one thread reading a vector while the gateway was midway through applying an update. Is the gateway approach unsuitable for non-PS3 games?[/quote]

Don't update a Vector3D atomically.

Atomicity, especially on multi-core, depends on cache lines. The problem you should be worried about is false sharing: when two cores work on values sharing the same line, each write causes a cache sync, stalling both of them.
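
To make false sharing concrete, here is a minimal illustration (assuming C++11 alignas and the 64-byte line size typical of x86):

    // Incremented by two different threads. Packed together, both counters
    // share one cache line, so every write on one core invalidates the
    // line on the other core: constant cache ping-pong.
    struct Shared { long a; long b; };

    // Padding each counter out to its own 64-byte line removes the stalls.
    struct Padded
    {
        alignas(64) long a;
        alignas(64) long b;
    };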

For physics, you have two arrays:

    Vector3D old_position[MAX_BODIES]; // last step's state, read-only during the update
    Vector3D new_position[MAX_BODIES]; // this step's state, write-only during the update
    // similar for other values, such as velocity perhaps

To apply drag, read from old_position (and old_velocity), compute the new values, and write them to new_position at the same index.

Hand each thread its own chunk that is a multiple of the cache line size. Exact values differ, but with OpenMP I've usually found that, for a large number of elements, a chunk size of 64k or thereabouts works best.

Gotchas: Vector3D must be a plain struct, and the elements must be contiguously allocated, hence the array.

The above solution is optimal for a multi-core update.
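
A minimal sketch of that double-buffered update with OpenMP; MAX_BODIES and the drag factor here are illustrative:

    #include <cstddef>

    struct Vector3D { float x, y, z; };

    const std::size_t MAX_BODIES = 65536;
    Vector3D old_velocity[MAX_BODIES]; // read-only during the step
    Vector3D new_velocity[MAX_BODIES]; // write-only during the step

    void ApplyDrag()
    {
        const float drag = 0.98f; // illustrative per-step drag factor
        // schedule(static) hands each thread one contiguous chunk, so
        // writes from different threads only meet at chunk boundaries.
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < (long)MAX_BODIES; ++i)
        {
            new_velocity[i].x = old_velocity[i].x * drag;
            new_velocity[i].y = old_velocity[i].y * drag;
            new_velocity[i].z = old_velocity[i].z * drag;
        }
    }

After the step, swap the roles of the two arrays (or swap two pointers) rather than copying.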

If you use more complex integration, such as RK4, keep more buffers, one for each partial update.

[quote]Likewise I have a collision resolution task.[/quote]

Same solution as above, use old_position.
Thanks Antheus.


[quote]Atomicity, especially on multi-core, depends on cache lines. The problem you should be worried about is false sharing: when two cores work on values sharing the same line, each write causes a cache sync, stalling both of them.[/quote]


I see! That may also explain why my experiments haven't shown anything close to the performance increase I was hoping for (from profiling I know the physics systems are the bottleneck). I only recently started reading about processor architectures and caching issues, so I'm still pretty new at this. I'll work what you've said into my current experiment and post some code later.
And forget about that article for the time being. It's about tweaking the color of the oil gauge on an F1 car. You don't even have a car yet.

[quote]from profiling I know the physics systems are the bottleneck[/quote]

If they are, threading won't solve it.

Physics needs to run at a fixed step on the lowest supported hardware. Anything above that cannot do more or different work, or it will impact the result. Imagine FPS collision tests depending on how fast a CPU someone has.
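
One common shape for such a loop is a fixed-step accumulator, so the simulation advances identically on fast and slow machines; StepPhysics and Render are placeholders:

    const float FIXED_DT = 1.0f / 60.0f; // illustrative step size

    void StepPhysics(float dt); // placeholder: one physics tick
    void Render();              // placeholder: draw the current state

    void RunFrame(float frame_seconds)
    {
        static float accumulator = 0.0f;
        accumulator += frame_seconds;
        // Consume real time in fixed slices; a faster machine renders
        // more often but never simulates differently.
        while (accumulator >= FIXED_DT)
        {
            StepPhysics(FIXED_DT);
            accumulator -= FIXED_DT;
        }
        Render();
    }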

And even if threading is applicable, it's mostly about batching and granularity. A single frame might have a few dozen tasks at most.

In many cases overhead is the killer. There are only 2-4 cores, offering at most that much more power, but going from a local call (1 cycle) to a queue/dispatch (12 cycles under GCD, and potentially thousands of cycles) means a lot of overhead. It's very easy for a multi-threaded solution to peg all cores yet do less work than a single-threaded version would.
Interesting. I had assumed that the threading overhead would be low enough that splitting the update integration and the collision systems to different cores would make sense. I'm still interested in learning about lock-free and data-oriented approaches though (so that I know what to look for if/when I get an F1 car). Are there any open-source projects you know of that demonstrate these ideas?

[quote]Interesting. I had assumed that the threading overhead would be low enough[/quote]

Passing data between threads is incredibly expensive, so work done separately must be orders of magnitude bigger to absorb it.

A single Vector3D can often be updated in <1 cycle (amortized cost). Pass it between threads and the cost of the update itself becomes irrelevant. The same is true for SIMD: manipulating a single value requires so much overhead that it runs slower. SIMD only offers an improvement if there is enough data; otherwise the overhead dominates.

All designs today strive towards batching.
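
A small SSE sketch of that batching point: the per-element cost only pays off when it is amortized over a long, contiguous array (this assumes 16-byte-aligned data and a count that is a multiple of 4):

    #include <cstddef>
    #include <xmmintrin.h> // SSE

    void ScaleBatch(float* data, std::size_t count, float s)
    {
        __m128 scale = _mm_set1_ps(s); // broadcast s into all 4 lanes
        for (std::size_t i = 0; i < count; i += 4)
        {
            __m128 v = _mm_load_ps(data + i);             // load 4 floats
            _mm_store_ps(data + i, _mm_mul_ps(v, scale)); // scale, store
        }
    }

Calling this for a single float would be slower than a plain multiply; over a large array the setup cost disappears.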

[quote]that splitting the update integration and the collision systems to different cores would make sense[/quote]
Yes, but one core does updates while the other does collision, and both operate on their own local copy of the entire state they need.

But they must still be guaranteed never to do more work than the slowest supported machine can handle. The amount of useful work must also exceed the cost of making the copy. A value kept in a register is free; writing it to memory takes several cycles in the best case. For simple multiplication or addition of vectors, performance will be memory bound. The goal then becomes performing as much work as possible while the values are still in registers.

For example, running memcpy or memset across multiple threads will likely run no faster; on most machines it will not scale to 4 cores, since the memory bus can't keep up. CPUs continue to trend toward <1 cycle per instruction, so the goal of fast code is keeping data in registers for as long as possible.
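
A small illustration of keeping values in registers, using flat float arrays to keep the point visible; the drag/integration math is illustrative:

    #include <cstddef>

    // Fused update: the dragged velocity stays in a register between its
    // two uses, instead of being written to memory by a drag pass and
    // read back by a separate integration pass.
    void Integrate(float* pos, float* vel, std::size_t n,
                   float drag, float dt)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            float v = vel[i] * drag; // lives in a register
            vel[i]  = v;
            pos[i] += v * dt;
        }
    }

Running drag and integration as two separate loops would stream the velocity array through memory twice for the same arithmetic.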

[quote]Are there any open-source projects you know of that demonstrate these ideas?[/quote]

All these optimizations are incredibly specific. Whatever source there is will be optimized around the author's specific problem, likely after weeks of tuning with final data. There's little to no reuse at the source level.

As mentioned, a single operation can add an order of magnitude of overhead, negating any improvements.


The general guideline remains: computers are only good at running an unconditional for loop over an array. Everything, from CPU to memory, network, and disk, is optimized for this case. All the hardware optimizations of the past 10-20 years try to minimize the impact of code that works differently.

As an example, imagine a web browser that had to fetch every character of a web page with a separate request (get /home/0, get /home/1, get /home/2, ...) instead of asking once and receiving a stream. Yet OO-centric design does just that.
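
A small contrast of the two styles; the names are illustrative:

    #include <cstddef>

    // OO-centric: one pointer chase and one indirect call per element.
    struct Particle { virtual void Update() = 0; };

    void UpdateAll(Particle** particles, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            particles[i]->Update(); // likely cache miss + indirect call
    }

    // Data-oriented: one unconditional loop over contiguous data.
    void UpdateAll(float* x, std::size_t n, float dx)
    {
        for (std::size_t i = 0; i < n; ++i)
            x[i] += dx; // streams through memory; prefetcher-friendly
    }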


A pragmatic approach: use existing physics libraries. They've solved all these problems already; at the very least you'll see how they do it and can improve on it later. There's simply too much knowledge in such libraries these days to start from scratch for the sake of "learning".

