Utilizing Multiple Core Processors

Quote:Original post by Hodgman
Quote:Original post by wodinoneeye
You might have trouble making it 'lockless' if there are multiple generators (inserting into queue) and/or multiple consumers (removing from queue).

The lockless queue mechanism I'm familiar with only works when there is a single generator and a single consumer.
At the moment I'm using a locking queue (based on a futex) until the rest of the system takes shape, then I'll spend some time implementing a lock-free queue to replace it. Multiple generators are required but only one consumer.
Also, I'm not sure if I can take advantage of this, but the way things are currently structured, when the consumer starts to empty the queue the generators are already filling a different queue (so there is no need for concurrent push + pop operations to be safe.)

I've read a few papers that implement deques and linked-lists using single-word CAS which allow unlimited concurrent reads/writes, so I'm confident that it should work.
However, even if the algorithm is lock-free, the CPU's cache is still going to be doing its own locking on any shared cache-lines, so the cache-contention might end up being a show-stopper anyway!




The between-CPU cache contention will be there for any solution, so it's upward from there. Spin locks actually work for more complex locking between multiple CPUs.

If your consumer is going to work on the previous cycle's queue, you might be able to have each generator keep its own 'new' queue (no queue-insert lock needed) and, at the cycle changeover time/point, assemble all the queues (or not even aggregate them, since the one consumer could walk an array of queues with trivial overhead).
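A rough sketch of what I mean (untested C++; the Event type and all names are placeholders):

#include <vector>

struct Event { int payload; };

// One queue per generator thread: pushes never contend, so no insert lock.
struct PerGeneratorQueues
{
    std::vector< std::vector<Event> > queues;

    explicit PerGeneratorQueues( size_t numGenerators ) : queues( numGenerators ) {}

    // Called only by generator 'tid' during the cycle - no lock needed.
    void push( size_t tid, const Event& e ) { queues[tid].push_back( e ); }

    // Called by the single consumer after the changeover point, once all
    // generators have switched to the other queue set.
    template <typename Fn>
    void drain( Fn fn )
    {
        for( size_t q = 0; q < queues.size(); ++q )
        {
            for( size_t i = 0; i < queues[q].size(); ++i ) fn( queues[q][i] );
            queues[q].clear();
        }
    }
};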
Quote:Original post by JPatrick
This is almost exactly the design I have planned for my own hobby engine; it's almost creepy :)
That makes me feel a little less crazy ;)
Quote:Flipping the buffers like you describe also avoids the overhead of copying all of the state, and is the approach I'm planning to take as well.
I'm only flipping the function queues, not the states. Each time I flip the function queues and update an entity, I also have to clone its current state.
Quote:There is, however, a problem that's been vexing me for days, which you mention at the end of your post, and that's the determinism of the write queue. Deferring all mutable calls until the next loop keeps the current loop atomic, but if the entities are being consumed by an arbitrary number of threads, the write queue may fill in a different order given exactly identical simulation progression. I still haven't reached a conclusion I'm satisfied with, and maybe you have some insight that will help me out. Here are the options I've pondered so far:
[1] Do nothing at all, and leave it the responsibility of the user to ensure that any potential combination of write commands are commutative.

[2] Sort them based on some criteria such that certain commands will always be first in a predictable way.

[3] Try to somehow enforce a simulation where non-commutative commands aren't allowed. This is essentially the same as option [1], but with more safeguards in place to prevent mistakes.
I'm planning on using [1] *and* [2] (Currently I'm just using [1] - i.e. do nothing at all!).

C++ doesn't support this concept of deferred function calls, so obviously a lot of macro or template magic is required to make it usable. I went down the macro path, but boost's function-binding templates probably would have been a better choice.
Anyway, now I have a bunch of macros for defining a deferred function like this:
//regular C++ declarations would look like this:
void Heal( Player& who, float amt );
void Explode( int howmuch );

//my deferred function declarations look like this:
DEF_FUNC_2( Heal, Player&, who, float, amt );
DEF_FUNC_1( Explode, int, howmuch );
These macros hide all the magic behind storing the function parameters in the queue, and later calling the actual function that does the real work.
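For illustration, here's roughly the plumbing such a macro has to generate - sketched here with boost::function/boost::bind rather than my actual macros, and simplified down to a single global queue:

#include <deque>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/ref.hpp>

struct Player { float health; };

std::deque< boost::function<void()> > g_writeQueue;

// The function that does the real work, run on the next update cycle:
void Heal_Impl( Player& who, float amt ) { who.health += amt; }

// What the public Heal() ends up doing: capture the arguments and defer.
void Heal( Player& who, float amt )
{
    g_writeQueue.push_back( boost::bind( &Heal_Impl, boost::ref(who), amt ) );
}

// Later, during the next update cycle:
void RunQueuedFunctions()
{
    while( !g_writeQueue.empty() )
    {
        g_writeQueue.front()();  // execute the stored call
        g_writeQueue.pop_front();
    }
}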

My plan for implementing [2] is to add another parameter to the end of these macros that describes how multiple calls to the function on the same frame should be handled.
In the default mode, policy [1] above will be used - the function is allowed to be called multiple times, and it is up to the function-writer to ensure that any order of calls will be commutative.
In the second mode, one of the function arguments will be specified as a sorting-key. All calls to this function will be grouped together, and executed after being ordered by this key.
In the third mode, one of the function arguments will again be specified as a sorting-key, but only the call that is at the front after sorting will be executed, the rest will be ignored.

Hopefully, it should look something like this:
//heal function is commutative. Call as many times as you like.
DEF_FUNC_2( Heal, Player&, who, float, amt, COMMUTATIVE );

//Die function should only be called once. Sort by the damage_done field.
DEF_FUNC_2( Die, Player&, killer, float, damage_done, damage_done );
This way, most functions can use policy [1], and the few that need it can use policy [2].
When using policy [2], you'll still have to be careful that the sorting key is always unique, though... otherwise the ordering still isn't deterministic! Or perhaps if there is a 'draw' when sorting, the function simply won't execute.
e.g. We can say that if the physics sub-system calls SetPosition, it should override a call by the game logic:
const int PRIORITY_GAME_LOGIC = 0;
const int PRIORITY_PHYSICS    = 1;
void SetPosition( vec3 pos, int priority );
But I'm still screwed if two different "game logic" objects both try to call SetPosition at once... Seeing as neither call can be chosen over the other, I could treat this like an Ethernet packet collision and ignore them both. Hopefully next frame they will try to call SetPosition again, and hopefully there will only be one call instead of two...
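To make that concrete, the collision check for SetPosition might look something like this (vec3 and the call record are placeholder types, not my actual code):

#include <algorithm>
#include <vector>

struct vec3 { float x, y, z; };
struct SetPositionCall { vec3 pos; int priority; };

bool ByPriorityDesc( const SetPositionCall& a, const SetPositionCall& b )
{
    return a.priority > b.priority;
}

// Execute the highest-priority call only if it is a unique winner; on a
// 'packet collision' (tied priorities) drop everything and let the callers
// retry next frame.
void ResolveSetPosition( std::vector<SetPositionCall>& calls, vec3& outPos )
{
    if( calls.empty() ) return;
    std::sort( calls.begin(), calls.end(), ByPriorityDesc );
    if( calls.size() == 1 || calls[0].priority != calls[1].priority )
        outPos = calls[0].pos;
    calls.clear();
}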

[edit]
Quote:Original post by wodinoneeye
If your consumer is going to work on the previous cycle's queue, you might be able to have each generator keep its own 'new' queue (no queue-insert lock needed) and, at the cycle changeover time/point, assemble all the queues (or not even aggregate them, since the one consumer could walk an array of queues with trivial overhead).
That's a great idea! I've already got a fast way to get a unique ID for the current thread, so I could make one queue per thread and eliminate the write-contention. Thanks for the inspiration! [edit2] If I've thought about this correctly, that upgrades the algorithm to "wait-free" instead of just "lock-free"!

[Edited by - Hodgman on May 30, 2008 12:08:19 AM]
I hope you don't mind me questioning a few things about this system, Hodgman.

Quote:Original post by Hodgman
In my buffered state system each entity maintains a variable number of redundant (old) states. In a single-core single-player game this could be a single state (no old states), while in a multi-player FPS it could be enough states to track the last second of game-play. On a multi-core machine it should be at least 2 to facilitate parallelism. The newest state is always known as the Write state, and all others are the Read states.


Why the arbitrary number of states? Just for the FPS networking model?

Quote:As well as maintaining states, each entity maintains two pending function-call queues (named Read/Write). Any non-const function calls are not immediately executed; they are enqueued in the 'W' queue to be executed on the next update cycle (at which time a new W state is created for them to be executed on). Const functions, on the other hand, are executed immediately on the latest R state.


The first problem I see here is that there are many operations that require reads of other entities in order to write to another entity or entities. This doesn't fit a clean read/write divide - a write is dependent on one or more reads, often to be executed sequentially. How does your system handle it?

Quote:At the pre-update stage (a serial stage), each entity that has been modified (i.e. has data in its 'W' queue) is told to swap over its read/write queues (i.e. the full W queue becomes the new R, and the empty R becomes the new W).


I don't understand why you are doing this. Isn't data in the W queue evidence that it Will be modified, rather than it Has been modified? Why are non-const functions now on what you call the Read queue? I'm afraid I really don't understand this part.

Quote:After this, each entity can be updated in parallel. As it drains and executes the functions in its R queue, it will likely be filling up the W queue of itself or other entities.


Are you saying that the 'R' queue isn't really an 'R' queue at this point? And how can it query other entities - which state does it access? And isn't 2 queues per entity potentially quite a bit of overhead? I assume these are lockless...

Quote:
Serial task:
  For each modified entity:
    entity->SwapFunctionQueues()

Parallel task:
  For each modified entity:
    entity->CloneCurrentWState()
    entity->RunAllQueuedFunctions()

Serial task:
  wait for the parallel tasks to finish

Question 1: Where does rendering fit into this?
Question 2: Doesn't this dictate that the whole system is limited by the speed of the slowest/most busy entity?
Quote:The first problem I see here is that there are many operations that require reads of other entities in order to write to another entity or entities. This doesn't fit a clean read/write divide - a write is dependent on one or more reads, often to be executed sequentially. How does your system handle it?


If you need multiple reads from old state, it's not a problem. Consider resolving an AoE effect: you test against all entities in the R state and find 52 that are affected, doing hundreds of reads in the process. But since the R state can no longer change, you're free to read from it as many times as you want. It merely describes what existed at T-1.

The problem remains: how to handle multiple events in the past. For example, someone gets hit 5 times. If you apply these individually, always reading old state, you will only take the last hit into consideration, since you'll always decrement the same T-1 value.

To handle this, you need to adopt a slightly different update strategy: 'hit' is processed in the W state, into which the old values were copied. By decrementing that value, you obtain the correct result.
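In code, the rule is simply: read from R, accumulate into W (types are illustrative):

struct PlayerState { float health; };

struct PlayerEntity
{
    PlayerState r; // last completed tick - immutable during the update
    PlayerState w; // current tick - starts as a copy of r

    void  BeginTick()           { w = r; }           // copy the old values
    void  ApplyHit( float dmg ) { w.health -= dmg; } // decrement W, so all 5 hits count
    float ReadHealth() const    { return r.health; } // others see the T-1 value
    void  EndTick()             { r = w; }           // publish for the next tick
};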

Third problem: what happens if the damage dealt depends on the attacker's health? Since we cannot access the W state of another actor, the damage dealt will always be evaluated from the R state. But since that actor is getting hit at the same time, its health is decreasing in its W state.

This last problem is a peculiarity of this model. You can either increase granularity, so that each hit is processed once per tick (this shouldn't be a problem; firing a weapon more than 30/60 times a second isn't likely).

Or you accept that, for events occurring within a single time-step quantum, you cannot distinguish their proper interactions. Since such a simulation can run at 30/60 Hz (for real-time), it's somewhat difficult to justify the need for more precise scheduling. This does not, however, imply non-deterministic behavior; you simply accept that for some actions you will be using stale state.

If this type of inaccuracy is a problem, you simply handle such sequences through individual events. Rather than applying damage directly in the 'hit' event, you fire another message, 'deal_damage', containing both the attacker's health and the damage to deal.

This results in a typical message-based model.
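For example (the names and the scaling rule are invented purely for illustration):

struct DealDamageMsg { float attackerHealth; float damage; };

// The attacker builds this during its own update, capturing its health at
// that moment, so the target never needs to read the attacker's W state.
DealDamageMsg MakeDealDamage( float attackerHealth, float damage )
{
    DealDamageMsg m = { attackerHealth, damage };
    return m;
}

// Processed by the target on the next tick, with consistent data:
void ApplyDealDamage( float& targetHealth, const DealDamageMsg& m )
{
    float scale = ( m.attackerHealth > 50.0f ) ? 1.0f : 0.5f; // invented rule
    targetHealth -= m.damage * scale;
}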

Quote:Original post by Kylotan

Question 1: Where does rendering fit into this?


You render the R state whenever progress is made, or whenever you want. The R state is fixed and won't change anymore, so whether you render it once or ten times has no effect on concurrency. You don't even need to lock it, since by then it's immutable. It will be stale by at least one tick, but given adequate update rates, that isn't any worse than the usual update loop.

Quote:Question 2: Doesn't this dictate that the whole system is limited by the speed of the slowest/most busy entity?


In this way it's superior to the single-threaded model, where update speed is limited by the sum of all entities.

This model does not exclude the use of asynchronous actions, which may take seconds or even minutes to complete. It's aimed at real-time simulation only.
I am still reading this post, and I have no background in multithreaded programming.

but I can say this much.

The talk about synchronizing multithreaded operations sounds a little bit silly to me when you consider the principle of multithreaded programming.

We program games to work like scripted plays, or planned dances. Everything happens in order and data is handled one piece at a time; an operations stack is built and then resolved. The result is that two workers cannot make one job go faster. However, when we stop treating the game as a scripted event and start treating it more like a physical world, we can consider the possibility of letting one core handle the graphical interpretation of what the game has stored in memory, while letting another core handle the act of running that world. The result is binding the game's functionality to the memory fields.

Quote:Original post by Antheus
Quote:Question 2: Doesn't this dictate that the whole system is limited by the speed of the slowest/most busy entity?


In this way it's superior to the single-threaded model, where update speed is limited by the sum of all entities.


Sure, but the worst-case complexity is about the same, and I'd assume that entities would vary quite widely in this regard. I suppose it wouldn't matter too much if entities were distributed across processors in a reasonable manner, but any sort of system which has to synchronise all entities that regularly is going to be less efficient than a system which synchronises less often.
Quote:Original post by Hodgman
...Seeing as neither call can be chosen over the other, I could treat this like an Ethernet packet collision and ignore them both. Hopefully next frame they will try to call SetPosition again, and hopefully there will only be one call instead of two...


I like this; food for thought indeed. What I'm considering now is changing the concept of a command to that of a request. Somewhat like a UDP packet, a request made to an entity is not guaranteed to be acknowledged. This leaves a few options to the requester: they can fire-and-forget the request, not caring whether it works; block until the request goes through; try a few times and then give up; etc.

Another implication of this that I like is that entities are wholly responsible for their own state. In the example of a bullet hitting a player, rather than the bullet delivering damage to the player, the player would instead detect that it has been hit by a bullet and deliver damage to itself appropriately. Similarly, the bullet will be responsible for uncreating itself if it hits a player, ricocheting if it hits a wall, continuing on if it hits glass, etc.

Implementing this is still a bit tricky though, because detecting request collisions is subtly complex. Let's say we have requests A, B, and C again. A is non-commutative with B, but is commutative with C. Request B, however, is non-commutative with both A and C. If A and B are in the queue first, we'd detect their incompatibility and nuke both of them, leaving C, which is fine by itself. However, if C and B were first, they'd both be nuked, leaving A. This requires a recursive walk of the queue to detect the full chain of non-commutativeness, which strikes me as inefficient; but then again, profiling would be required to be sure, and pathological cases will exist for any system.

Quote:Original post by Hodgman
I'm only flipping the function queues, not the states. Each time I flip the function queues and update an entity, I also have to clone its current state.


I still think cloning can be avoided, even with more than 2 stored states, by using a rolling-buffer approach. Let's say you wanted 30 back-states stored. You could have a circular list of 30 state buffers. Say buffer 0 is the initial game state, making buffer 1 the current "write" state. Once that loop is finished, buffer 2 becomes the new write state, leaving buffer 1 as the read state and buffer 0 still intact as a previous state. This would continue up to buffer 29, at which point buffer 0 would be marked as the write state again and overwritten. A circular linked list could probably do this nicely.

Each potentially mutable value of an entity could have its own buffer ring, each entry of which could be marked with the game-loop timestamp of when that value was written. That would allow you to figure out what value was in effect, say, 17 loops ago, and prevents needless cloning of unchanged values just to fill the next buffer. You would of course have an accessor method that does that computation for you. This would be slower than a simple array index, but would still be constant time if you only have a fixed number of buffers in the ring.
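To make the ring concrete, a rough sketch (untested; State stands in for whatever is being buffered, whether a whole entity state or a single value's history):

struct State { float health; };

template <int N>
struct StateRing
{
    State buf[N];
    int   write; // buf[0] holds the initial game state, so writing starts at 1

    StateRing() : write(1) {}

    State&       writeState()       { return buf[write]; }
    const State& readState() const  { return buf[(write + N - 1) % N]; }

    // The value in effect k loops before the current read state (k < N-1):
    const State& stateAgo( int k ) const { return buf[(write + N - 1 - k) % N]; }

    void endLoop() { write = (write + 1) % N; } // the oldest slot is overwritten next
};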

Thoughts?
Quote:Original post by TayZrain
I am still reading this post, and I have no background in multithreaded programming.

but I can say this much.

The talk about synchronizing multithreaded operations sounds a little bit silly to me when you consider the principle of multithreaded programming.

We program games to work like scripted plays, or planned dances. Everything happens in order and data is handled one piece at a time; an operations stack is built and then resolved. The result is that two workers cannot make one job go faster. However, when we stop treating the game as a scripted event and start treating it more like a physical world, we can consider the possibility of letting one core handle the graphical interpretation of what the game has stored in memory, while letting another core handle the act of running that world. The result is binding the game's functionality to the memory fields.

It's a little more complex than that.

Many parallel algorithm developers follow the PCAM model:

P- partition the problem into the smallest discrete chunks
C- consider the communications (such as shared memory) between those chunks
A- agglomerate tasks based on work and communication patterns
M- map tasks to individual processes and threads, balancing work loads


How you partition the tasks is the most critical aspect.

If you partition it so rendering is a complete chunk, then rendering can never take more than a single core. If AI is a complete chunk, then it also cannot take more than a single core. At this rate, you run out of tasks and the computer sits idle.

Consider a different partition: perhaps many thousands of elements, each with its own state machine. Now you can split the processing so each available processor handles a fraction of the AI. This extends to any number of processors, and changes the bottleneck from processing time to communication/synchronization time.
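A sketch of that finer-grained partition (std::thread is used for brevity, but any thread API would do; the Entity type and names are placeholders):

#include <algorithm>
#include <thread>
#include <vector>

struct Entity { float x; void update( float dt ) { x += dt; } };

void UpdateAll( std::vector<Entity>& entities, float dt, unsigned numThreads )
{
    std::vector<std::thread> workers;
    const size_t chunk = ( entities.size() + numThreads - 1 ) / numThreads;
    for( unsigned t = 0; t < numThreads; ++t )
    {
        const size_t begin = t * chunk;
        const size_t end   = std::min( begin + chunk, entities.size() );
        if( begin >= end ) break;
        // Each worker owns a disjoint range, so no locking is needed.
        workers.push_back( std::thread( [&entities, begin, end, dt]()
        {
            for( size_t i = begin; i < end; ++i )
                entities[i].update( dt );
        } ) );
    }
    for( size_t i = 0; i < workers.size(); ++i )
        workers[i].join(); // the once-per-frame synchronisation point
}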


The partitioning of tasks drives the algorithms you choose. You might be partitioning over time (current frame, next frame). You might be partitioning this across processor threads. Whatever you choose to do, when designing for multithreaded and multiprocessing environments you must carefully consider the partitioning of work at the beginning (not the end) of development. The industry is changing as studios realize they must make these decisions early on, and they must be considered at every phase of development.
Quote:Original post by TayZrain
...consider the possibility of letting one core handle the graphical interpretation of what the game has stored in memory, while letting another core handle the act of running that world.
This approach (while better than single-threaded) is still not scalable. You're designing for a fixed number of cores instead of for any number of cores.

Quote:Original post by Kylotan
I hope you don't mind me questioning a few things about this system, Hodgman.
Not at all. Discussing my thoughts out loud helps me solidify them, making it easier to explain to other people later ;) (I guess that makes you my explanation-lab-rat, sorry!)
Quote:Why the arbitrary number of states? Just for the FPS networking model?
I'm writing this as a reusable collection of classes in my engine. I want to be able to use it for different kinds of games, so I'm using policy-based templates to control how states are stored. I.e. it's arbitrary at compile time, and can be arbitrary or fixed at runtime.

There are various (and often unforeseen) reasons why you might require a different fixed or variable number of states, so I'm trying to design my classes with this in mind.

For example: in some cases only two states are required for the update cycle to run (the older of the two would simply be recycled each update).
In that case, if the renderer and the update cycle were to run at different frequencies, a third state object would be required: for the period of time that the renderer is running, the state objects it was told to use are essentially under a read-only lock and can't be (safely) recycled by the update loop.
Quote:
Quote:As well as maintaining states, each entity maintains two pending function-call queues (named Read/Write). Any non-const function calls are not immediately executed; they are enqueued in the 'W' queue to be executed on the next update cycle (at which time a new W state is created for them to be executed on). Const functions, on the other hand, are executed immediately on the latest R state.
The first problem I see here is that there are many operations that require reads of other entities in order to write to another entity or entities. This doesn't fit a clean read/write divide - a write is dependent on one or more reads, often to be executed sequentially. How does your system handle it?
Reads are done on the previously completed state objects, while writes are done on new thread-private state objects.

R and W probably weren't the best names for the func-call queues... Let's just call them A and B. Also, over the weekend I've implemented wodinoneeye's suggestion for removing contention from these queues. So let me explain again:

There are 2 groups of function-call queues, A and B. Each group contains one queue per thread, so there is no need to lock these queues. At any one time, one of these groups will be public (previously: W) and one will be private (previously: R). While public, a queue group can be filled by any entity/thread (each thread pushes into its own queue within the group only!). While private, a queue group is only accessible by its parent entity and by the thread that is responsible for updating that entity.

At the prepare-commit stage (a serial stage), all entities flip A and B. If A was previously public, it may contain pending function calls. If A was previously private, it will be empty (because it will have been consumed during the previous commit).
So now the public queue will be empty (ready to collect changes generated by the coming commit stage) and the private queue will contain all of the changes requested on the previous commit stage.
Also, if a new thread-private state object was created during the previous commit, that state object is made public as the current read-state of the entity.

During the commit stage (a parallel stage, up to one thread per entity), each entity will generate a new thread-private state object and iterate through its private queue group, executing all of the function calls contained within on the new private state object.
This in turn will fill up the public queue groups of itself/other entities.

If entity X attempts to read entity Y (i.e. call a const function on it), that function call is executed on the last publicly known state object associated with entity Y.

If entity X attempts to modify entity Y (i.e. call a function on it), that function call is put into entity Y's public queue, to be executed *next* commit stage.
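Condensed into code, the whole scheme looks something like this (simplified and untested; Call stands in for my macro-generated deferred calls):

#include <vector>
#include <boost/function.hpp>

typedef boost::function<void()> Call;
typedef std::vector< std::vector<Call> > QueueGroup; // one queue per thread

struct Entity
{
    QueueGroup groups[2];
    int publicGroup; // index of the group currently being filled

    explicit Entity( size_t numThreads ) : publicGroup(0)
    {
        groups[0].resize( numThreads );
        groups[1].resize( numThreads );
    }

    // Commit stage, any thread: push into *your own* queue only - no locks.
    void Enqueue( size_t threadId, const Call& c )
    {
        groups[publicGroup][threadId].push_back( c );
    }

    // Pre-commit stage (serial): the full public group becomes private.
    void Flip() { publicGroup ^= 1; }

    // Commit stage (parallel, one thread per entity): drain the private group.
    void RunQueuedFunctions()
    {
        QueueGroup& priv = groups[publicGroup ^ 1];
        for( size_t q = 0; q < priv.size(); ++q )
        {
            for( size_t i = 0; i < priv[q].size(); ++i ) priv[q][i]();
            priv[q].clear();
        }
    }
};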

Quote:
Quote:At the pre-update stage (a serial stage), each entity that has been modified (i.e. has data in its 'W' queue) is told to swap over its read/write queues (i.e. the full W queue becomes the new R, and the empty R becomes the new W).
I don't understand why you are doing this. Isn't data in the W queue evidence that it Will be modified, rather than it Has been modified? Why are non-const functions now on what you call the Read queue? I'm afraid I really don't understand this part.
Yes it is evidence that it is about to be modified. I probably worded it that way because internally I'm using an 'isModified' variable to describe this...

Const functions never go on any queue - the badly-named Read queue is just a thread-private area of memory, not a separate queue for read-functions.
While the public (W) queue group is filled by n threads, the private (R) queue is consumed by a single thread.

At pre-commit, the public (W) queue contains all requests made during the previous commit. I want to execute these during the next commit, but the next commit will also re-fill the public (W) queue.
Therefore, before the commit stage, I move the contents of the public (W) queue into a private (R) area (making the public queue empty).

Quote:Are you saying that the 'R' queue isn't really an 'R' queue at this point? And how can it query other entities - which state does it access? And isn't 2 queues per entity potentially quite a bit of overhead? I assume these are lockless...
Yeah, I hope I've explained it better this time. R is just the private area where W is moved to before W needs to be used again.
On my quad-core, it's now at 8 queues per entity! Two groups (A/B) of one queue per thread. Because each thread has its own queue, the queues don't require locks, and don't even have to be fancy lock-free ones. You can use a std::deque and the algorithm is wait-free and thread-safe.

Quote:Where does rendering fit into this?
After this whole update cycle runs, the game either runs it again (if it's not time to render yet, or the renderer is still running), or it starts the renderer (it must pass the current frame-id token to the renderer so the appropriate state objects can be accessed) and then runs again.

Quote:Original post by Kylotan
Quote:Original post by Antheus
Quote:Question 2: Doesn't this dictate that the whole system is limited by the speed of the slowest/most busy entity?
In this way it's superior to the single-threaded model, where update speed is limited by the sum of all entities.
Sure, but the worst-case complexity is about the same, and I'd assume that entities would vary quite widely in this regard. I suppose it wouldn't matter too much if entities were distributed across processors in a reasonable manner, but any sort of system which has to synchronise all entities that regularly is going to be less efficient than a system which synchronises less often.
I don't think it's a problem that the worst-case scenario is just as slow as traditional methods. Most games are going to have hundreds of small entities, not one giant one, and in these average cases this threading system will probably perform much better.
There is no real entity synchronisation though, except for the pre-update stage, which must be done serially. The commit stage that follows is entirely parallel, with no synchronisation used at all.


Quote:Original post by JPatrick
Implementing this is still a bit tricky though, because detecting request collisions is subtly complex. Let's say we have requests A, B, and C again. A is non-commutative with B, but is commutative with C. Request B, however, is non-commutative with both A and C. If A and B are in the queue first, we'd detect their incompatibility and nuke both of them, leaving C, which is fine by itself. However, if C and B were first, they'd both be nuked, leaving A. This requires a recursive walk of the queue to detect the full chain of non-commutativeness, which strikes me as inefficient; but then again, profiling would be required to be sure, and pathological cases will exist for any system.
Hmm... I hadn't actually thought about the situation where A and B are non-commutative; I had only thought about the case where A1 conflicts with A2, or B1 with B2.
Would it be possible to write code so that all A/B conflicts can be reduced to an A1/A2 conflict instead?
It would be much easier to manage if each function-call-type only had to be checked against others of the same type, and not against function-calls of all types.

For example:
A = TakeFireDamage( Player& from, int damage );
B = TakeColdDamage( Player& from, int damage );

In this case, A and B can conflict, but if we re-write it like this:
A = TakeFireDamage( Player& from, int dmg )
-> calls C( from, dmg*getFireResist() );
B = TakeColdDamage( Player& from, int dmg )
-> calls C( from, dmg*getColdResist() );
C = TakeDamage( Player& from, int damage );

Now A and B no longer conflict, and only C can conflict with itself.
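Fleshed out a little (the resist fields and their semantics are invented for the example):

struct Player
{
    float health, fireResist, coldResist;

    // C: the only call that can conflict, and only with itself.
    void TakeDamage( Player& from, float damage ) { health -= damage; }

    // A and B reduce to calls to C, so they can never conflict with each other.
    void TakeFireDamage( Player& from, float dmg ) { TakeDamage( from, dmg * fireResist ); }
    void TakeColdDamage( Player& from, float dmg ) { TakeDamage( from, dmg * coldResist ); }
};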

Can you come up with any examples that can't be reduced down like this?
Quote:I stil think cloning can be avoided, even with more than 2 stored states by using a rolling buffers approach. Let's say you wanted 30 back-states stored. You could have a circular list of 30 state buffers. Let's say buffer 0 is the initial game state, making buffer 1 the current "write" state. Once that loop is finished, buffer 2 becomes the new write state, leaving buffer 1 as the read state, and buffer 0 still intact as a previous state.
Yeah, when I said clone I didn't mean to imply a certain storage type behind the scenes - I'd still call your ring-buffer "cloning", because each time the write-cursor is incremented, don't you still have to copy buffer[oldWrite] into buffer[newWrite]?
[edit]Oh I see - if the buffer only stores states for a single variable, then no "cloning" is needed, because a new state is only created when writing to the buffer.
In my system, where I buffer a whole state-object, I need to "clone" it first in order to ensure that un-modified fields are propagated through the buffer.[/edit]
Quote:Each potentially mutable value of an entity could have its own buffer ring, each entry of which could be marked with the game-loop timestamp of when that value was written, which would allow you to figure out what value was in effect, say, 17 loops ago, and prevents needless cloning of unchanged values just to fill the next buffer.
I think tracking changes for each individual value will cause more overhead than the unnecessary copies caused by tracking changes for a whole object. This is just a guess though; serious investigation and profiling is needed here ;)

[edit2] Sorry for hijacking this thread with my long posts!

[Edited by - Hodgman on June 1, 2008 9:23:34 PM]
Quote:Original post by Hodgman
Hmm... I hadn't actually thought about the situation where A and B are non-commutative; I had only thought about the case where A1 conflicts with A2, or B1 with B2.
Would it be possible to write code so that all A/B conflicts can be reduced to an A1/A2 conflict instead?
It would be much easier to manage if each function-call-type only had to be checked against others of the same type, and not against function-calls of all types.


I see what you're saying, and yeah, that would make it a lot easier to handle collisions. The easiest way to reduce everything to A1/A2-style conflicts would be to allow only field-setters in the write queue - that is, functions that modify a single field only. There are a few issues with this approach as well though.

First, it puts pressure on your class interface, in that for each internal field the class would need to expose one and only one possible setter (either a setter that affects only one field, or a compound setter that affects several). I imagine this would become increasingly difficult to enforce as entities become richer and their interactions more complex. As an example, consider a TakeDamage() command that decremented your health as well as knocking you backwards. It would prevent you from having any other commands that affect velocity but not health, unless you issue those higher-level commands as a series of atomic ones - but that brings up the second problem.

If you split higher-level commands into a series of atomic ones and push the series onto the write queue, this opens up the potential for half-completed commands that result in conceptually inconsistent state. Using the TakeDamage() example again, it consists of two parts: changing health, and changing velocity. If there were another command in the queue that was also a change in health, those two commands would cancel out, but that would leave the change-velocity command. The player would perceive that as getting knocked back without actually taking any damage at all.

Personally, I'm not too comfortable with allowing half-complete commands. They should either complete fully or not at all. The only way I can think of to do this, though, is bundling the commands, but that puts us right back where we started with the whole recursion issue.
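For what it's worth, the bundling I have in mind would look something like this (entirely hypothetical):

#include <vector>
#include <boost/function.hpp>

struct CommandBundle
{
    std::vector< boost::function<void()> > parts; // e.g. changeHealth + changeVelocity
    bool cancelled;

    CommandBundle() : cancelled( false ) {}

    // A collision cancels the whole bundle, so there is never a half-applied
    // TakeDamage(): either every part runs, or none do.
    void Execute()
    {
        if( cancelled ) return;
        for( size_t i = 0; i < parts.size(); ++i ) parts[i]();
    }
};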

The more I think about this, the more I feel like I'm not seeing the forest for the trees. There has to be a simpler solution that's "good enough." This kind of theoretical, general-case, pathology-resistant musing is starting to sound like a good case for the "write games, not engines" approach :)

Quote:Original post by Hodgman
I think tracking changes for each individual value will cause more overhead than the unnecessary copies caused by tracking changes for a whole object. This is just a guess though; serious investigation and profiling is needed here ;)


Very possibly. That was more of a devil's advocate suggestion on my part. In the end I suppose it depends on the topology of your entities: if they contain a ton of fields, individual tracking might help; if they're smaller, wholesale cloning would likely be faster.
