
Hodgman

Posted 05 August 2012 - 10:53 PM

Shared-memory style concurrency requires very careful synchronisation, and is therefore error prone and potentially very inefficient. The cost of locking/unlocking a mutex (the computer-science concept, aka "critical section", not the Windows object of the same name) is a red herring: the real cost is that multiple threads contending for the same cache lines of RAM will cause your processing cores to wait for each other at a hardware level. You should avoid any design that causes cache contention.
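
To make the cache-contention point concrete, here's a minimal sketch (not from the original post, assuming a C++11 compiler; the struct and function names are just for illustration): two threads each increment their own counter and never touch the other thread's data, yet if the two counters land on the same cache line the cores still stall on each other. Padding the counters onto separate cache lines removes the contention.

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct Contended {
    std::atomic<long> a{0};
    std::atomic<long> b{0};             // likely shares a 64-byte cache line with 'a'
};

struct Padded {
    alignas(64) std::atomic<long> a{0}; // each counter gets its own cache line
    alignas(64) std::atomic<long> b{0};
};

// Time two threads hammering on c.a and c.b respectively. There is no lock
// anywhere; any slowdown comes purely from the hardware bouncing the cache
// line between cores.
template <class Counters>
long long timeMs(Counters& c)
{
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&c] { for (long i = 0; i < 10000000; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&c] { for (long i = 0; i < 10000000; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();
}

int main()
{
    Contended sharedLine;
    Padded    separateLines;
    std::printf("same cache line:      %lld ms\n", timeMs(sharedLine));
    std::printf("separate cache lines: %lld ms\n", timeMs(separateLines));
}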

Message-passing style concurrency is the right default. You don't have any mutable data shared between threads at any time, so you don't need any explicit synchronisation. You instead chain together sequences of tasks which communicate via messages, and let the underlying messaging framework handle the implementation details (which internally still rely on shared memory). This is not only a high-performance design when implemented properly (it's the standard approach used in HPC systems like MPI), but is much less error prone for the programmer (deadlocks, race conditions? nope). It's also based on well-defined branches of mathematics, such as process calculi and the actor model, and can therefore be reasoned about within well-defined logical frameworks, unlike shared-memory programming, which only allows for ad-hoc reasoning.
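
As a minimal sketch of the idea (again not from the post; the MessageQueue type and the sentinel value are assumptions for illustration): a tiny blocking queue hides its mutex and condition variable inside itself, and the two stages only ever exchange messages through it, so the stage code contains no explicit synchronisation. Real frameworks (MPI, actor libraries, job systems) provide this plumbing for you, usually far more efficiently.

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// The only place any locking happens; callers just push and pop messages.
template <class T>
class MessageQueue {
public:
    void push(T msg) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    T pop() { // blocks until a message arrives
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

int main() {
    MessageQueue<int> toWorker, toMain;

    // Worker stage: transforms each message and forwards the result.
    // It owns no data shared with the main thread.
    std::thread worker([&] {
        for (;;) {
            int msg = toWorker.pop();
            if (msg < 0) break;       // negative value = shut-down sentinel
            toMain.push(msg * msg);
        }
    });

    for (int i = 1; i <= 5; ++i) toWorker.push(i);
    toWorker.push(-1);                // tell the worker to stop

    for (int i = 0; i < 5; ++i) std::printf("%d\n", toMain.pop());
    worker.join();
    return 0;
}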

