
MaxDaten

Member Since 31 Jan 2013
Offline Last Active Mar 15 2013 01:12 PM

#5028004 [C#/C++]Multithreading

Posted by MaxDaten on 01 February 2013 - 07:53 PM

Approach 1) We put the decompression code into a separate background thread, which sleeps unless it has work to do. When it does have work to do, we're relying on the OS's thread scheduler to choose which thread is running on the single CPU core. By default on Windows, the scheduler granularity is 15ms, so the decompression thread will require 67 time-slices to complete its 1-second task. If our main thread is attempting to run at a fixed real-time frame rate of 60Hz, then during the time that the decompression thread is awake, this is now impossible. From time to time (unpredictably), the main thread will be put to sleep for an entire 15ms time-slice (or maybe multiple time-slices).
That kind of unpredictability is simply not acceptable to a real-time application.
 
Approach 2) We manually time-slice the decompression code, so that after it's run for ~1ms (or some other chosen threshold), it stores its state and returns/yields -- a.k.a. cooperative multi-tasking. We run the decompression code on the "main thread" every frame, knowing that the biggest interruption that this task can have is a very predictable 1ms per frame.
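The manual time-slicing from approach 2 might look roughly like the following C++ sketch (the Decompressor class, its chunk size and the 1ms budget are hypothetical placeholders, not code from this thread): the task processes small chunks until its per-frame budget is used up, stores its progress in member state, and yields back to the caller.

#include <chrono>
#include <cstddef>

class Decompressor {
public:
    bool finished() const { return pos_ >= total_; }

    // Process chunks until ~1ms of this frame's budget is spent, then yield;
    // the internal state (pos_) persists between calls, so the next frame
    // simply continues where this one stopped.
    void runSlice(std::chrono::microseconds budget = std::chrono::microseconds(1000)) {
        const auto start = std::chrono::steady_clock::now();
        while (!finished() &&
               std::chrono::steady_clock::now() - start < budget) {
            decompressChunk();               // hypothetical: decompress one small block
        }
    }

private:
    void decompressChunk() { pos_ += 4096; } // placeholder for the real work
    std::size_t pos_   = 0;
    std::size_t total_ = 64 * 1024 * 1024;
};

// In the main loop, once per frame:
//   decompressor.runSlice();   // costs a predictable ~1ms per frame
//   update();
//   render();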

 

I guess I expressed myself wrong.

 

I've tried to outline this dilemma and misunderstanding. My statement was meant to be: you'd better not use any MT approach just to speed up your application, regardless of the core count. You use MT to run things at the same time (for games: in the same frame). That is independent of which high- or low-level approach you choose. I completely agree with you: approach 1 is the worst case on a single core and approach 2 is more predictable, yes. But these approaches differ "only" in the level at which they operate (which is not unimportant and will have a deep impact, indeed). You showed that it is sometimes better for the application to manage its (time) resources on its own. But the complexity this adds to the project shouldn't be underestimated (for example: you will lose determinism).

 

And again, even (or especially) for games, you don't choose an MT approach to make the game's performance better. If a game dev thinks "hmm, my performance is too bad, let's switch MT on and hope it gets better", that's the wrong motivation for MT. The best motivation to use any low- or high-level MT approach is to let things happen in parallel. For example: seamless environment streaming. In fact, if you choose approach 2 (a.k.a. high-level MT), you will lose performance if you measure it in FPS, which is not a good performance metric anyway and is another topic.
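To illustrate what such streaming could look like, here is a minimal C++ sketch (Chunk, the request list and the GPU upload step are made-up placeholders): a background thread loads chunks and hands them to the main thread through a mutex-protected queue, so every frame only pays for a quick, non-blocking hand-over.

#include <mutex>
#include <queue>
#include <string>
#include <vector>

struct Chunk { std::string name; /* vertex/texture data would live here */ };

std::mutex        g_mutex;
std::queue<Chunk> g_readyChunks;   // chunks finished by the streaming thread

// Runs on a background thread, e.g. std::thread streamer(streamingThread, requests);
void streamingThread(const std::vector<std::string>& requests) {
    for (const auto& name : requests) {
        Chunk c{name};                            // a real game would load from disk here
        std::lock_guard<std::mutex> lock(g_mutex);
        g_readyChunks.push(std::move(c));
    }
}

// Called by the main thread once per frame: integrate whatever is ready, never block on I/O.
void integrateReadyChunks() {
    std::lock_guard<std::mutex> lock(g_mutex);
    while (!g_readyChunks.empty()) {
        /* upload g_readyChunks.front() to the GPU */
        g_readyChunks.pop();
    }
}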




#5027974 [C#/C++]Multithreading

Posted by MaxDaten on 01 February 2013 - 05:10 PM

Multithreading is often misunderstood, even among devs. Multithreading is primarily used for parallelism, not to speed things up. For example, in games multithreading is ideal for keeping the game responsive while it is loading resources (for the next area), the user is providing input, or the AI is calculating (re)actions.
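As a rough sketch of that idea (loadArea, AreaData and the commented-out game calls are made-up names, not from this thread), std::async can push the loading onto a worker thread while the main loop keeps polling without ever blocking:

#include <chrono>
#include <future>
#include <string>

struct AreaData { std::string name; };

AreaData loadArea(const std::string& name) {
    // expensive disk I/O and parsing would happen here
    return AreaData{name};
}

int main() {
    // Kick off the load on a worker thread.
    std::future<AreaData> pending =
        std::async(std::launch::async, loadArea, std::string("next_area"));

    bool loading = true;
    while (loading) {
        // handleInput(); updateAI(); render();  -- the frame keeps running normally

        // Poll without blocking; swap the new area in once it is ready.
        if (pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            AreaData area = pending.get();
            // switchToArea(area);
            loading = false;
        }
    }
    return 0;
}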

 

Yes, you can achieve speed-ups with MT, and MT is often used for speed-ups, for example rendering in suites like 3ds Max or Maya. But your problem must be suited to running in parallel. And in most cases the speed-up is far from linear. With a perfect linear speed-up you would potentially gain 300% performance on a quad-core, which seems huge. But a linear speed-up is unrealistic. You have to coordinate (mutexes, MVars, synchronization, STM) the different processes or threads at their meeting points, and that results in a slowdown. It's utopian to expect a whole game to gain a 300% speed-up; even +100% is far from reality. In most cases you will solve specific sub-problems with MT, or, and that is the most common way, you decouple sub-systems from each other so they run in parallel on their own processing units.
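To put rough numbers on why the gain stays well below +300% on a quad-core, Amdahl's law (speedup = 1 / ((1 - p) + p / n)) can be evaluated in a few lines of C++; the 70% parallel fraction below is just an assumed example, not a measurement from any real game:

#include <cstdio>

int main() {
    const double p = 0.70;   // assumed fraction of the work that can actually run in parallel
    for (int n = 1; n <= 4; ++n) {
        const double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%d core(s): %.2fx (+%.0f%%)\n", n, speedup, (speedup - 1.0) * 100.0);
    }
    // With p = 0.70 a quad-core tops out around 2.1x (+110%) -- and that is before
    // any synchronization overhead (mutexes, contention) is even counted.
    return 0;
}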

 

MT is often a trade-off. MT will make your project much more complex. More complexity makes your project more error-prone and slows down overall progress. Your code base becomes more fragile and "uglified". What's the benefit? More responsiveness? That's fine! A 10%-30% "speed-up"? Maybe not worth it.

 

I strongly suspect the EVE Online client makes use of parallelism, but not in the way gamers expect. A gamer looks at the task manager and complains that only a single CPU core is being used. But in which situation? Maybe that situation is not really parallelizable. For example: pumping data to the graphics card is not well parallelizable, sometimes not even possible. Let me speculate: EVE Online parallelizes the client view, the networking, and resource loading. In a huge fleet fight, when everything is already loaded on the client side, the bottleneck will be the rendering. And when the CPU part of the rendering isn't well parallelizable, the CPU usage is confined to the single rendering core.



