The future of massive multithreading?

5 comments, last by Scourage 12 years, 7 months ago
I have a question for you!

[size="4"]Premise: Given the current state of technology, the cost of multithreading (task switching overhead, data-sharing wait-loops and vastly increased complexity) most often outweighs the benefits. Especially if you use more than a handful of threads and/or don't know what youre doing.
[size="4"]Hypothesis: There might still be a point to massively multithread your programming project, IF you expect the project to live for many years, since we expect multithreading performance to vastly increase in the near future. And by massive, I mean splitting the most computation-intensive task into several hundred (or some other Big Number) parallell threads. In essence getting superior performance in the future, by paying with terrible performance in the now.

What are your thoughts on this? Silly? Potentially true?
Is there a long-term benefit to massively multithreading now?
If you think you can use hundreds of threads, you're better off using some sort of task abstraction, since you'll likely want to (or need to) split the logical parts across machines, even in the non-near future.
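For illustration, here is a rough C++11 sketch of what such a task abstraction might look like: work is expressed as independent jobs pushed into a queue, and a pool of workers sized to the machine drains it. The names (TaskPool, submit, finish) are invented for this example, and a real job system would sleep idle workers on a condition variable rather than spinning; the same job objects could, in principle, later be serialized and shipped to another process or machine.

// Hypothetical TaskPool: independent jobs go into a queue, and worker threads
// (one per hardware thread) pull them out and run them.
#include <atomic>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class TaskPool
{
public:
    explicit TaskPool(unsigned workers = std::thread::hardware_concurrency())
    {
        if (workers == 0) workers = 1;   // hardware_concurrency() may report 0 if unknown
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }

    // Queue up one independent unit of work.
    void submit(std::function<void()> job)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        jobs_.push(std::move(job));
    }

    // Call once all jobs have been submitted; blocks until the queue is drained.
    void finish()
    {
        done_ = true;
        for (auto& t : threads_)
            t.join();
    }

private:
    void run()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                if (!jobs_.empty())
                {
                    job = std::move(jobs_.front());
                    jobs_.pop();
                }
                else if (done_)
                {
                    return;              // no more work is coming
                }
                // else: spin until work arrives (a real pool would wait on a
                // condition variable here instead of burning CPU)
            }
            if (job)
                job();                   // run the job outside the lock
        }
    }

    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::atomic<bool> done_{false};
    std::vector<std::thread> threads_;
};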
The cost of multithreading is task-switching overhead, data-sharing wait loops and vastly increased complexity, if you don't know what you're doing.
Fixed that for you. If designed properly (read: not shared memory + mutual exclusion paradigm) then the above costs don't exist.
by massive, I mean splitting the most computation-intensive task into several hundred (or some other Big Number) parallel threads. In essence, getting superior performance in the future by paying with terrible performance in the now. What are your thoughts on this? Silly?
You should only create one thread per hardware thread. If your current hardware only has 4 hardware threads, then yes, creating 100 software threads is indeed extremely silly.
However, if you design your tasks so they can be split across anywhere from 1 to 100 threads, then it's not silly at all: you get great performance now and greater performance in the future. Yes, there is a long-term benefit.
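To make "design for 1 to N threads" concrete, here is a minimal C++11 sketch (the function names and the per-element work are placeholders, not anyone's actual engine code): one computation-heavy loop is split across however many hardware threads are available, and each worker owns a disjoint slice of the data, so no locks are needed.

// Split one big loop across all available hardware threads.
#include <cstddef>
#include <thread>
#include <vector>

void process_chunk(std::vector<float>& data, std::size_t begin, std::size_t end)
{
    for (std::size_t i = begin; i < end; ++i)
        data[i] *= data[i];              // placeholder for the real per-element work
}

void process_in_parallel(std::vector<float>& data)
{
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;                   // hardware_concurrency() may report 0 if unknown

    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / n;
    for (unsigned t = 0; t < n; ++t)
    {
        std::size_t begin = t * chunk;
        std::size_t end   = (t == n - 1) ? data.size() : begin + chunk;
        // Each worker owns a disjoint slice, so no synchronization is needed
        // beyond the final join.
        workers.emplace_back(process_chunk, std::ref(data), begin, end);
    }
    for (auto& w : workers)
        w.join();
}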

Fixed that for you. If designed properly (read: not shared memory + mutual exclusion paradigm) then the above costs don't exist.

True, although I find it troublesome to "design properly" to the extent you're talking about, at least with the tasks I'm used to dealing with (game programming). Being able to do that sounds like savant skills to me!


However, if you design your tasks so they can be split across anywhere from 1 to 100 threads, then it's not silly at all: you get great performance now and greater performance in the future. Yes, there is a long-term benefit.

Good idea, and that was a lovely link you found me (I read it all). Thank you!

the cost of multithreading [...] most often outweighs the benefits [...] if [...] you don't know what you're doing.


Well that statement is certainly true. If you don't know how to use a tool then you're probably doing it wrong.
Massive multithreading is the wrong approach to most problems, and always will be.

Asymmetric multiprocessing is far more likely to become the standard model for parallel computation in the future, and it isn't hard to see that we are already strongly moving in that direction. I've been predicting the AMP trend for several years now, and evidence continues to mount that this is how life will look in another 10 years.

AMP, in case you aren't familiar with it, is a generalization of Symmetric Multiprocessing (SMP), which is used in today's multicore/multiprocessor machines. AMP is about having multiple heterogeneous processor types in the same machine. The rise of the GPU was the first strong indicator of AMP's potential as a long-term trend, and APUs are the next logical step. As additional hybrids and computation models become available, AMP will become increasingly important.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

So massive multithreading isn't really the answer, but it's close. Writing your software to be parallel is what I think is closer to the answer. With a parallel application, your data and the modification of that data occur in 2 threads on a dual-core machine or 100 threads on your shiny, new Centa-core processor (yes, I know I made that up). The point is that the application should be able to scale up the number of worker threads working on the data in order to take advantage of the hardware it is running on. The trend for hardware manufacturers over the last several years has been to grow out (more cores) rather than up (higher clock speeds), and I don't see that going away anytime soon.
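As a small, hypothetical illustration of that scaling idea in C++11 (parallel_sum and its data are made up for this post): the worker count comes from std::thread::hardware_concurrency() at runtime, so the same code uses 2 workers on a dual-core machine and many more on a machine with more cores.

// Worker count is decided by the hardware at runtime, not baked into the design.
#include <cstddef>
#include <future>
#include <numeric>
#include <thread>
#include <vector>

double parallel_sum(const std::vector<double>& data)
{
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;       // fall back if the count is unknown

    std::vector<std::future<double>> partials;
    std::size_t chunk = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w)
    {
        std::size_t begin = w * chunk;
        std::size_t end   = (w == workers - 1) ? data.size() : begin + chunk;
        // Each worker sums its own slice independently.
        partials.push_back(std::async(std::launch::async, [&data, begin, end] {
            return std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        }));
    }

    double total = 0.0;
    for (auto& p : partials)
        total += p.get();                // combine the per-worker results
    return total;
}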

[size="3"]Halfway down the trail to Hell...

