
Olof Hedman

Posted 14 September 2012 - 08:42 AM

This means that a) you can achieve some form of parallelization without actually coding it, and b) your results might not be as good as you expect (since there is already a lot of optimization going on behind the scenes), or even worse than what the compiler achieves for you.


I don't think any compiler would do anything like making your single-threaded program multithreaded.
What you're probably thinking of is wide data instructions, like SIMD, which the compiler can insert to process data faster, though still in a single thread.
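To make the SIMD point concrete, here is a sketch of the kind of loop a compiler can typically auto-vectorize on its own (for example GCC or Clang at -O2/-O3): it may emit instructions that add several floats per instruction, but everything still happens in one thread. The function name is just illustrative.

```c
#include <stddef.h>

/* A simple element-wise loop like this is a classic candidate for
 * compiler auto-vectorization: the compiler may rewrite it to use
 * SIMD instructions, processing several elements per instruction,
 * still within a single thread. */
void add_arrays(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```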

If you start a new thread, each thread can run an additional line at the same time. Two threads means twice as many lines; ten threads means ten times as many.


Not really... you can't run more threads in parallel than you have hardware threads in the CPU. With more OS threads than that, they have to share the hardware threads through classic preemptive multitasking. An i7 with hyperthreading enabled gives you 2 hardware threads per core, though even those are not fully parallel. Still, you can benefit from more OS threads than hardware threads, since threads sometimes stall waiting for memory or disk. CPU architecture is complicated.
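If you want to see how many hardware threads the OS actually exposes, POSIX systems report it through sysconf; a minimal sketch (the function name is mine):

```c
#include <unistd.h>

/* Returns the number of logical processors (hardware threads) the OS
 * reports as online — e.g. 8 on a quad-core i7 with hyperthreading.
 * Any OS threads beyond this count get time-sliced onto them. */
long hardware_threads(void)
{
    return sysconf(_SC_NPROCESSORS_ONLN);
}
```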

Anyhow, processes and threads are not really part of any programming language; they are part of the OS, and are exposed to programming languages through various APIs.

In a way, you can see the process as your application's "container" in the OS while it's loaded into RAM: it keeps a record of any memory allocated to it, threads started, files opened, etc.
Threads are the units of execution in your process; they run code and share memory with all the other threads in that process.
There is always at least one, the "main thread" (which in C starts at the entry point int main(int argc, char** argv)).
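As a minimal sketch of one such OS API (POSIX threads here, linked with -pthread; the function names besides the pthread calls are mine): the main thread spawns a worker, and because both threads live in the same process, a value the worker writes to process memory is visible to the main thread after the join.

```c
#include <pthread.h>

/* Memory owned by the process, shared by all its threads. */
static int shared_result = 0;

static void *worker(void *arg)
{
    (void)arg;
    shared_result = 42;   /* visible to the main thread after the join */
    return NULL;
}

/* Spawns one extra thread, waits for it to finish, and returns the
 * value the worker wrote into shared process memory. */
int run_worker(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);   /* the join is the synchronisation point */
    return shared_result;
}
```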

Since threads share memory, any memory accesses they make have to be controlled. This is where thread programming can get messy.
The easiest and (when done right) best-performing solution is to make sure threads never access the same areas of memory at the same time, by giving each thread its own copy of everything it needs.
You will always need some synchronisation points, but the fewer you have, the lower the risk of deadlocks and of threads just sleeping while waiting for other threads to complete.
Synchronisation is done through special objects called "locks", "mutexes", "semaphores", and probably more names I don't recall right now. They all work more or less the same way, though.
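Here is a minimal sketch of one of those objects in use, again assuming POSIX threads: two threads bump the same counter, and the mutex makes sure only one of them touches it at a time, so no increments get lost.

```c
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* only one thread past this point */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Runs two threads that each add 100000 to the shared counter.
 * With the mutex, the result is always exactly 200000; without it,
 * increments can be lost to the race on counter. */
long count_with_two_threads(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```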

Some programming languages have advanced built-in features that make it easier to write and structure a multithreaded program, but in the end it all boils down to semaphores, copying data around, and launching thread entry-point functions.
