The key point is that threads change how processor time is shared. A single-core CPU can execute only one program at a time; its registers are used by only one application at any given moment. DOS, for example, was a single-tasking system: you could run only one application at a time. Windows is a multitasking system, where you can launch several applications at once, but the processor still executes one thing at a time. The trick is that the operating system divides CPU time among the applications you have running.
Both responses were very helpful. In fact, I have experienced this problem with I/O when trying to load geometry: the loading process hangs the entire program, and the more complex the geometry, the longer the wait. So, from what I understand, threading would allow me to load geometry on the side without halting the main thread? I assume it would still take time, but it would not stop me from continuing to interact with the main program running on the main thread?
From the OS's point of view, when you create an additional thread you create a kind of "sub-application" that can be executed separately from the main application or from any other thread.
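A minimal sketch of what this looks like in C++ with `std::thread`: the lambda below is the "sub-application" the OS schedules independently of the thread that created it. The names `compute_sum` and `run_on_worker_thread` are illustrative, standing in for real work.

```cpp
#include <thread>

// Hypothetical workload standing in for real work.
int compute_sum(int n) {
    int total = 0;
    for (int i = 1; i <= n; ++i) total += i;
    return total;
}

int run_on_worker_thread(int n) {
    int result = 0;
    // The lambda runs as a separate thread: the OS schedules it
    // independently of the thread that created it.
    std::thread worker([&result, n] { result = compute_sum(n); });
    worker.join();  // wait for the worker to finish before reading result
    return result;
}
```

Here the creating thread immediately waits with `join()`, so there is no speedup yet; the point is only that the lambda executes on its own OS-scheduled thread.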
The main problems in multithreaded programming are locks and concurrent data modification.
A lock situation appears when one thread needs data that can be provided only by another thread, but that other thread is too busy to answer. The first thread can then do nothing but wait until the other thread responds.
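This waiting pattern can be sketched with a condition variable: one thread blocks until another has produced the data it needs. All names here are illustrative, not from the original posts.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch: this thread blocks until a producer thread provides data.
int wait_for_producer() {
    std::mutex m;
    std::condition_variable cv;
    int data = 0;
    bool ready = false;

    std::thread producer([&] {
        {
            std::lock_guard<std::mutex> lock(m);
            data = 42;       // the data only this thread can provide
            ready = true;
        }
        cv.notify_one();     // wake the waiting thread
    });

    // The consumer (this thread) can do nothing but wait until the
    // producer has answered -- exactly the situation described above.
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [&] { return ready; });
    producer.join();
    return data;
}
```

If the producer never answers, the consumer waits forever, which is why such dependencies between threads need careful design.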
Concurrent data modification is the situation where two or more threads try to change or use the same data at the same time. To resolve such problems, Java, for example, uses synchronized methods and blocks.
C++ has all of these problems too. But the upside is that you really can distribute processor time and make your application much faster. You just have to find the pieces of code that can be separated out and run as threads.
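For the geometry-loading case asked about above, one natural piece to separate out is the load itself. A sketch using `std::async`, assuming a hypothetical `load_geometry()` that parses a file into vertices (the function body and the `"model.obj"` path are placeholders): the load runs on another thread while the main loop keeps going, and the main thread only blocks when it finally needs the data.

```cpp
#include <chrono>
#include <future>
#include <string>
#include <vector>

// Placeholder for slow file I/O and parsing; a real version would
// read and parse the file at 'path'.
std::vector<float> load_geometry(const std::string& path) {
    (void)path;
    return {0.0f, 1.0f, 2.0f};
}

int main_loop_with_background_load() {
    // Launch the load on a separate thread.
    std::future<std::vector<float>> pending =
        std::async(std::launch::async, load_geometry, "model.obj");

    int frames = 0;
    // The main thread stays responsive while the load is in flight:
    // render, handle input, etc., checking periodically for completion.
    while (pending.wait_for(std::chrono::milliseconds(0)) !=
           std::future_status::ready) {
        ++frames;  // one iteration of the interactive main loop
    }
    std::vector<float> vertices = pending.get();
    return static_cast<int>(vertices.size());
}
```

This matches the behavior asked about: the load still takes the same amount of time, but the main thread is free to keep interacting with the user instead of hanging until it completes.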