Starting 2 external programs at the same time

My graphics engine (it uses ray tracing) is pretty slow on its own, so I want to try splitting the work across several programs running at the same time (i.e. distributed rendering). I once tried the "system()" function in another program to start the engine, and it worked.

Unfortunately it only starts one program after the other when I'm using this code (on Windows):


#include <iostream>
#include <cstdlib>
#include <string>

using namespace std;

int main()
{
    // system() blocks until the launched program exits, so the second
    // call only runs after the first instance has finished.
    string cmd = "MyGraphicEngine.exe";
    system(cmd.c_str());
    system(cmd.c_str());

    return 0;
}


Is there a way to run "MyGraphicEngine.exe" twice at the same time?

Kind regards, rumpfi88
Yes. Since you are launching an .exe I'll assume you're using Windows, so:

http://msdn.microsoft.com/en-us/library/windows/desktop/ms682425%28v=vs.85%29.aspx (For other platforms this is done differently)
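For example, here's a minimal sketch using CreateProcess to launch two copies without waiting for either one; it assumes MyGraphicEngine.exe is in the working directory and leaves most optional parameters at their defaults:

#include <windows.h>
#include <iostream>

// Launch one copy of the engine without waiting for it to finish.
bool LaunchEngine()
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // CreateProcess requires a writable command-line buffer.
    char cmdLine[] = "MyGraphicEngine.exe";

    if (!CreateProcessA(nullptr, cmdLine, nullptr, nullptr, FALSE,
                        0, nullptr, nullptr, &si, &pi))
    {
        std::cerr << "CreateProcess failed, error " << GetLastError() << '\n';
        return false;
    }

    // Not needed further here; keep pi.hProcess instead if you want to
    // wait on the instance later.
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return true;
}

int main()
{
    LaunchEngine();
    LaunchEngine(); // both copies now run concurrently
    return 0;
}

If you later want to wait for the instances to finish, keep the process handles and pass them to WaitForSingleObject/WaitForMultipleObjects instead of closing them right away.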
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Thanks a lot. If you also know how to do it on Ubuntu, could you send me an example/link as well?

Thanks a lot. If you also know how to do it on Ubuntu, could you send me an example/link as well?


For Linux/Unix you normally call fork() to duplicate the current process, and then one of the exec* functions (there are several to choose from) to replace the newly forked process with the program you specify.


#include <cstdlib>
#include <iostream>
#include <unistd.h>

int main()
{
    pid_t pID = fork();
    if (pID == 0) {
        // This is the child process created by fork, so we exec here.
        // execl never returns unless there is an error; it replaces the calling process.
        execl("./MyGraphicEngine", "./MyGraphicEngine", (char*)0);
        std::cerr << "execl failed" << std::endl;
        exit(1);
    } else if (pID < 0) {
        // fork failed
        std::cerr << "Error message about the failed fork" << std::endl;
        exit(1);
    } else {
        // Code that only the parent process executes: fork again to launch
        // another copy, or do whatever else you want to do.
        pid_t pID2 = fork();
        if (pID2 == 0) {
            execl("./MyGraphicEngine", "./MyGraphicEngine", (char*)0);
            std::cerr << "execl failed" << std::endl;
            exit(1);
        } else if (pID2 < 0) {
            std::cerr << "Error message about the failed fork" << std::endl;
        }
    }
    return 0;
}
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Honest question: are you so experienced that you've deliberately decided to shun threads in favour of processes, or not sufficiently experienced to have even considered using threads at all?

If you're in the second category, you might want to look into using threads rather than processes. There are some very real advantages to using processes for concurrency, but doing so is typically more work (especially in C-like languages).
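For illustration, here is a minimal sketch of the thread-based approach using std::thread (C++11). RenderSlice is a hypothetical stand-in for whatever per-slice entry point your engine would expose once it is a function rather than a separate .exe:

#include <iostream>
#include <thread>
#include <vector>

// Hypothetical per-slice render function; replace with your engine's real
// entry point. Each thread renders its own portion of the image.
void RenderSlice(int sliceIndex, int sliceCount)
{
    std::cout << "rendering slice " << sliceIndex << " of " << sliceCount << "\n";
}

int main()
{
    const int sliceCount = 2; // or std::thread::hardware_concurrency()
    std::vector<std::thread> workers;

    for (int i = 0; i < sliceCount; ++i)
        workers.emplace_back(RenderSlice, i, sliceCount);

    for (std::thread& t : workers)
        t.join(); // wait for all slices to finish

    return 0;
}

Because threads share the process's memory, the partial results can be written straight into one shared image buffer instead of being passed between separate processes.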
// windows command line
start myengine.exe
start myengine.exe

// *nix shell
myengine & myengine &


The above accomplishes exactly the same thing, but out of the box. Repeat for the number of physical cores.

It obviously relies on the ability to pass parameters to the exe to specify how to distribute the work. It won't magically make the renderer distributed or anything like that.
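As an illustration of that last point, here is a minimal sketch of what the engine side might look like if each instance were told which slice to render via command-line arguments; the "slice index / slice count" scheme is just an assumption for the example:

#include <cstdlib>
#include <iostream>

// Hypothetical invocation: MyGraphicEngine.exe <sliceIndex> <sliceCount>
// e.g. "start MyGraphicEngine.exe 0 2" and "start MyGraphicEngine.exe 1 2"
int main(int argc, char* argv[])
{
    int sliceIndex = 0;
    int sliceCount = 1;

    if (argc >= 3) {
        sliceIndex = std::atoi(argv[1]); // which slice this instance renders
        sliceCount = std::atoi(argv[2]); // how many instances were started
    }

    std::cout << "Rendering slice " << sliceIndex
              << " of " << sliceCount << "\n";
    // ... trace only the rows belonging to this slice, write them out,
    //     and let a separate step combine the partial images ...
    return 0;
}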

