# OpenMP beginner


## Recommended Posts

Hello, since I am a complete beginner at OpenMP, I would appreciate it if someone could help me parallelize the following code. I have tried things like

```c
#pragma omp parallel for shared(sum)
```

and

```c
#pragma omp parallel for default(none) private(i,j,sum) shared(Iij, quantity, ne, nx)
```

but I couldn't get it to work correctly.

```c
double sum = 0;
double quantity;
int i, j;
for (i = 1; i < ne; i++)
    for (j = 1; j < nx; j++) {
        quantity = some_function(i, j);
        sum = sum + quantity;
        Iij[i * nx + j] = quantity;
    }
```

thanks a lot

##### Share on other sites
You should prefer to declare variables in the smallest context you can. This is particularly important in this situation, because otherwise OpenMP doesn't know which variables are thread-specific and which are shared.

```c
double sum = 0;
#pragma omp parallel for reduction(+:sum)
for (int i = 1; i < ne; i++) {
    for (int j = 1; j < nx; j++) {
        double quantity = some_function(i, j);
        sum = sum + quantity;
        Iij[i * nx + j] = quantity;
    }
}
```

##### Share on other sites
I have simplified it a little.

Now I am using the following:

```c
#pragma omp parallel for
for (int i = 1; i < ne; i++)
    for (int j = 1; j < nx; j++)
        Iij[i * nx + j] = some_function(i, j);
```

But when I calculate the sum of the matrix I get a different result (probably a race condition?).

##### Share on other sites
I can't tell if some_function(i,j) is thread-safe or not. Post a full program. Keep it as short as possible, please.

##### Share on other sites
some_function(i,j) is a Monte Carlo integration algorithm that uses recursion. Unfortunately I can't post the code because it is copyrighted, but it must be the source of the problem, because when I replace it with a simpler function I get correct results.

Any suggestions would be appreciated...

##### Share on other sites
It sounds like your program has global state that's messing with the parallelization. Try to find any global variables (or static local variables, or static class members) that some_function might be using. Then there are several things you can do to make the problem go away:
* Use a lock.
* Replace the global object with one instance per thread.
* Refactor the code so the global object is no longer needed.

##### Share on other sites
Hello, thank you very much for the reply. You were right: there were two static variables, and when I removed them I got results consistent with the sequential version. But after looking at the algorithm more closely, I decided not to change anything. Those two static variables are two random number generators, so there is no need to make them local: the successive numbers they produce are independent, so they don't affect the stochastic properties of the outcome. The reason I get slightly different results every time I run the parallel version is that the integrations take place in a different order, so the sequence of random numbers differs from run to run.

thanks again

##### Share on other sites
You probably should avoid using a global PRNG anyway. If there is nothing preventing two threads from querying it at the same time, you could get really weird results out. For instance, you could get the same number returned to two threads, or even weirder things, depending on how the PRNG is implemented. At the very least, you should put a lock around access to the PRNG.

It is also nice to be able to reproduce the results of a Monte Carlo simulation, for instance so you can debug a problem, or so you can test that a reimplementation of some part of the program doesn't change results. You could achieve this by replacing the PRNG with a hash function (where you feed i, j and some other numbers from the guts of some_function). That's probably the cleanest method.
