# N-body simulator


## Recommended Posts

Dear all,

First of all, I am not sure if this post belongs in this forum.

I am trying to program an N-body simulator for scientific purposes, using the implementation from chapter 17 of Numerical Recipes, 3rd edition, which covers ordinary differential equations.

The code should distribute the work between several cores.

After thinking about it a little, I came up with the following two approaches:

1st) The most obvious way is to use an integrator (for example, an adaptive 8th-order Runge-Kutta) on the full system of 6N equations. The issue is that I can't think of a way to parallelize the code (and I wouldn't want to mess with the implementation, which is quite complicated).

2nd) The second way is to compute the potential as a sum at every time step and then apply an integrator N times (once per particle) with a constant time step dt. Then I update the coordinates and repeat for the next time step. The advantage is that I can easily distribute the particles equally between the cores of the system, but I am not sure about the accuracy of this method, and apart from that it seems a little redundant to me.
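To make the structure of the second approach concrete, here is a toy serial sketch in plain Python (this is not the Numerical Recipes code; the gravitational constant `G`, the softening length `eps`, and the symplectic-Euler update are my own illustrative choices):

```python
import math

G = 1.0  # gravitational constant in simulation units (assumption)

def accelerations(pos, mass, eps=1e-3):
    """O(N^2) pairwise gravitational accelerations from frozen positions.
    The outer loop over i is independent per particle, so it can be split
    across cores (e.g. one chunk of i-indices per worker)."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0]**2 + dx[1]**2 + dx[2]**2 + eps**2  # softened distance^2
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc

def step(pos, vel, mass, dt):
    """One constant-dt step: all forces first from frozen positions,
    then an independent per-particle update (symplectic Euler here,
    just to show the two-phase structure)."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += dt * acc[i][k]
            pos[i][k] += dt * vel[i][k]
```

The point is that both phases parallelize over particles: the force loop reads only the frozen positions, and the update loop touches each particle independently.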

Can you please comment on the above?

The objective is to parallelize without heavily modifying the Numerical Recipes implementation.

##### Share on other sites
I'm okay with this post: although it isn't about game development, it's something I can see a game wanting to do, and you've outlined your own candidate technical approaches rather than merely asking for a solution.

That said, I don't have any specific advice for you myself. I will tell you that some researchers at the University of North Carolina at Chapel Hill (and likely other places) developed an N-body simulation that exploited GPUs a few years ago. It's published in GPU Gems 3, available to read online here: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch31.html

##### Share on other sites
The staples of n-body simulation are the Barnes-Hut treecode and the fast multipole method (FMM); for serious problem sizes one of them is almost required, though smaller problems can be brute-forced on a GPU. Both algorithms have been adapted to many-core systems. There is not much benefit in running either on a GPU versus a multi-core CPU (they behave similarly performance-wise), but the CPU versions scale much further due to larger memory and the algorithms' reliance on branching.

The accuracy of both methods is tunable, and adequate for typical problems.

GPUs can brute-force n-body problems with several thousand bodies, but suffer from the O(n^2) cost. Brute force does not outperform the O(n log n) algorithms for any reasonable n.
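To make the tree idea concrete, here is a toy 1-D Barnes-Hut force walk over a hypothetical node layout (the dict fields and the default opening angle `theta` are my own illustrative choices, not any particular library's):

```python
def force_walk(node, x, theta=0.5, G=1.0):
    """Recursive Barnes-Hut force walk (1-D toy).
    node = {'com': centre of mass, 'mass': total mass,
            'size': spatial extent, 'children': child nodes ([] for a leaf)}.
    A node is accepted as a single point mass when it subtends a small
    enough angle, i.e. size/distance < theta; otherwise we descend.
    Pruning distant subtrees this way is what turns O(N^2) into O(N log N)."""
    d = node['com'] - x
    dist = abs(d)
    if not node['children'] or (dist > 0 and node['size'] / dist < theta):
        if dist == 0:
            return 0.0  # skip self-interaction
        return G * node['mass'] * d / dist**3  # signed 1-D force per unit mass
    return sum(force_walk(c, x, theta, G) for c in node['children'])
```

Shrinking `theta` toward zero forces the walk down to the leaves and recovers the exact pairwise sum, which is the tunable-accuracy knob mentioned above.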

##### Share on other sites
The only real difference between your two methods is whether you use an adaptive or a constant timestep. In both cases you're essentially doing 6N individual integrations over the given timestep; the only difference is whether you change the timestep between steps.

Using a constant timestep is pretty much standard practice in molecular dynamics, where many researchers simply use velocity Verlet. Check out LAMMPS for a popular parallel code that does this. The only problem is that you need to "guess" a timestep that will provide sufficient accuracy throughout the simulation. This isn't a big problem in molecular dynamics, where the main issue is convergence and the maximum timestep for convergence won't vary much during a given simulation.
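For reference, the velocity Verlet update itself is tiny. A scalar sketch (production codes like LAMMPS apply the same update per component over arrays of particles):

```python
def velocity_verlet(x, v, accel, dt, steps):
    """Constant-timestep velocity Verlet.
    accel(x) returns the acceleration at position x. The scheme reuses
    the acceleration from the end of one step at the start of the next,
    so the force is evaluated only once per step."""
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt     # position update
        a_new = accel(x)                       # force at the new position
        v = v + 0.5 * (a + a_new) * dt         # velocity update (averaged a)
        a = a_new
    return x, v
```

Being symplectic, it keeps the energy error bounded over long runs, which is exactly why it is the default in molecular dynamics.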

For an adaptive method, I've had good luck with SUNDIALS. All the work is done through its NVECTOR interface, which makes parallelization easy enough. It provides a basic parallel implementation of the NVECTOR interface, but you'll probably want to write your own that matches how you'd like to partition your data.
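To illustrate what an adaptive integrator does internally (this is not the SUNDIALS API, just a toy embedded Euler/Heun pair in plain Python; the tolerance, safety factor, and step-size clamps are my own illustrative choices):

```python
def adaptive_heun(f, t, y, t_end, tol=1e-8):
    """Adaptive stepping with an embedded Euler/Heun (order 1/2) pair:
    estimate the local error from the difference of the two orders,
    accept or reject the step, and rescale dt accordingly."""
    dt = (t_end - t) / 100.0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_euler = y + dt * k1                 # 1st-order estimate
        y_heun = y + 0.5 * dt * (k1 + k2)     # 2nd-order estimate
        err = abs(y_heun - y_euler)           # local error estimate
        if err <= tol or dt < 1e-12:
            t += dt
            y = y_heun                        # advance with the higher order
        # rescale dt from the error estimate, with a 0.9 safety factor
        dt *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-15)) ** 0.5))
    return y
```

Real libraries do the same accept/reject/rescale loop with higher-order embedded pairs (and, in SUNDIALS' case, over whatever vector layout your NVECTOR implementation provides).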
