
# Helicobster

Member Since 05 Feb 2010
Last Active Feb 19 2016 01:20 AM

### #4980303 SPH simulation - pressure force doesn't work

Posted on 15 September 2012 - 12:28 AM

Is there any particular reason you're dividing the pressure force by the particle density?

```
particles[ i ].velocity += ( particles[ i ].force / particles.density + gravity ) * dt;
```

That means that as particles pile up, the pressure force will decrease. That's the opposite of what you want to happen.

You should instead divide the force by each particle's mass, such that F = ma. Density is not a substitute for mass in this case. Density shouldn't even be considered as a per-particle value - you're only storing it so that you can sample the density field via a smoothing kernel.

Simply removing that reference to density in the integration step should get you some fluid-like behaviour.
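As a sketch of that fix, the integration step might look like the following (the `Particle` fields and 2D `Vec2` type here are assumptions for illustration, not the original poster's actual structs):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

struct Particle {
    Vec2  force;     // accumulated pressure/viscosity forces
    Vec2  velocity;
    float mass;      // per-particle mass - NOT the sampled density
};

// Semi-implicit Euler step: a = F/m, so divide the force by mass.
void integrate(Particle& p, Vec2 gravity, float dt) {
    p.velocity.x += (p.force.x / p.mass + gravity.x) * dt;
    p.velocity.y += (p.force.y / p.mass + gravity.y) * dt;
}
```

With this, a stronger pressure force always means a stronger acceleration, which is the behaviour you want as particles pile up.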

### #4795427 Performance regarding dimensions of a tile in pixels

Posted on 07 April 2011 - 03:00 AM

> When textures are stored on the graphics card, they are upscaled to the closest power-of-two size, so 257x257 becomes a 512x512 sized texture in memory. Also in ye olde times graphics cards did not support non-power-of-two texture sizes at all.

Nope - most GPUs these days support non-power-of-two texture sizes. If it supports OpenGL 2.0, it'll support any reasonable resolution you can throw at it.

There is still a slight advantage to power-of-two texture sizes, though. When creating MIP-maps for trilinear-filtered texturing, each level of the MIP-map is half the size of the one before it. If the texture's size is a power of two, then each pixel of each MIP-map level can be made by simply averaging four pixels from the level before it. This 'binning' reduction is easier to optimise, and it gives cleaner & more predictable results than resampling the image. It's usually handled by the GPU, but you can still reasonably expect power-of-two textures to load a little faster, and/or look a little better at a distance.
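For a power-of-two texture, that 2x2 'binning' is just an average of four texels. A minimal single-channel sketch (assuming even width and height, which holds for every power-of-two MIP level above 1x1):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Halve a single-channel image by averaging each 2x2 block of texels,
// producing the next MIP level. Assumes w and h are even.
std::vector<float> downsample(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            float sum = src[(2 * y)     * w + 2 * x]
                      + src[(2 * y)     * w + 2 * x + 1]
                      + src[(2 * y + 1) * w + 2 * x]
                      + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = sum * 0.25f;
        }
    return dst;
}
```

For a non-power-of-two size you'd have to resample with fractional texel weights instead, which is where the extra cost and blurring come from.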

There's no disadvantage to using power-of-two texture sizes, either, so while it's not necessary at all, it's kinda habitual. Much like if you're selling random stuff in a garage sale, you're more likely to offer things for \$5, \$20 or \$100, than for more arbitrary prices like \$7, \$23.45 or \$102. Round numbers are nice.

### #4794086 Point / Vector understanding

Posted on 04 April 2011 - 12:55 AM

> I can measure the angle between two vectors, but not between two points. I can measure the length of a vector, but there's no such thing as the length of a point. You can normalize a vector, but not a point. You can scale a vector by 10, but not a point. Those should be enough examples.

Your examples are of things you've decided that you're not going to be able to do, because you've decided in advance to make a distinction between points & vectors. Those of us who do not make that semantic distinction do those things all the time, however, without issue. If point==vector, angles & lengths & scales all become meaningful & obvious automatically. Any possible ambiguities are resolved by the contexts in which they're used.

For example, if you're scaling a point by 10 because you're scaling the vertices of a 3D mesh, it makes perfect sense in that context. There's no context in which you'd need to find the angle between just two points, though, so that issue never arises. Well... not for much longer than it takes to realise what you're doing, anyway, and think of a third point to use.
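To make that concrete, here's a minimal sketch of the unified view (a hypothetical `Vec3`; any real maths library's type works the same way). One type serves as both a mesh vertex and a direction, and context resolves any ambiguity:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3  operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3  operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

// "Scale a point by 10": scaling a mesh vertex about the origin.
Vec3 scaleVertex(Vec3 v) { return v * 10.0f; }

// "Distance between two points": the length of their difference.
// Same operations throughout - subtracting first is what gives the
// result its meaning, not a separate point type.
float distance(Vec3 a, Vec3 b) { return (a - b).length(); }
```

No separate `Point3` type is needed for either operation to be well-defined.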

It sounds like your academic background has left you with unnecessary baggage, and now you're using more code to produce less functionality. You may be quite sure you'd be deemed correct in academic circles, but are you sure that the extra complexity is doing you & your code any good?

### #4780511 OpenMP

Posted on 01 March 2011 - 06:30 AM

```
int r = 0, c = 0;
#pragma omp parallel for
for (int i = 0; i < M.Nz; i = i + 1)
{
    c = M.Col[i] - 1; r = M.Row[i] - 1;
    result.V[r] = result.V[r] + M.V[i] * v.V[c];
}
```

You're declaring r and c outside the parallelised loop. That means that inside the loop, all the threads are reading and writing the same shared c and r variables, so another thread can overwrite them between the assignment and the store into result.V[r]. When you go to store the result, there's only a roughly 1/num_threads chance that you're writing it to the right place.

Any variables you're declaring for read/write use in the loop have to be declared/constructed within that loop, and then it should be fine.

```
#pragma omp parallel for
for (int i = 0; i < M.Nz; i = i + 1)
{
    int c = M.Col[i] - 1; int r = M.Row[i] - 1;    // ints declared here!
    result.V[r] = result.V[r] + M.V[i] * v.V[c];
}
```
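One caveat worth adding: if several nonzeros share the same row index, as they generally will in a COO-format sparse matrix, the updates to result.V[r] can still collide across threads even with the private declarations. A hedged sketch of one way to guard against that with `#pragma omp atomic` (the struct and field names here are assumptions, mirroring the snippet above):

```cpp
#include <cassert>
#include <vector>

// Hypothetical COO sparse matrix, mirroring the 1-based Row/Col layout
// in the snippet above.
struct CooMatrix {
    int Nz;                        // number of stored nonzeros
    std::vector<int>    Row, Col;  // 1-based row/column indices
    std::vector<double> V;         // nonzero values
};

// y += M * x, with loop-local indices AND an atomic update, since two
// nonzeros on the same row would otherwise race on y[r].
void spmv(const CooMatrix& M, const std::vector<double>& x,
          std::vector<double>& y) {
    #pragma omp parallel for
    for (int i = 0; i < M.Nz; ++i) {
        int r = M.Row[i] - 1;
        int c = M.Col[i] - 1;
        #pragma omp atomic
        y[r] += M.V[i] * x[c];
    }
}
```

Without OpenMP enabled the pragmas are simply ignored and the loop runs serially, so the function stays correct either way.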
