
dr4cula

Member Since 24 Jul 2013
Offline Last Active Dec 03 2014 09:56 AM

Topics I've Started

Level sets and loss of volume

04 October 2014 - 01:50 PM

Hi,

 

I've been trying to convert my simple 2D fluid solver into a liquid solver with a free surface. For that I've introduced the level set method into the application. However, I seem to be losing mass/volume at a ridiculous speed. Here are two videos showing exactly what I mean: I switch between rendering the advected color field and the level set where phi < 0 (i.e. the inside of the liquid, rendered as black).

 

https://www.dropbox.com/s/qz7ujya1oyommls/levelset0.mp4?dl=0

https://www.dropbox.com/s/g2pzl121sp9td6g/levelset1.mp4?dl=0

 

From what I've read in the papers, the problem is that after advection phi no longer holds the signed distance and needs to be reinitialized. However, I have no idea how one would do that. Some papers mention the fast marching method, but from what I can tell it isn't well suited to GPUs (my solver is completely GPU based). Other papers mention Eikonal solvers in their references, but I have no idea how to proceed from there.

 

Any help would be greatly appreciated: bonus points if anyone can link to a tutorial/instructional text that isn't a high-level implementation paper glossing over the details.

 

Here's how I've defined the signed distance initially:

float2 p = input.position.xy - pointCenter;
float s = length(p) - radius;
return float4(s, s, s, 1.0f);
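
For reference, one GPU-friendly alternative to fast marching is PDE-based reinitialization (Sussman et al.): iterate phi_t = sign(phi0) * (1 - |grad phi|) for a few pseudo-time steps per frame, which relaxes phi back towards a signed distance field. The following is only a minimal sketch, not drop-in code: the resource names (phi, phi0, dtau) are illustrative, the differencing is plain central (a robust version would use upwind/Godunov stencils), and the grid spacing is assumed to be one pixel, matching the signed distance definition above.

Texture2D phi : register(t0);             // level set being relaxed (ping-ponged between iterations)
Texture2D phi0 : register(t1);            // level set right after advection (kept fixed)
SamplerState pointSampler : register(s0);

cbuffer ReinitCB : register(b0) {
    float2 texDim;   // grid size in pixels
    float dtau;      // pseudo-time step, e.g. 0.5 (half a cell)
};

float4 PSMain(PSInput input) : SV_TARGET {
    float2 uv = input.position.xy / texDim;
    float2 px = 1.0f / texDim;

    float c = phi.Sample(pointSampler, uv).r;
    float l = phi.Sample(pointSampler, uv - float2(px.x, 0.0f)).r;
    float r = phi.Sample(pointSampler, uv + float2(px.x, 0.0f)).r;
    float t = phi.Sample(pointSampler, uv - float2(0.0f, px.y)).r;
    float b = phi.Sample(pointSampler, uv + float2(0.0f, px.y)).r;

    // Central-difference gradient in cell units (grid spacing = 1 pixel).
    float2 grad = 0.5f * float2(r - l, b - t);
    float gradLen = max(length(grad), 1e-6f);

    // Smoothed sign of the post-advection field.
    float p0 = phi0.Sample(pointSampler, uv).r;
    float s = p0 / sqrt(p0 * p0 + 1.0f);

    // Relax phi towards a signed distance field (|grad phi| -> 1).
    float phiNew = c + dtau * s * (1.0f - gradLen);
    return float4(phiNew, phiNew, phiNew, 1.0f);
}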
 
Thanks in advance!

Volume Rendering: Eye Position to Texture Space?

27 September 2014 - 01:36 PM

Hello,

 

I've been trying to set up a volume renderer but I'm having trouble getting the ray marching data set up correctly. I use a cube whose extents lie along the positive axes, i.e. the vertex definitions look like (1,0,0), (1,1,0), etc., which gives me implicitly defined texture coordinates (the v direction needs to be inverted later). Then what I do:

 

1) render the cube with front face culling and record the distance from the eye to the fragment's position in the alpha channel (the GPU Gems 3 article's source code uses the w-component of the vertex after the world-view-projection multiplication), i.e. in the pixel shader, after interpolation of the per-vertex distance, return float4(0.0f, 0.0f, 0.0f, dist);

2) turn on subtractive blending, render the cube with back face culling, and record the negated generated texture coordinates in the rgb channels and the distance to this fragment in the alpha channel, i.e. return float4(-texCoord, dist), where texCoord is the original vertex input coordinate.

 

I now have a texture where the rgb channels give the entry point of the ray in texture space and the alpha channel gives the distance through the volume.

 

However, how would I now get the direction vector? GPU Gems 3 says:

 

"The ray direction is given by the vector from the eye to the entry point (both in texture space)."

 

How does one transform the eye position to texture space, so I could put it in a constant buffer for the shader?
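
In case it helps, here is a hedged sketch of one way to do it, assuming the cube's object space already spans [0,1]^3 as described above, so object space coincides with texture space (up to whatever v inversion is applied when sampling). invWorld and eyePosWS are illustrative constant buffer fields; invWorld would be the inverse of the cube's world matrix, computed on the CPU (e.g. with D3DXMatrixInverse) and uploaded per frame.

cbuffer VolumeCB : register(b0) {
    float4x4 invWorld;   // world -> object (== texture) space
    float3 eyePosWS;     // camera position in world space
};

float3 EyePosTexSpace() {
    // Row-vector convention (mul(vector, matrix)); swap the operands if your
    // matrices are uploaded transposed.
    float3 eyeObj = mul(float4(eyePosWS, 1.0f), invWorld).xyz;
    // Apply the same v inversion here that is applied to the texture coordinates.
    return eyeObj;
}

// The ray direction at a fragment whose entry point was read back from the
// rgb channels is then: normalize(entryPointTex - EyePosTexSpace()).

Since the result is the same for every fragment, the transform can equally be done once on the CPU and the eye position uploaded directly in texture space.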

 

Thanks in advance!


Constructing min/max corners of a collision mesh

29 August 2014 - 07:34 AM

Hello,
 
I'm having a bit of trouble with constructing the minimum and maximum corners of a collision mesh. I think it's best to explain what steps I'm going through:
 
1) load 3D mesh, assign a world matrix to it (translation, rotation), assign bounding box half extents
2) when a specific event happens in the application, generate a 3D bounding box for the mesh using the previously defined half extents. This is mainly for debugging, as I'm filling the volume defined by this mesh with smaller cubes later. For rendering I'm generating a cuboid mesh centered around (0,0,0) with the half extents and assigning the same world matrix as the original mesh. Rendering this shows the box mesh is in the correct place.
3) Now I need to check for collisions only on the xz-plane. To do this, I "construct" a cuboid from min/max corners and transform it to the position of the mesh. Here is the problem, though: I don't know where I'm going wrong, but the min/max corners don't seem to be right. For example:
mesh half extents on xz (1.05, 0.5) -> constructed min = (-4, -3.5) and max = (-5, -1.45). How can min.x > max.x?
 
Here's the relevant code bit (broadCollisionMeshExtents == half extents):
 
D3DXMATRIX wMat = linkedObj.object->GetWorldMatrix();
D3DXVECTOR2 extents(linkedObj.broadCollisionMeshExtents->x, linkedObj.broadCollisionMeshExtents->z);
 
D3DXVECTOR3 minimum = D3DXVECTOR3(-extents.x, 0.0f, -extents.y);
D3DXVECTOR3 maximum = D3DXVECTOR3(extents.x, 0.0f, extents.y);
D3DXVECTOR4 min4;
D3DXVec3Transform(&min4, &minimum, &wMat);
D3DXVECTOR4 max4;
D3DXVec3Transform(&max4, &maximum, &wMat);
 
minimum = D3DXVECTOR3(min4);
maximum = D3DXVECTOR3(max4);
 
This seems to work as long as the objects don't have any rotation on them. I can sort of force-fix it by doing:
 
float maxX = maximum.x;
float maxZ = maximum.z;
float minX = minimum.x;
float minZ = minimum.z;
maximum = D3DXVECTOR3(max(maxX, minX), 0.0f, max(maxZ, minZ));
minimum = D3DXVECTOR3(min(maxX, minX), 0.0f, min(maxZ, minZ));
 
... but that's just silly. I'm not entirely sure what is going on; any help would be greatly appreciated.
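
For what it's worth: transforming only two opposite corners can swap min and max once a rotation is involved (a rotation near 180° maps the local -x corner onto the world +x side, which is how min.x ends up greater than max.x), and even with the component-wise min/max fix, two corners are not enough to bound a rotated box in general. A sketch of one way to build xz bounds that stay valid under any rotation, using the same D3DX calls as above (FLT_MAX comes from <cfloat>), is to transform all four local corners and take the component-wise min/max:

D3DXMATRIX wMat = linkedObj.object->GetWorldMatrix();
D3DXVECTOR2 extents(linkedObj.broadCollisionMeshExtents->x, linkedObj.broadCollisionMeshExtents->z);

// All four corners of the local-space rectangle on the xz-plane.
D3DXVECTOR3 corners[4] = {
    D3DXVECTOR3(-extents.x, 0.0f, -extents.y),
    D3DXVECTOR3(-extents.x, 0.0f,  extents.y),
    D3DXVECTOR3( extents.x, 0.0f, -extents.y),
    D3DXVECTOR3( extents.x, 0.0f,  extents.y)
};

D3DXVECTOR3 minimum(FLT_MAX, 0.0f, FLT_MAX);
D3DXVECTOR3 maximum(-FLT_MAX, 0.0f, -FLT_MAX);
for (int i = 0; i < 4; ++i) {
    D3DXVECTOR4 c4;
    D3DXVec3Transform(&c4, &corners[i], &wMat);
    minimum.x = min(minimum.x, c4.x);
    minimum.z = min(minimum.z, c4.z);
    maximum.x = max(maximum.x, c4.x);
    maximum.z = max(maximum.z, c4.z);
}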
 
Thanks in advance!

Dynamic Injection of Particles Using the CS

15 August 2014 - 04:58 AM

Hello,

 

I'm having a bit of trouble implementing a fully GPU-based particle system. I've implemented one similar to what's presented in J. Zink's book and it works fine 99% of the time. The problem is that some of the particles seem to get stuck: they don't move forward along their velocity, flicker for a couple of frames, and then suddenly jump forward again. I'm not sure what could cause this kind of behavior.

 

In the following shaders, particleCount comes from a constant buffer that is updated with CopyStructureCount from the UAV holding the newest data.

 

The update shader:

[numthreads(512, 1, 1)]
void CSMain(uint3 dispatchThreadID : SV_DispatchThreadID) {
       uint threadID = dispatchThreadID.x + dispatchThreadID.y * 512 * dispatchThreadID.z * 512 * 512;
 
       if(threadID < particleCount) {
           Particle p = oldState.Consume();
 
           p.velocity += acceleration * dt;
           p.position += p.velocity * dt;
           p.time += dt;
 
           if(p.time < particleLifetime) {
              newState.Append(p);
           }
      }
}

The particle adding shader:

[numthreads(1, 1, 1)]
void CSMain(uint3 dispatchThreadID : SV_DispatchThreadID) {
 
     if(particleCount < 512) {
 
        Particle p;
        p.position = position.xyz;
        p.velocity = velocity.xyz;
        p.time = 0.0f;
 
        particles.Append(p);
     }
}

Checking with Nsight, particleCount stays at 512 the whole time. If I swap the order of operations to Update(); Add();, the value in the constant buffer is 512 before the update and 513 after (i.e. a buffer overflow). With the Add(); Update(); order, however, the values stay the same.
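
Not necessarily the cause of the stuck particles, but worth double-checking: for a one-dimensional dispatch the flattened index is simply dispatchThreadID.x (the multiply terms in the update shader above all collapse to zero because the y and z thread IDs are 0), and Consume() should only run for threads below the count copied into the constant buffer, as the guard above already does. A minimal sketch of that pattern follows; the struct layout and constant buffer names are illustrative rather than taken from the book.

struct Particle {
    float3 position;
    float3 velocity;
    float time;
};

cbuffer SimulationCB : register(b0) {
    float3 acceleration;
    float dt;
    float particleLifetime;
    uint particleCount;   // refreshed with CopyStructureCount before this dispatch
};

ConsumeStructuredBuffer<Particle> oldState : register(u0);
AppendStructuredBuffer<Particle> newState : register(u1);

[numthreads(512, 1, 1)]
void CSMain(uint3 dispatchThreadID : SV_DispatchThreadID) {
    // For Dispatch(ceil(count / 512.0), 1, 1) the x component already is the
    // particle index; no extra multiply terms are needed.
    uint threadID = dispatchThreadID.x;
    if (threadID >= particleCount) {
        return;   // never Consume() more elements than the copied count
    }

    Particle p = oldState.Consume();
    p.velocity += acceleration * dt;
    p.position += p.velocity * dt;
    p.time += dt;

    if (p.time < particleLifetime) {
        newState.Append(p);
    }
}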

 

I'd submit a screenshot of what's happening but it's really difficult to capture since the particles are stuck for about 1-2s, sometimes less, sometimes more.

 

Thanks in advance!


Basic Fluid Dynamics

01 August 2014 - 08:29 AM

Hi,

 

I've been trying to get a basic fluid simulation up and running but I've run into a dead end: for several days now I've been trying to find where I've gone wrong and I'm still looking. I started with a 3D Eulerian grid-based simulation, but debugging that was a nightmare, so I've now coded up a 2D version and I can see the issue more clearly (at least I think so). The simulation seems to run fine for the first couple of frames but after that it just seems to stop: the velocity doesn't spread through the fluid (you can see small changes within the area that was initially affected). From what I can tell, the divergence I calculate to find the pressure goes to 0 after a few frames (~10). Thus the pressure will also be 0, and the projection at the end just copies the previous frame's velocity.

 

Here are a couple of screenshots of different properties: (not scale-corrected colors, e.g. there are negative values for velocity)

velocity: http://postimg.org/image/th0fraz5d/

divergence: http://postimg.org/image/am7wseu6r/

pressure is just a mirrored image of the divergence (or at least it looks that way; it covers a bigger area and disappears a bit more slowly, though)

color property (or "dye" if you will): http://postimg.org/image/yun01ufzd/

 

The color/velocity values were written using a simple Gaussian splat, with white as the color value and (10.0f, 10.0f, 0.0f) as the velocity, so the direction of motion seems about correct to me.

 

Since I'm doing the simulation on the GPU, there's a lot of code involved with ping-ponging render targets etc., but none of my debugging tools show any issues with the pipeline, and all data buffers seem to contain the correct data at each step. So I'm left wondering if my maths is correct: I've consulted both GPU Gems books that had articles about this, checked my versions against theirs, and I simply can't find any differences, yet the simulation isn't working as intended.

 

For now, I'll just post my advection (for color/velocity) and divergence routines, as (to me) it seems to be going wrong at one of these steps already.

 

input.position is the pixel position (SV_Position). Also, here's a helper function used in both cases (texDim is the simulation grid width/height, which currently matches the viewport):

float2 PositionToTexCoord(float2 position) {
    return float2(position.x / texDim.x, position.y / texDim.y);
}

Advection:

 

float4 PSMain(PSInput input) : SV_TARGET {
    float2 center = PositionToTexCoord(input.position.xy);
    float2 pos = input.position.xy - dt * velocity.Sample(pointSampler, center).rg;

    float2 samplingPoint = PositionToTexCoord(pos);

    return qtyToAdvect.Sample(linearSampler, samplingPoint);
}
 
Divergence:
float PSMain(PSInput input) : SV_TARGET {
    float2 left = PositionToTexCoord(input.position.xy - float2(1.0f, 0.0f));
    float2 right = PositionToTexCoord(input.position.xy + float2(1.0f, 0.0f));
    float2 bottom = PositionToTexCoord(input.position.xy + float2(0.0f, 1.0f));
    float2 top = PositionToTexCoord(input.position.xy - float2(0.0f, 1.0f));

    float4 l = velocity.Sample(pointSampler, left);
    float4 r = velocity.Sample(pointSampler, right);
    float4 b = velocity.Sample(pointSampler, bottom);
    float4 t = velocity.Sample(pointSampler, top);

    float div = 0.5f * ((r.x - l.x) + (t.y - b.y));

    return div;
}
 
I really don't know how to go on from here. I'll go and compare the codes again but I've done it so many times already that I doubt I'll find any differences. Hope someone can help me out.
 
EDIT: Just as a note, I haven't implemented boundary conditions yet but I don't think this is the root cause. Nevertheless, I thought I'd mention it just in case.
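
As a hedged point of comparison, here is the same pair of passes written with the grid spacing handled explicitly, in the style of the GPU Gems formulation. The constant name rdx and the assumption that velocity is stored in cells per second are illustrative (as are the entry-point names), and the divergence below follows the screen-space convention where y grows downwards, so the sign of the y term flips if the velocity field stores "up" as positive.

Texture2D velocity : register(t0);
Texture2D qtyToAdvect : register(t1);
SamplerState pointSampler : register(s0);
SamplerState linearSampler : register(s1);

cbuffer FluidCB : register(b0) {
    float2 texDim;   // simulation grid size in cells
    float dt;        // time step
    float rdx;       // 1 / cell size (1.0 if velocity is stored in cells per second)
};

float4 AdvectPS(float4 position : SV_Position) : SV_TARGET {
    float2 uv = position.xy / texDim;

    // Backtrace in cell units, then convert the offset to texture coordinates.
    float2 vel = velocity.Sample(pointSampler, uv).rg;
    float2 samplingPoint = uv - (dt * rdx * vel) / texDim;

    return qtyToAdvect.Sample(linearSampler, samplingPoint);
}

float DivergencePS(float4 position : SV_Position) : SV_TARGET {
    float2 px = 1.0f / texDim;   // one cell, in texture coordinates
    float2 uv = position.xy / texDim;

    float2 l = velocity.Sample(pointSampler, uv - float2(px.x, 0.0f)).rg;
    float2 r = velocity.Sample(pointSampler, uv + float2(px.x, 0.0f)).rg;
    float2 t = velocity.Sample(pointSampler, uv - float2(0.0f, px.y)).rg;   // one cell up on screen
    float2 b = velocity.Sample(pointSampler, uv + float2(0.0f, px.y)).rg;   // one cell down on screen

    // Central differences with y increasing downwards; negate the second term
    // if the velocity field stores "up" as positive y.
    return 0.5f * rdx * ((r.x - l.x) + (b.y - t.y));
}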
 
Thanks in advance!
