

dr4cula

Member Since 24 Jul 2013
Offline Last Active Apr 24 2015 12:19 PM

#5204738 Shallow Water

Posted by dr4cula on 16 January 2015 - 11:04 AM

Hi,

 

I've been experimenting with the shallow water equations, but I can't seem to get my implementation correct. I'm following this, except I'm doing everything on the GPU. I'm not sure where I keep going wrong: see here. From my experiments with the full Navier-Stokes equations, this makes some sense: I remember getting visually similar results (in 2D), where the circle forms square-like corners (I plotted a circle with velocity (1,1) at every pixel). However, that only happened when I stopped the simulation after the advection step ("skipped" projection). I'm not sure what is happening here. I've tried changing the signs when sampling data, as well as switching the order of operations around, but nothing seems to work. At one point I ended up with this, which is obviously not correct.

 

Here are my simulation kernels. (I won't post my advection kernel, as it is the same one I used for my full NS solver. Also note that I'm using a staggered grid, whereby a single pixel stores the left/bottom pair of velocities for the velocity kernels; boundaries are set appropriately to account for the array size differences.)

 

UpdateHeight:

float4 PSMain(PSInput input) : SV_TARGET {
    float2 texCoord = input.position.xy * recTexDimensions.xy;

    float vL = velocity.Sample(pointSampler, texCoord).x;
    float vR = velocity.Sample(pointSampler, texCoord + float2(recTexDimensions.x, 0.0f)).x;
    float vT = velocity.Sample(pointSampler, texCoord + float2(0.0f, recTexDimensions.y)).y;
    float vB = velocity.Sample(pointSampler, texCoord).y;

    float h = height.Sample(pointSampler, texCoord).x;

    float newH = h - h * ((vR - vL) * recTexDimensions.x + (vT - vB) * recTexDimensions.y) * dt;

    return float4(newH, 0.0f, 0.0f, 0.0f);
}
 
UpdateU:
float4 PSMain(PSInput input) : SV_TARGET {
    float2 texCoord = input.position.xy * recTexDimensions.xy;

    float u = velocity.Sample(pointSampler, texCoord).x;

    float hL = height.Sample(pointSampler, texCoord - float2(recTexDimensions.x, 0.0f)).x;
    float hR = height.Sample(pointSampler, texCoord).x;

    float uNew = u + g * (hL - hR) * recTexDimensions.x * dt;

    return float4(uNew, 0.0f, 0.0f, 0.0f);
}

UpdateV:
float4 PSMain(PSInput input) : SV_TARGET {
    float2 texCoord = input.position.xy * recTexDimensions.xy;

    float v = velocity.Sample(pointSampler, texCoord).y;

    float hB = height.Sample(pointSampler, texCoord - float2(0.0f, recTexDimensions.y)).x;
    float hT = height.Sample(pointSampler, texCoord).x;

    float vNew = v + g * (hB - hT) * recTexDimensions.y * dt;

    return float4(0.0f, vNew, 0.0f, 0.0f);
}

 

I've literally spent the entire day debugging this and I've got no idea why nothing seems to work... Hopefully some of you guys have implemented this before and can help me out.

 

Thanks in advance!




#5183336 Volume Rendering: Eye Position to Texture Space?

Posted by dr4cula on 27 September 2014 - 01:36 PM

Hello,

 

I've been trying to set up a volume renderer, but I'm having trouble getting the ray-marching data set up correctly. I use a cube with extents in the positive directions, i.e. the vertex definitions look like (1,0,0), (1,1,0), etc., which gives me implicitly defined texture coordinates (where the v direction needs to be inverted later). Then what I do is:

 

1) Render the cube with front-face culling and record the distance from the eye to the fragment's position in the alpha channel (the GPU Gems 3 article's source code uses the w-component of the vertex after the world-view-projection multiplication), i.e. in the pixel shader, after interpolation of the per-vertex distance, return float4(0.0f, 0.0f, 0.0f, dist);

2) Turn on subtractive blending, render the cube with back-face culling, and record the negated texture coordinates in the RGB channels and the distance to this fragment in the alpha channel, i.e. return float4(-texCoord, dist), where texCoord is the original vertex input coordinate.

 

I now have a texture where the rgb channels give the entry point of the ray in texture space and the alpha channel gives the distance through the volume.

 

However, how would I now get the direction vector? GPU Gems 3 says:

 

"The ray direction is given by the vector from the eye to the entry point (both in texture space)."

 

How does one transform the eye position to texture space, so I could put it in a constant buffer for the shader?

 

Thanks in advance!




#5170934 Basic Fluid Dynamics

Posted by dr4cula on 01 August 2014 - 12:47 PM

float2 bottom = PositionToTexCoord(input.position.xy + float2(0.0f, 1.0f));
float2 top = PositionToTexCoord(input.position.xy - float2(0.0f, 1.0f));

and you are doing

float div = 0.5f * ((r.x - l.x) + (t.y - b.y));

try swapping the +/- where you define bottom and top

 

 

YES! Thank you so much! I guess debugging something for several hours has a tendency to turn out like this >.<

 

The result: http://postimg.org/image/v9kgv9m2b/

 

To be honest though, doesn't the SV_Position semantic range over [0, screenWidth] x [0, screenHeight] with the origin at the top-left corner? If so, for a pixel at (10,10), the bottom neighbour would be (10,11) and the top one (10,9), wouldn't it?

 

Thanks again!




#5133593 Smart Pointers Confusion

Posted by dr4cula on 22 February 2014 - 10:38 AM

Hello,

 

I was playing around with smart pointers in C++ (VS2010) and encountered some unexpected behavior. Best explained by this code snippet:

// allocate the memory
Test* dynamic = new Test;
 
// take over the ownership
std::unique_ptr<Test> takeOver(dynamic);
 
// wait for ENTER to continue
std::getchar();
 
// clear the memory
takeOver.reset();
 
// shouldn't this cause a crash?
dynamic->reference(1);
 
// oddly enough the following line doesn't cause an immediate crash either
// if I press ENTER next though, it crashes and the debugger says that it might be due to heap corruption
//delete dynamic;
 
std::getchar();

Output I get from the application is the following:

 

Constructing test...
 
Destroying test...
accessing func for owner 1
 
From what I can gather, the memory does get freed, since a second delete call causes heap corruption. But why does the original pointer still appear valid, and why does the function call still print its result?
 
Any help would greatly be appreciated and thanks in advance!



#5106894 DirectX11: Reading raw texture data on the CPU

Posted by dr4cula on 04 November 2013 - 06:09 AM

 
When I was loading the textures, I used ZeroMemory() to fill in the D3DX11_IMAGE_LOAD_INFO struct and only set the format, usage and CPU access flags myself. However, this caused the full mipmap chain to be created because
 
 

MipLevels
Type: UINT
The maximum number of mipmap levels in the texture. See the remarks in D3D11_TEX1D_SRV. Using 0 or D3DX11_DEFAULT will cause a full mipmap chain to be created.

 
 
Once I changed the MipLevels value to 1, my original copying code worked. Oops!
 
Thanks for your replies though!




#5100644 DirectX11 entry point

Posted by dr4cula on 11 October 2013 - 03:59 PM

Tried the debug flag and called this right before my draw call:

p_device_->QueryInterface(__uuidof(ID3D11Debug), reinterpret_cast<void**>(&p_debugger_));
HRESULT contextValidation = p_debugger_->ValidateContext(p_deviceContext_);
if (FAILED(contextValidation)) {
    MessageBox(NULL, L"Context validation failed.", L"Error", 0);
}

And still nothing...

 

Perf studio frame debugger shows me this: http://i44.tinypic.com/2wfn3ty.png

I tried changing the winding of the triangle's vertices, but that didn't change much: the triangle doesn't appear in the window, but it does appear in the frame debugger again, though it's slightly different from the other one.

But the actual frame window is still showing black.

 

EDIT: So, I managed to fix it. I started comparing my configuration to the ones found online, and the problem was the following line:

 

p_deviceContext_->ClearDepthStencilView(p_depthStencilView_, D3D11_CLEAR_DEPTH, 1.0f, 0);

 

I had 0.0f instead of 1.0f for the clear value, which I assume didn't match the depth comparison function set for the depth buffer. The depth buffer has screwed me over in OpenGL, and I guess it keeps haunting me in DirectX as well :P Though, Perf studio's output is still odd, considering the actual result is this: http://i43.tinypic.com/dxctva.png

 

Anyways, Juliean, thanks for the great tool suggestions! I couldn't try out NVIDIA's one because I needed an account (still waiting for verification), but I'll definitely try it since I have an NVIDIA card anyway.



