
dr4cula

Member Since 24 Jul 2013
Offline Last Active Feb 02 2016 01:45 PM

Topics I've Started

D3D12 Best Practices

07 September 2015 - 02:11 PM

Hi all,

 

I've been playing around with D3D12 and, while going through the samples, I've run into a couple of questions that I've yet to find answers for:

 

1) How many descriptor heaps should an application have?

Is it vital for an app to have a single SRV/RTV/DSV heap, or is it fine to have several smaller heaps tied to specific rendering tasks? I'm specifically wondering whether cmdList->SetDescriptorHeaps() can cause cache coherency issues. I remember reading somewhere that an app should have only one heap of each type, but I can't remember where I saw it, so my memory might just be letting me down at this point.
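For clarity, this is roughly the single-heap setup I had in mind; the heap size and the device/commandList parameters are placeholders, so treat it as a sketch of the idea rather than code lifted from the samples:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

// One big shader-visible CBV/SRV/UAV heap created at init (4096 is an arbitrary size).
Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> CreateMainSrvHeap(ID3D12Device* device)
{
    D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
    heapDesc.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    heapDesc.NumDescriptors = 4096;
    heapDesc.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> heap;
    device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&heap));
    return heap;
}

// Set exactly once per command list; draws then select sub-ranges via
// SetGraphicsRootDescriptorTable() instead of switching heaps mid-frame.
void BindMainHeap(ID3D12GraphicsCommandList* commandList, ID3D12DescriptorHeap* srvHeap)
{
    ID3D12DescriptorHeap* heaps[] = { srvHeap };
    commandList->SetDescriptorHeaps(_countof(heaps), heaps);
}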

 

2) How should constant buffers be handled?

Throughout the samples I noticed that the applications often create several constant buffers from the exact same structure for different draw calls: e.g. instead of calling Map() once at application init and then using memcpy() to load per-draw-call data into a single constant buffer, the apps seem to create n constant buffers and use descriptor tables to reference the correct one. Is that the way it should be done, or have I misunderstood something (e.g. see the Bundles sample)?
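For reference, the alternative I had in mind looks something like this. It's only a sketch: the struct, the root parameter index and the pre-created upload buffer are my own placeholders, and it binds a root CBV rather than a descriptor table:

#include <windows.h>
#include <d3d12.h>
#include <cstring>

struct PerDrawConstants { float worldViewProj[16]; };

// Write draw i's constants into one big upload buffer that was Map()ed once at init,
// then bind that 256-byte-aligned slice directly as a root CBV.
void SetDrawConstants(ID3D12GraphicsCommandList* commandList, ID3D12Resource* cbUpload,
                      UINT8* mappedBase, UINT drawIndex, const PerDrawConstants& data)
{
    const UINT alignedSize = (sizeof(PerDrawConstants) + 255) & ~255; // CBV addresses must be 256-byte aligned
    std::memcpy(mappedBase + drawIndex * alignedSize, &data, sizeof(data));
    commandList->SetGraphicsRootConstantBufferView(
        0, cbUpload->GetGPUVirtualAddress() + drawIndex * alignedSize);
}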

 

3) More generally, how should frame resources be handled?

This follows from the fact that the apps seem to create n copies of the resources used per frame: e.g. with double-buffered rendering, the constant buffer descriptor heap size is given as 2 * numCBsPerFrame (where numCBsPerFrame covers the CBs used by the different draw calls), and the number of command lists seems to be scaled in a similar manner. What is the reason for doing this? I think it has something to do with CPU-GPU synchronization, i.e. preventing read/write clashes, but I'm not sure.
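Writing down my current guess at the pattern (the names here are mine, not the samples'): everything the CPU writes while the GPU might still be reading gets one copy per frame in flight, together with the fence value that says when that copy is free again.

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdint>

// One of these per buffered frame; with double buffering there are two,
// which would explain the 2 * numCBsPerFrame descriptor heap sizing.
struct FrameResource
{
    Microsoft::WRL::ComPtr<ID3D12CommandAllocator> commandAllocator; // reset only once the GPU is done with it
    Microsoft::WRL::ComPtr<ID3D12Resource>         constantBuffer;   // this frame's slice of per-draw CBs
    uint64_t                                       fenceValue = 0;   // fence value signalled after this frame's work
};

FrameResource g_frames[2]; // indexed by the swap chain's current back buffer index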

 

4) What would be the suggested synchronization method?

I'm currently using the one provided in the HelloWorld samples, i.e. waiting for the GPU to finish before continuing to the next frame. This clearly isn't the way to go, as my fullscreen D3D11 app runs at ~6k FPS whereas my D3D12 app runs at ~3k FPS. Furthermore, how would one achieve maximum framerate in windowed mode? I've seen this video but I don't really follow the logic: taking the last option, wouldn't rendering something just for the sake of rendering cause possible stalls? Is the swapchain's GetFrameLatencyWaitableObject() useful here?
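My current guess at what should replace the wait-for-idle (a sketch with assumed fence/event/queue objects, not code from the samples): only block when the frame slot about to be reused hasn't completed on the GPU yet.

#include <windows.h>
#include <d3d12.h>
#include <cstdint>

// Called before recording into a frame slot: block only if its previous submission
// hasn't completed yet (fenceValueForSlot was recorded when that slot was submitted).
void WaitForFrameSlot(ID3D12Fence* fence, HANDLE fenceEvent, uint64_t fenceValueForSlot)
{
    if (fence->GetCompletedValue() < fenceValueForSlot)
    {
        fence->SetEventOnCompletion(fenceValueForSlot, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}

// Called after submitting a frame slot's command lists: remember the fence value
// this slot has to wait on before it can be reused.
uint64_t SignalFrameSlot(ID3D12CommandQueue* queue, ID3D12Fence* fence, uint64_t& nextFenceValue)
{
    const uint64_t value = ++nextFenceValue;
    queue->Signal(fence, value);
    return value;
}

As far as I can tell, GetFrameLatencyWaitableObject() is a separate, DXGI-level throttle (the swap chain has to be created with DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT) that limits how far the CPU gets ahead of Present(), so it would complement rather than replace the fence above, but I'd like confirmation.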

 

Thanks in advance!


Bezier Teapot Tessellation

07 April 2015 - 10:29 AM

Hi,

 

I've been messing around with the tessellation pipeline and Bezier surfaces, and I seem to have run into a strange artefact that I just can't figure out: it only appears for the teapot model (from here), but surely this widely used model isn't broken? Here's what happens when I'm tessellating it. My method is similar to the one presented here (slides 23-25), except that the Bezier evaluation function given on those slides seems to sum the u and v basis functions the wrong way around, so my version evaluates the control points as follows:

// Accumulate the surface position and its u/v partial derivatives over the 4x4 control point grid.
int ptIndex = 0;
for (int i = 0; i < 4; ++i) {
    for (int j = 0; j < 4; ++j) {
        position += patch[ptIndex].position * bu[i] * bv[j];
        du += patch[ptIndex].position * dbu[i] * bv[j];
        dv += patch[ptIndex].position * bu[i] * dbv[j];
        ++ptIndex;
    }
}
 
Here bu/bv and dbu/dbv are the Bernstein polynomials and their derivatives evaluated at the tessellated uv parameter coordinates. If I swap the polynomial multiplication order, e.g. for the position use bv[i] * bu[j], then I get the exact same inside-out model as with the copy-pasted code from the slides (the artefact is still there; I just need to move the camera inside the model to see it). After debugging this for the entire day I'm beginning to think it might be a problem with the model, but like I said, that seems unlikely considering how widely the model is used. Does anyone have experience with Bezier teapot tessellation and could chip in? I've tried the other models from Newell's tea set, i.e. the spoon and the cup, and neither has similar artefacts. If anyone could recommend any (advanced) test Bezier models, I'd be grateful!
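In case it matters, the basis functions themselves are just the standard cubic Bernstein polynomials and their derivatives; reconstructed here in C++ rather than pasted from my shader, with bu/dbu coming from t = u and bv/dbv from t = v:

// Cubic Bernstein basis B0..B3 and derivatives dB0..dB3 at parameter t in [0, 1].
void Bernstein(float t, float b[4], float db[4])
{
    const float s = 1.0f - t;
    b[0] = s * s * s;
    b[1] = 3.0f * t * s * s;
    b[2] = 3.0f * t * t * s;
    b[3] = t * t * t;

    db[0] = -3.0f * s * s;
    db[1] =  3.0f * s * s - 6.0f * t * s;
    db[2] =  6.0f * t * s - 3.0f * t * t;
    db[3] =  3.0f * t * t;
}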
 
Thanks in advance!

Heightfield Normals

13 February 2015 - 12:59 PM

Hi,

 

I've implemented a shallow water equation solver and everything seems to be working OK, except for the normals. Here's what I mean: http://postimg.org/image/d3kah7ow5/

 

Here's how I'm calculating the normals in the vertex shader:

float3 tangent = float3(1.0f, (r - l) / (2.0f * dx), 0.0f);
float3 bitangent = float3(0.0f, (t - b) / (2.0f * dx), 1.0f);
float3 normal = cross(bitangent, tangent);
normal = normalize(normal);

r/l/t/b are the right/left/top/bottom neighbour heights.

I can fake the normal pointing more upwards by scaling the xz components, but that doesn't seem like the best way of doing it. Surely there's a better way?
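To make the question concrete, this is the direction I'm considering instead of scaling xz: fold the actual world-space grid spacing and a height scale into the slopes so the normal is balanced by construction. gridSpacing/heightScale are my own parameters, and the cross product from above collapses to (-dh/dx, 1, -dh/dz):

#include <cmath>

struct Vec3 { float x, y, z; };

// Normal from central differences with explicit world-space spacing and height scale.
// cross(bitangent, tangent) with tangent = (1, dh/dx, 0) and bitangent = (0, dh/dz, 1)
// reduces to (-dh/dx, 1, -dh/dz).
Vec3 HeightfieldNormal(float l, float r, float b, float t, float gridSpacing, float heightScale)
{
    const float sx = heightScale * (r - l) / (2.0f * gridSpacing); // dh/dx
    const float sz = heightScale * (t - b) / (2.0f * gridSpacing); // dh/dz
    const float len = std::sqrt(sx * sx + 1.0f + sz * sz);
    return { -sx / len, 1.0f / len, -sz / len };
}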

 

Thanks in advance!


Shallow Water

16 January 2015 - 11:04 AM

Hi,

 

I've been experimenting with the shallow water equations, but I can't seem to get my implementation correct. I'm following this, except I'm doing everything on the GPU. I'm not sure where I keep going wrong: see here. From my experiments with the full Navier-Stokes equations this makes some sense: I remember getting visually similar results (in 2D), where the circle forms square-like corners (I plotted a circle with velocity (1,1) at every pixel), but that only happened when I stopped the simulation after the advection step ("skipped" projection). I'm not sure what is happening here. I've tried changing the signs when sampling data as well as switching the order of operations around, but nothing seems to work. At one point I ended up with this, which is obviously not correct.

 

Here are my simulation kernels. I won't post my advection kernel, as it's the same one I used for my full NS solver; also note that the velocity kernels use a staggered grid whereby a single pixel stores the left/bottom pair of velocities (boundaries are set appropriately to account for the array size differences):
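(For reference, my understanding of the continuous update these kernels discretize is

$$
\frac{\partial h}{\partial t} = -h\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right),\qquad
\frac{\partial u}{\partial t} = -g\,\frac{\partial h}{\partial x},\qquad
\frac{\partial v}{\partial t} = -g\,\frac{\partial h}{\partial y},
$$

with the advection of h, u and v handled by the separate advection kernel mentioned above. If that reading is wrong, that might already be my problem.)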

 

UpdateHeight:

float4 PSMain(PSInput input) : SV_TARGET {
    float2 texCoord = input.position.xy * recTexDimensions.xy;

    float vL = velocity.Sample(pointSampler, texCoord).x;
    float vR = velocity.Sample(pointSampler, texCoord + float2(recTexDimensions.x, 0.0f)).x;
    float vT = velocity.Sample(pointSampler, texCoord + float2(0.0f, recTexDimensions.y)).y;
    float vB = velocity.Sample(pointSampler, texCoord).y;

    float h = height.Sample(pointSampler, texCoord).x;

    float newH = h - h * ((vR - vL) * recTexDimensions.x + (vT - vB) * recTexDimensions.y) * dt;

    return float4(newH, 0.0f, 0.0f, 0.0f);
}

UpdateU:

float4 PSMain(PSInput input) : SV_TARGET {
    float2 texCoord = input.position.xy * recTexDimensions.xy;

    float u = velocity.Sample(pointSampler, texCoord).x;

    float hL = height.Sample(pointSampler, texCoord - float2(recTexDimensions.x, 0.0f)).x;
    float hR = height.Sample(pointSampler, texCoord).x;

    float uNew = u + g * (hL - hR) * recTexDimensions.x * dt;

    return float4(uNew, 0.0f, 0.0f, 0.0f);
}

UpdateV:

float4 PSMain(PSInput input) : SV_TARGET {
    float2 texCoord = input.position.xy * recTexDimensions.xy;

    float v = velocity.Sample(pointSampler, texCoord).y;

    float hB = height.Sample(pointSampler, texCoord - float2(0.0f, recTexDimensions.y)).x;
    float hT = height.Sample(pointSampler, texCoord).x;

    float vNew = v + g * (hB - hT) * recTexDimensions.y * dt;

    return float4(0.0f, vNew, 0.0f, 0.0f);
}

 

I've literally spent the entire day debugging this and I've got no idea why nothing seems to work... Hopefully some of you guys have implemented this before and can help me out.

 

Thanks in advance!


Renormalizing bilinear interpolation weights

03 January 2015 - 12:24 PM

Hi,

 

I need to change the default bilinear interpolation functionality to only use known values for interpolation, by biasing the weights based on the contents of the 4 pixels: e.g. if one of the four pixels is (0.0, 0.0, 0.0, 0.0), then bias the lerp() towards the nonzero neighbouring pixels. At least, that's what I've understood I need to do to extrapolate velocities (stored in textures) for my fluid simulation; direct quote here:

 

 

We then go through the levels from finest to coarsest and obtain velocities by tri-linear interpolation of the velocities of the previous level using only known velocities and renormalizing the interpolation weights accordingly.

 

 

I've written a bilinear interpolation function based on what I've found on the web:

static const float2 recTexDim = float2(1.0f / 48.0f, 1.0f / 48.0f); // as a test case, the source image is 48x48

float4 bilerp(float2 texCoord) {
    float2 t = frac(texCoord / recTexDim);

    float4 tl = source.Sample(pointSampler, texCoord);
    float4 tr = source.Sample(pointSampler, texCoord + float2(recTexDim.x, 0.0f));
    float4 bl = source.Sample(pointSampler, texCoord + float2(0.0f, recTexDim.y));
    float4 br = source.Sample(pointSampler, texCoord + float2(recTexDim.x, recTexDim.y));

    return lerp(lerp(tl, tr, t.x), lerp(bl, br, t.x), t.y);
}

float4 PSMain(PSInput input) : SV_TARGET {
    float4 custom  = bilerp(input.texCoord);
    float4 builtIn = source.Sample(linearSampler, input.texCoord);

    return abs(custom - builtIn);
}

If my code were correct, the resulting image should be all black (i.e. no difference between the two interpolation functions). However, I get the following result: http://postimg.org/image/3walbutj1/

 

I'm not sure where I'm going wrong here. Also, have I even understood the "renormalizing interpolation weights" bit correctly?
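To check my understanding of the quote, here's what I think the renormalization amounts to, written as plain C++ over four already-fetched texels; treating a zero velocity as "unknown" is my assumption:

struct Vec4 { float x, y, z, w; };

// Bilinear blend of the four texels tl, tr, bl, br (in that order in px[]), but each
// weight is multiplied by a 0/1 validity flag and the result is divided by the surviving
// weight sum, so unknown texels contribute nothing instead of dragging the value to zero.
Vec4 RenormalizedBilerp(const Vec4 px[4], const bool valid[4], float tx, float ty)
{
    const float w[4] = { (1.0f - tx) * (1.0f - ty), tx * (1.0f - ty),
                         (1.0f - tx) * ty,          tx * ty };
    Vec4  sum  = { 0.0f, 0.0f, 0.0f, 0.0f };
    float wSum = 0.0f;
    for (int i = 0; i < 4; ++i)
    {
        if (!valid[i]) continue; // drop unknown samples entirely
        sum.x += w[i] * px[i].x; sum.y += w[i] * px[i].y;
        sum.z += w[i] * px[i].z; sum.w += w[i] * px[i].w;
        wSum  += w[i];
    }
    if (wSum <= 0.0f) return sum; // all four unknown: nothing to extrapolate from
    return { sum.x / wSum, sum.y / wSum, sum.z / wSum, sum.w / wSum };
}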

 

Thanks in advance!

