
dr4cula

Member Since 24 Jul 2013
Offline Last Active Feb 02 2016 01:45 PM

Posts I've Made

In Topic: D3D12 Best Practices

03 October 2015 - 07:52 AM

Thought I shouldn't start a new topic as this is a fairly generic DX12 question: if I put my CBV in the root signature, does the driver version the data automatically, i.e. would I no longer need a per-frame constant buffer resource?

Thanks!

You will still need a per-frame constant buffer resource; you just won't need a per-frame entry in a descriptor table for that CBV.

The only way to avoid a constant buffer resource entirely is to store constants directly inside the root signature, but root signature memory is very limited, so you won't be able to store everything there.
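
To make the distinction concrete, here's a minimal sketch of the two root parameter flavours (register assignments and names are illustrative, not taken from any of the code above):

// Root CBV: one root argument holding a GPU virtual address. The buffer
// resource behind that address still has to exist (and be versioned per frame).
D3D12_ROOT_PARAMETER rootCbv = {};
rootCbv.ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV;
rootCbv.Descriptor.ShaderRegister = 0; // b0
rootCbv.Descriptor.RegisterSpace = 0;
rootCbv.ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

// Root constants: the values live in the root signature itself, so no buffer
// resource is needed - but the whole root signature is capped at 64 DWORDs,
// so this only works for a handful of values.
D3D12_ROOT_PARAMETER rootConsts = {};
rootConsts.ParameterType = D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS;
rootConsts.Constants.ShaderRegister = 1; // b1
rootConsts.Constants.RegisterSpace = 0;
rootConsts.Constants.Num32BitValues = 4; // e.g. one float4
rootConsts.ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

// At draw time (root parameter indices 0 and 1 assumed):
// commandList->SetGraphicsRootConstantBufferView(0, cb->GetGPUVirtualAddress());
// commandList->SetGraphicsRoot32BitConstants(1, 4, &myFloat4, 0);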

Thanks for your quick reply! I was a bit confused by the wording from MS here:

"By putting something in the root signature, the application is merely handing the versioning responsibility to the driver, but this is infrastructure that they already have."

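(For anyone else confused by that quote: my reading, given the answer above, is that the driver versions the root argument itself, i.e. the GPU virtual address you set, while the memory that address points at still has to be kept alive per frame by the app. A rough sketch of that pattern, with hypothetical names:)

// Hypothetical per-frame versioning for a root CBV: one upload-heap buffer
// per buffered frame, with the root argument re-pointed each frame so the
// GPU can keep reading frame N-1's constants while the CPU writes frame N's.
struct SceneConstants { float mvp[16]; };

void UpdateAndBindConstants(ID3D12GraphicsCommandList* cmdList, UINT frameIndex,
                            const SceneConstants& constants,
                            void* const cbMapped[],                    // persistently mapped pointers
                            const D3D12_GPU_VIRTUAL_ADDRESS cbGpuVA[]) // matching GPU addresses
{
    // Write into this frame's copy only; earlier frames may still be in flight.
    memcpy(cbMapped[frameIndex], &constants, sizeof(constants));

    // "Version" the root argument by pointing the root CBV (parameter 0 here)
    // at this frame's copy.
    cmdList->SetGraphicsRootConstantBufferView(0, cbGpuVA[frameIndex]);
}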

In Topic: D3D12 Best Practices

02 October 2015 - 04:54 AM

Thought I shouldn't start a new topic as this is a fairly generic DX12 question: if I put my CBV in the root signature, does the driver version the data automatically, i.e. would I no longer need a per-frame constant buffer resource?

Thanks!


In Topic: D3D12 Best Practices

20 September 2015 - 09:47 AM

So I've been doing further testing with the samples, and when I compare the fullscreen modes in those applications to my DX11 equivalents, the differences are anything but marginal:

Avg FPS DX11 (custom app): 8575
Avg FPS DX12 (Bundles app): 4518

Both versions use the same presentation model (FLIP_DISCARD), so I'm not sure where the difference comes from. I've only enabled the render target clear and Present operations; everything else is commented out in both apps.
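
For context, roughly what the FLIP_DISCARD setup looks like (values here are illustrative, not my exact code):

DXGI_SWAP_CHAIN_DESC1 scDesc = {};
scDesc.BufferCount = 2; // bufferCount_ in my code
scDesc.Width = width;
scDesc.Height = height;
scDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
scDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; // same model in both apps
scDesc.SampleDesc.Count = 1;

Microsoft::WRL::ComPtr<IDXGISwapChain1> swapChain;
// DX12 quirk: the swap chain is created on the direct command queue, not the device.
THROW_FAILED(dxgiFactory->CreateSwapChainForHwnd(
    directCommandQueue_.Get(), hwnd, &scDesc, nullptr, nullptr, &swapChain));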

Furthermore, following the idea of per-frame CBs: if I render to a texture, should I also queue those render targets on a per-frame basis, similar to the constant buffers? (nvm: the Multithreading sample gave me the answer that yes, this is indeed the case.)
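
A sketch of that pattern for render-to-texture targets (names are placeholders; resource creation details omitted):

// One intermediate render target per buffered frame, mirroring the per-frame
// constant buffers: frame N's render-to-texture pass must not overwrite a
// texture the GPU may still be reading for frame N-1.
std::vector<Microsoft::WRL::ComPtr<ID3D12Resource>> intermediateRTs(bufferCount_);

// Each frame, render into (and later sample from) this frame's copy only:
ID3D12Resource* rt = intermediateRTs[currFrameIndex_].Get();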

Thanks in advance!


In Topic: D3D12 Best Practices

09 September 2015 - 12:50 PM

Thank you both for answering!

I'm still a bit confused about the synchronization though - could you please explain it in more detail?

I've implemented the following NextFrame() sync function that is called immediately after Present():

bufIndex_ = swapChain_->GetCurrentBackBufferIndex();

// Record the fence value that marks the end of the frame just submitted,
// then signal it on the queue and bump the value for the next frame.
frameFences_[currFrameIndex_] = fenceValue_;
THROW_FAILED(directCommandQueue_->Signal(fence_.Get(), fenceValue_));
++fenceValue_;

// Advance to the next frame slot.
currFrameIndex_ = (currFrameIndex_ + 1) % bufferCount_;
UINT64 lastCompletedFence = fence_->GetCompletedValue();

// If the GPU hasn't yet finished the frame that last used this slot,
// block until it has before reusing the slot's resources.
if ((frameFences_[currFrameIndex_] != 0) && (frameFences_[currFrameIndex_] > lastCompletedFence)) {
    THROW_FAILED(fence_->SetEventOnCompletion(frameFences_[currFrameIndex_], fenceEvent_));
    WaitForSingleObject(fenceEvent_, INFINITE);
}
By increasing bufferCount_ to 3 I can reach 120 FPS, and with 4 I can reach 180 FPS, but going any higher than that doesn't gain me anything further.

I'm also not sure how to interpret the FPS tracing I've set up: if I count the number of times my Render() loop is reached in one second, I get 180 FPS, but when I query the actual time passed, I get much higher values for some frames. See below, where the first number is the FPS based on the actual time spent in the Render() call and the one in brackets is the number of times Render() got called per second. I assume the second is the actual FPS, but I'm not quite sure how to interpret the first one - does it even mean anything? (Similar output can be produced in the samples by printing 1.0f / m_timer.GetElapsedSeconds() instead of GetFramesPerSecond().)

FPS: 65 (182)
FPS: 1663 (182)
FPS: 1693 (182)
FPS: 65 (182)
FPS: 1294 (182)
FPS: 2066 (182)
FPS: 63 (182)
FPS: 1058 (182)
FPS: 1739 (182)
FPS: 64 (182)
FPS: 1741 (182)
FPS: 2245 (182)
FPS: 65 (182)

So I still have no idea how my D3D12 app compares against my D3D11 app: for the D3D11 app I use the 1.0f / GetElapsedSeconds() method to measure the maximum framerate, but what is the equivalent of that in D3D12?
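
One way to get a comparable number in both APIs is to average over a fixed window instead of inverting a single frame's delta; a sketch (the spiky 1/dt values presumably come from frames that block in the fence wait versus buffered frames that don't):

#include <windows.h>

// Returns the average FPS over the last completed ~1 second window.
// Call once per Render(). Per-frame 1.0f / dt is noisy because frames
// that block on the fence look slow and the ones in between look fast.
double TickFps()
{
    static LARGE_INTEGER freq = [] { LARGE_INTEGER f; QueryPerformanceFrequency(&f); return f; }();
    static LARGE_INTEGER windowStart = [] { LARGE_INTEGER t; QueryPerformanceCounter(&t); return t; }();
    static int frames = 0;
    static double fps = 0.0;

    ++frames;
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    const double elapsed = double(now.QuadPart - windowStart.QuadPart) / double(freq.QuadPart);
    if (elapsed >= 1.0) {
        fps = frames / elapsed; // frames completed per second over the window
        frames = 0;
        windowStart = now;
    }
    return fps;
}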

Also, I decided to have a look at the GPU execution times with the VS2015 debugger (Debug -> Start Diagnostic Tools Without Debugging), but all of the Event Names show up as Unknown, so I can't really tell which API calls are which. Is this feature not yet supported for D3D12?

Thanks in advance once more!


In Topic: Bezier Teapot Tessellation

07 April 2015 - 11:49 AM

The teapot is "broken", or rather, not everything should be a quad patch. The patches at the top degenerate to triangles. That's why the derivatives can fail and produce those artifacts. I remember "solving" it by clamping the domain values for the derivatives to (e, 1-e) with some small e.

Ah, I see! Yes, when I was looking at the model in wireframe the top part did indeed look triangular, but I didn't associate that with the fact that the tessellator is using the quad domain.

Your "solution" worked out great:

// clamp the domain coordinates away from the degenerate patch corners
// so the derivative evaluation stays well-defined
static const float epsilon = 1e-5f;
float u = min(max(coordinates.x, epsilon), 1.0f - epsilon);
float v = min(max(coordinates.y, epsilon), 1.0f - epsilon);
Thanks a lot! :)
