MJP

Member Since 29 Mar 2007

Posts I've Made

In Topic: D3D11_Create_Device_Debug Question

Yesterday, 10:24 PM

Let's try and keep it friendly and on-topic here.  :)

 

To get back to the question being asked...have you tried forcing an error from the debug layer? It should be pretty easy to do this: just bind a texture as both a render target and a shader resource simultaneously, or use some incorrect parameters when creating a resource. You can also tell the debug layer to break into the debugger on an error or warning, which will ensure that you're not somehow missing the message:

 

// Grab the debug info queue from the device (requires the device to have been
// created with the D3D11_CREATE_DEVICE_DEBUG flag)
ID3D11InfoQueue* infoQueue = nullptr;
DXCall(device->QueryInterface(__uuidof(ID3D11InfoQueue), reinterpret_cast<void**>(&infoQueue)));

// Break into the debugger whenever a warning or error is emitted
infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, TRUE);
infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
infoQueue->Release();
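
For completeness, here's a minimal sketch of creating the device with the debug layer enabled in the first place (using the same DXCall error-checking macro as above; the debug layer requires the SDK layers to be installed):

UINT flags = 0;
#if defined(_DEBUG)
    flags |= D3D11_CREATE_DEVICE_DEBUG;  // enables the debug layer and info queue
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL featureLevel = { };
DXCall(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                         nullptr, 0, D3D11_SDK_VERSION,
                         &device, &featureLevel, &context));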

In Topic: Directx 11, 11.1, 11.2 Or Directx 12

23 July 2016 - 03:36 PM

So there are two separate concepts here that you need to be aware of: the API, and the supported feature set. The API determines the set of possible D3D interfaces you can use, and the functions on those interfaces. Which API you can use is primarily dictated by the version of Windows that your program is running on, but it can also depend on the driver. The feature set tells you which functionality is actually supported by the GPU and its driver. In general, the API version dictates the maximum feature set that can be available to your app. So if you use D3D11.3 instead of D3D11.0, there are more functions and therefore more potential functionality available to you. However, using a newer API doesn't guarantee that the functionality will actually be supported by the hardware. As an example, take GPUs based on Nvidia's Kepler architecture: their drivers support D3D12 on Windows 10, but if you query the feature level it will report FEATURE_LEVEL_11_0. This means that you can't use features like conservative rasterization, even though the API exposes them.
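
To make that concrete, here's a minimal sketch of the runtime checks, assuming 'device' is an existing ID3D11Device and you're building with the Windows 10 SDK headers:

// The feature level reports what the hardware/driver actually supports,
// regardless of which API version you compiled against
D3D_FEATURE_LEVEL featureLevel = device->GetFeatureLevel();

// Optional features like conservative rasterization have their own caps,
// reported through the D3D11_OPTIONS2 struct
D3D11_FEATURE_DATA_D3D11_OPTIONS2 options2 = { };
if(SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2,
                                         &options2, sizeof(options2))))
{
    bool conservativeRaster = options2.ConservativeRasterizationTier !=
                              D3D11_CONSERVATIVE_RASTERIZATION_NOT_SUPPORTED;
}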

 

So to answer your questions in order:

 

1. You should probably choose your minimum API based on the OS support. If you're okay with being Windows 10-only, then you can just target D3D11.3 or D3D12 and that's fine. If you want to run on Windows 7, then you'll need to support D3D11.0 as your minimum. However, you can still support different rendering paths by querying the supported API and feature set at runtime. Either way, you'll probably need fallback paths if you want to use new functionality like conservative rasterization, because the API doesn't guarantee that the functionality is supported: you need to query for it at runtime (as in the sketch above) to ensure that your GPU can do it. This is true even in D3D12.

 

Regarding 11.3 vs 12: D3D12 is very, very different from D3D11, and generally much harder to use even for relatively simple tasks. I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs. And to answer your follow-up question "does 11.3 hardware support 12 as well": there really isn't any such thing as "11.3 hardware". Like I mentioned earlier, 11.3 is just an API, not a mandated feature set. So you can use D3D11.3 to target hardware with FEATURE_LEVEL_11_0, you'll just get runtime failures if you try to use functionality that's not supported.

 

2. You can call QueryInterface at runtime to get one interface version from another. You can either do it in advance and store separate pointers for each version, or you can call it as-needed (see the sketch below).

 

3. Yes, you can still call the old versions of those functions. Just remember that the new functionality may not be supported by the hardware/driver, so you need to query for support. In the case of the constant buffer functionality added for 11.1, you can query by calling CheckFeatureSupport with D3D11_FEATURE_D3D11_OPTIONS, and then checking the appropriate members of the returned D3D11_FEATURE_DATA_D3D11_OPTIONS structure. There's a sketch of both of these below.
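
Here's a minimal sketch covering both 2 and 3, assuming 'device' is an existing ID3D11Device (the two constant buffer members checked are the ones relevant to the 11.1 functionality):

// 2. Query a newer interface version from the one you already have
ID3D11Device1* device1 = nullptr;
if(SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Device1),
                                    reinterpret_cast<void**>(&device1))))
{
    // The 11.1 interface is available; remember to Release() it when done
}

// 3. Check whether the 11.1 constant buffer functionality is actually supported
D3D11_FEATURE_DATA_D3D11_OPTIONS options = { };
if(SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS,
                                         &options, sizeof(options))))
{
    bool cbOffsetting = options.ConstantBufferOffsetting != FALSE;
    bool cbPartialUpdate = options.ConstantBufferPartialUpdate != FALSE;
}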


In Topic: How To Suppress Dx9 And Sdk 8 Conflict Warnings?

22 July 2016 - 01:33 PM

See this.


In Topic: Compute Shader Output To Stencil Buffer

19 July 2016 - 03:17 PM

You definitely can't directly write into a stencil buffer. Depth-stencil buffers can't be used as UAVs or RTVs, so the only way to write to them is through copies or normal depth/stencil operations. I don't think that you can do it through a copy either: copying to a resource requires using a format from the same family, and none of the formats in the same family as the depth/stencil formats support UAVs or RTVs.

 

There is the new SV_StencilRef semantic that lets a pixel shader directly specify the stencil ref value, which you could use to write specific values into a stencil buffer. But it's only available in D3D11.3 and D3D12 (Windows 10-only), and I believe it's only supported by AMD hardware at the moment.
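
If you do want to use SV_StencilRef where it's available, the support check is another CheckFeatureSupport call. A minimal sketch, assuming 'device' is your ID3D11Device and you're building with the Windows 10 SDK headers:

D3D11_FEATURE_DATA_D3D11_OPTIONS2 options2 = { };
if(SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2,
                                         &options2, sizeof(options2))))
{
    // TRUE if the pixel shader can output SV_StencilRef
    bool stencilRefSupported = options2.PSSpecifiedStencilRefSupported != FALSE;
}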


In Topic: Blur computer shader, can't figured out warning

10 July 2016 - 06:38 PM

I don't see you clearing the PS shader resource bindings anywhere. You basically need to do this:

 

// Unbind the SRVs by binding null views over the slots that were used
ID3D11ShaderResourceView* nullSRVs[1] = { nullptr };
context->PSSetShaderResources(0, 1, nullSRVs);
