
MJP


#5302864 D3D11_Create_Device_Debug Question

Posted by MJP on 27 July 2016 - 10:24 PM

Let's try and keep it friendly and on-topic here.  :)

 

To get back to the question being asked...have you tried forcing an error from the debug layer? It should be pretty easy to do: just bind a texture as both a render target and a shader resource simultaneously, or use some incorrect parameters when creating a resource. You can also tell the debug layer to break into the debugger on an error or warning, which will ensure that you're not somehow missing the message:

 

// Grab the device's info queue interface (only available when the debug layer is active)
ID3D11InfoQueue* infoQueue = nullptr;
DXCall(device->QueryInterface(__uuidof(ID3D11InfoQueue), reinterpret_cast<void**>(&infoQueue)));

// Break into the debugger whenever a warning or error message is emitted
infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, TRUE);
infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
infoQueue->Release();
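
Note that ID3D11InfoQueue is only available if the device was created with the debug layer enabled. As a minimal sketch (reusing the DXCall macro from above), device creation would look something like this:

UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG; // enables the debug layer, and with it ID3D11InfoQueue
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
DXCall(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                         nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context));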



#5302223 Directx 11, 11.1, 11.2 Or Directx 12

Posted by MJP on 23 July 2016 - 03:36 PM

So there are two separate concepts here that you need to be aware of: the API, and the supported feature set. The API determines the set of possible D3D interfaces you can use, and the functions on those interfaces. Which API you can use is primarily dictated by the version of Windows that your program is running on, but it can also depend on the driver. The feature set tells you which functionality is actually supported by the GPU and its driver. In general, the API version dictates the maximum feature set that can be available to your app. So if you use D3D11.3 instead of D3D11.0, there are more functions and therefore more potential functionality available to you. However, using a newer API doesn't guarantee that the functionality will actually be supported by the hardware. As an example, take GPUs that use Nvidia's Kepler architecture: their drivers support D3D12 if you run on Windows 10, but if you query the feature level it will report as D3D_FEATURE_LEVEL_11_0. This means that you can't use features like conservative rasterization, even though the API exposes them.
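
As a concrete sketch of the difference: when creating a D3D11 device you pass the feature levels you can work with, and the runtime tells you the highest one the GPU and driver actually support:

// Ask for the highest of these feature levels that the hardware supports
const D3D_FEATURE_LEVEL requestedLevels[] = { D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0 };
D3D_FEATURE_LEVEL supportedLevel = D3D_FEATURE_LEVEL_11_0;
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                               requestedLevels, ARRAYSIZE(requestedLevels),
                               D3D11_SDK_VERSION, &device, &supportedLevel, &context);
// On success, supportedLevel reports the feature set, independent of the API version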

 

So to answer your questions in order:

 

1. You should probably choose your minimum API based on OS support. If you're okay with being Windows 10-only, then you can just target D3D11.3 or D3D12. If you want to run on Windows 7, then you'll need to support D3D11.0 as your minimum. However, you can still support different rendering paths by querying the supported API and feature set at runtime. Either way you'll probably need fallback paths if you want to use new functionality like conservative rasterization, because the API doesn't guarantee that the functionality is supported: you need to query for it at runtime to ensure that your GPU can do it. This is true even in D3D12.
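
For example, checking for conservative rasterization support through D3D11.3 looks something like this (a sketch, assuming a device created on Windows 10):

D3D11_FEATURE_DATA_D3D11_OPTIONS2 options2 = {};
if(SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2,
                                         &options2, sizeof(options2))))
{
    if(options2.ConservativeRasterizationTier != D3D11_CONSERVATIVE_RASTERIZATION_NOT_SUPPORTED)
    {
        // Safe to enable conservative rasterization in your rasterizer state
    }
}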

 

Regarding 11.3 vs 12: D3D12 is very, very different from D3D11, and generally much harder to use even for relatively simple tasks. I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs. And to answer your follow-up question "does 11.3 hardware support 12 as well": there really isn't any such thing as "11.3 hardware". Like I mentioned earlier, 11.3 is just an API, not a mandated feature set. So you can use D3D11.3 to target hardware with FEATURE_LEVEL_11_0; you'll just get runtime failures if you try to use functionality that's not supported.

 

2. You can call QueryInterface at runtime to get one interface version from another. You can either do it in advance and store separate pointers for each version, or call it as needed.
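
For instance, a minimal sketch of grabbing the 11.1 device interface from the base one:

ID3D11Device1* device1 = nullptr;
if(SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Device1),
                                    reinterpret_cast<void**>(&device1))))
{
    // device1 exposes the D3D11.1 functions; Release() it when you're done
}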

 

3. Yes, you can still call the old version of those functions. Just remember that the new functionality may not be supported by the hardware/driver, so you need to query for support. In the case of the constant buffer functionality added for 11.1, you can query by calling CheckFeatureSupport with D3D11_FEATURE_D3D11_OPTIONS, and then checking the appropriate members of the returned D3D11_FEATURE_DATA_D3D11_OPTIONS structure.
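
A sketch of that query:

D3D11_FEATURE_DATA_D3D11_OPTIONS options = {};
if(SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS,
                                         &options, sizeof(options))))
{
    // These members report support for the 11.1 constant buffer features
    BOOL partialUpdates = options.ConstantBufferPartialUpdate;
    BOOL largeOffsets = options.ConstantBufferOffsetting;
}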




#5302009 How To Suppress Dx9 And Sdk 8 Conflict Warnings?

Posted by MJP on 22 July 2016 - 01:33 PM

See this.




#5301413 Compute Shader Output To Stencil Buffer

Posted by MJP on 19 July 2016 - 03:17 PM

You definitely can't directly write into a stencil buffer. Depth-stencil buffers can't be used as UAVs or RTVs, so the only way to write to them is through copies or normal depth/stencil operations. I don't think that you can do it through a copy either: copying to a resource requires using a format from the same family, and none of the formats in the same family as the depth/stencil formats support UAVs or RTVs.

 

There is the new SV_StencilRef semantic that lets a pixel shader directly specify the stencil ref value, which you could use to write specific values into a stencil buffer. But it's only available in D3D11.3 and D3D12 (both Windows 10-only), and I believe it's only supported by AMD hardware at the moment.
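
If you do want to use it from D3D11.3, support can be queried like this (a sketch):

D3D11_FEATURE_DATA_D3D11_OPTIONS2 options2 = {};
if(SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2,
                                         &options2, sizeof(options2))))
{
    BOOL stencilRefFromPS = options2.PSSpecifiedStencilRefSupported;
}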




#5300056 Blur computer shader, can't figured out warning

Posted by MJP on 10 July 2016 - 06:38 PM

I don't see you clearing out the PS shader resources anywhere. You basically need to do this:

 

// Unbind slot 0 of the pixel shader stage by binding a NULL SRV
ID3D11ShaderResourceView* nullSRVs[1] = { nullptr };
context->PSSetShaderResources(0, 1, nullSRVs);



#5299925 Blur computer shader, can't figured out warning

Posted by MJP on 09 July 2016 - 07:26 PM

That means that you have a resource that's bound as an SRV (input) for the pixel shader stage, and you're trying to bind it as a UAV (output) for the compute shader stage. The debug layer always complains about these situations, because you're not allowed to have the same resource simultaneously bound as an input and an output. You'll just need to clear out the PS SRVs before you bind your CS UAVs, which you do by binding NULL pointers to those slots.




#5299845 Is it possible “Update” texture in one pass?

Posted by MJP on 08 July 2016 - 10:44 PM

You can't create a UAV for a multisampled texture, so you can't write to one directly from a shader. Even if you could, you can't read from an fp16 UAV texture unless the GPU supports extended formats for UAV reads, and you're running on D3D11.3 or D3D12. 
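
On D3D11.3 you can check for that extended-format UAV load support like this (a sketch, using the fp16 format as an example):

D3D11_FEATURE_DATA_FORMAT_SUPPORT2 support2 = {};
support2.InFormat = DXGI_FORMAT_R16G16B16A16_FLOAT;
if(SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_FORMAT_SUPPORT2,
                                         &support2, sizeof(support2))))
{
    bool typedUAVLoads = (support2.OutFormatSupport2 & D3D11_FORMAT_SUPPORT2_UAV_TYPED_LOAD) != 0;
}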




#5299803 Techniques used for precomputing lightmaps

Posted by MJP on 08 July 2016 - 01:34 PM

I find that path tracers are easy to understand, and can be relatively simple to implement. You can check out Physically Based Rendering if you're looking for a good book on the subject, or you can check out pbrt or Mitsuba if you'd like to look at the code for a working implementation. Aside from being straightforward, path tracers have a few nice properties:

 

  • If they're unbiased, then adding more samples will always converge towards the correct result. So adding more rays means better quality. This is not the case for photon mapping, which is a biased rendering method.
  • You can pretty much handle any shading model, and by using importance sampling techniques you can improve convergence as well.
  • Depending on how you integrate, it's possible to write a progressive renderer. This means that you can show low-quality results right away, and continuously update those results as more rays come in.

Probably the biggest downside of path tracing is that a simple implementation can be rather noisy compared to some other techniques, especially for certain scenes where the light transport is particularly complicated. At the very least you'll typically need some form of importance sampling, and more complex scenes may require bidirectional path tracing in order to converge more quickly.




#5299437 ddx, ddy reuse for better performance?

Posted by MJP on 06 July 2016 - 11:44 PM

Historically, tex2Dgrad has been slower than a normal tex2D, even when they produce equivalent results. Explicitly specifying gradients potentially requires the shader core to send quite a bit more per-thread data (6 floats vs. 2 floats for the 2D case), and on some older GPUs this caused a performance penalty. I'm not sure if it's still slower on newer GPUs, but personally I would still avoid it in order to avoid unneeded register pressure.




#5299435 [Solved][D3D12] Triangle not rendering, vertex data not given to GPU

Posted by MJP on 06 July 2016 - 11:30 PM

The first thing you should do is enable the debug layer before creating your device. The debug layer will output helpful messages when you use the API incorrectly. You can enable it like this:

 

ID3D12Debug* d3d12debug = nullptr;
if(SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&d3d12debug))))
{
    // Must be called before D3D12CreateDevice for the debug layer to take effect
    d3d12debug->EnableDebugLayer();
    d3d12debug->Release();
    d3d12debug = nullptr;
}



#5299265 Per-Pixel eye vectors for fullscreen quad

Posted by MJP on 05 July 2016 - 11:21 PM

You need to undo your projection, not the view transform. To do that you can take your pixel position in normalized device coordinates (bottom left is (-1,-1), top right is (1,1)), transform it by the inverse of your projection matrix, and then divide by w. Just make sure you normalize the result, since it won't be a unit vector.
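
A minimal sketch of that using DirectXMath (projMatrix and the NDC coordinates are placeholders for your own values):

// Assumes #include <DirectXMath.h> and using namespace DirectX
XMVECTOR ndcPos = XMVectorSet(ndcX, ndcY, 1.0f, 1.0f); // z = 1 picks a point on the far plane
XMMATRIX invProj = XMMatrixInverse(nullptr, projMatrix);
XMVECTOR viewPos = XMVector3TransformCoord(ndcPos, invProj); // transforms, then divides by w
XMVECTOR eyeVec = XMVector3Normalize(viewPos); // view-space direction from the eye through the pixel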




#5298166 New Post about Gamma Correction

Posted by MJP on 26 June 2016 - 06:48 PM

I just came across this excellent presentation, which anyone who's interested in color spaces and proper terminology should read.




#5298011 Basic texturing

Posted by MJP on 25 June 2016 - 12:28 PM

When you declare a texture in your HLSL shader code and compile it with the shader compiler, the compiler will assign the texture to a t# register. There are several register types, but the t# registers are always used for shader resource views. By default, the compiler will assign the registers sequentially based on the order in which you declared your textures. So if you have TextureA, TextureB, and TextureC all declared in a row, then they'll get assigned to t0, t1, and t2 respectively. You can also explicitly tell the compiler which register you'd like to use with the "register" keyword, like this:

 

Texture2D ObjTexture : register(t0);

 

Now the reason that the registers are important is that they exactly correspond to the binding slots used for PSSetShaderResources. So if you call PSSetShaderResources with StartSlot set to 3 and NumViews set to 2, then you will bind shader resource views to registers t3 and t4. In your case, the texture will get assigned to t0, so you can just pass 0 for StartSlot and 1 for NumViews, and then pass along a single-element array containing your shader resource view pointer.
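
In code that would look something like this (a minimal sketch, where mySRV stands in for your shader resource view pointer):

ID3D11ShaderResourceView* srvs[1] = { mySRV };
context->PSSetShaderResources(0, 1, srvs); // StartSlot = 0, NumViews = 1 -> binds register t0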

 

Sampler states work exactly the same way, except that they use a different set of registers and binding slots. Samplers will use registers s0 through s15, and they will correspond to the binding slots of PSSetSamplers.

 

The way that the binding slots work is that they're persistent on your device context even if you change shaders. So if you bind shader A, set 3 textures, and then draw, those same 3 textures will still be bound if you bind shader B. If you want to un-bind those textures, you need to do it by passing an array of NULL pointers to PSSetShaderResources (or by calling ID3D11DeviceContext::ClearState, which will clear all bindings for all shader stages).

 

Finally, one thing to keep in mind for advanced scenarios is that it's possible to query a shader's reflection data to find out which textures exist and which registers they were assigned to. To do that, you need to use the ID3D11ShaderReflection interface and call GetResourceBindingDesc/GetResourceBindingDescByName.
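
A sketch of that, assuming you still have the compiled shader bytecode around (bytecode/bytecodeSize are placeholders):

ID3D11ShaderReflection* reflector = nullptr;
DXCall(D3DReflect(bytecode, bytecodeSize, __uuidof(ID3D11ShaderReflection),
                  reinterpret_cast<void**>(&reflector)));

D3D11_SHADER_INPUT_BIND_DESC bindDesc = {};
DXCall(reflector->GetResourceBindingDescByName("ObjTexture", &bindDesc));
// bindDesc.BindPoint holds the t# register that the compiler assigned
reflector->Release();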




#5297634 Two constant buffers - cant get it to work

Posted by MJP on 22 June 2016 - 03:14 PM

Does that shader not emit any warnings on compile?

 

The older versions of the shader compiler (pre-Windows 10) didn't warn you about this at all; they would just silently ignore your register assignment and assign registers automatically. The latest version of d3dcompiler_47 will give you a proper error message.




#5297399 New Post about Gamma Correction

Posted by MJP on 20 June 2016 - 08:35 PM

I would recommend being careful when explaining what sRGB is. A lot of people are under the mistaken impression that it's just the transfer function (AKA the "gamma curve"), but being an RGB color space, it also specifies the chromaticities of the primaries. So you can have the situation where you use the primaries but not the transfer function, which is what people usually mean when they refer to "linear" space. Or you can have other standards (like Rec. 709) that use the same primaries, but have a different transfer function. You generally don't have to worry about that until you need to work in another color space, and then things can get confusing if you don't understand what the color space is actually specifying.
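
For reference, the transfer function part of sRGB is the piecewise curve below, shown here decoding an encoded value in [0, 1] back to linear intensity:

#include <cmath>

// Decode an sRGB-encoded value in [0, 1] to linear intensity
float SRGBToLinear(float srgb)
{
    return (srgb <= 0.04045f) ? srgb / 12.92f
                              : std::pow((srgb + 0.055f) / 1.055f, 2.4f);
}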





