
MJP


#5261082 error using resource barrier from multiple commandlists for same resource

Posted by MJP on 09 November 2015 - 12:35 AM

As I understand it, the different command list types (graphics/direct, compute, and copy) can only deal with resource states that they understand. So in order to transition to or from a pixel shader resource state, you need to use a graphics command list. You'll need to perform the transitions to and from the UAV state on your graphics command list, rather than on your compute command list.
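
Here's a rough sketch of what that transition looks like on the graphics side (assuming "texture" is the shared resource and "graphicsCmdList" is a direct command list; both names are placeholders):

D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Transition.pResource = texture;                // placeholder: the shared resource
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
graphicsCmdList->ResourceBarrier(1, &barrier);         // issue on the direct command list

Once the compute work has finished (and the queues have synchronized), issue the reverse transition on the graphics command list before sampling the texture in a pixel shader.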


#5261080 RenderTargetView* error

Posted by MJP on 09 November 2015 - 12:28 AM

The reason that OMSetRenderTargets takes a pointer to a pointer (instead of just a pointer) is that it expects an array of pointers to render target views. This lets you set multiple render targets simultaneously with one API call. It's the same for all of the functions that set shader resource views, samplers, constant buffers, and unordered access views. I usually prefer to make an array on the stack to pass into those functions, like this:

ID3D11RenderTargetView* rtViews[] = { it->RenderTargetSelect() };  // one-element array of RTV pointers
d3dContext->OMSetRenderTargets(1, rtViews, nullptr);               // one RTV, no depth-stencil view
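
The same pattern extends to multiple render targets. A quick sketch, where rtv0, rtv1, and depthStencilView are hypothetical views you've already created:

ID3D11RenderTargetView* mrtViews[] = { rtv0, rtv1 };  // two RTVs bound in one call
d3dContext->OMSetRenderTargets(2, mrtViews, depthStencilView);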



#5260938 compile & send me exe

Posted by MJP on 07 November 2015 - 04:11 PM

You should read through this: Where's DXERR.LIB?


#5260200 [D3D12] Minimal Tiled Resources implementation

Posted by MJP on 02 November 2015 - 03:36 PM

Quote:
You should put that request here! https://github.com/Microsoft/DirectX-Graphics-Samples/issues


That's a good idea! I actually went to do that, and noticed that there's a new reserved resources sample that was added 7 days ago.


#5260072 Problem with D3D11_INPUT_ELEMENT_DESC

Posted by MJP on 02 November 2015 - 01:12 AM

Since you're using DXGI_FORMAT_R32G32B32A32_FLOAT, the input assembler is going to interpret your color values as-is. In other words, your vertex shader will get values like (255, 0, 0), (0, 255, 0), etc. You probably want to change all of those 255.0f's to 1.0f's.

If you were using DXGI_FORMAT_R8G8B8A8_UNORM, then you would want your RGBA values to be 1-byte unsigned integers from 0 to 255. In that case it would make sense to use values of 255, but for float formats it doesn't.
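
To illustrate, here's a sketch of the two layouts (the struct names are made up for the example):

// With DXGI_FORMAT_R32G32B32A32_FLOAT: floats are passed through as-is,
// so use normalized [0, 1] values.
struct VertexF { float pos[3]; float color[4]; };
VertexF red = { { 0.0f, 0.5f, 0.0f }, { 1.0f, 0.0f, 0.0f, 1.0f } };

// With DXGI_FORMAT_R8G8B8A8_UNORM: store bytes in [0, 255], and the input
// assembler converts them to [0, 1] floats for the shader.
struct VertexU { float pos[3]; unsigned char color[4]; };
VertexU red8 = { { 0.0f, 0.5f, 0.0f }, { 255, 0, 0, 255 } };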


#5259681 [D3D12] Placed resources

Posted by MJP on 29 October 2015 - 10:53 PM

With committed resources, each resource creates its own heap that contains only that one resource. With placed resources you create the heap separately, and then specify where each resource is located within that heap. This lets you pack multiple resources into a single heap, potentially even overlapping with each other. The overlapping part is pretty useful, since it can potentially save quite a bit of memory for render targets and other resources that are only needed for a portion of a frame. For example, say you have a low-resolution render target that you render your SSAO into, which later gets applied to your ambient lighting. Once you're done applying the AO, you can re-use that memory for, say, a post-processing render target by placing the latter resource into the same heap at the same memory location.

Some things to watch out for with placed resources:

* Resources have alignment requirements that must be honored when you place them into a heap, and also affect the expected size of a given resource. For buffers the alignment is always 64KB, but for textures it varies from 4KB to 4MB. You can ask the device for the alignment and size of a resource using GetResourceAllocationInfo.

* There are two "resource heap" tiers that determine what kinds of resources can be mixed within a single heap. See the documentation for D3D12_RESOURCE_HEAP_TIER for more info.

* When two placed resources share the same memory within a heap, you need to issue aliasing barriers before the memory can be re-used by the other resource (see the sketch after this list). See the documentation on memory aliasing and CreatePlacedResource for more info.

* Residency can only be controlled on a per-heap basis. This means that you can call Evict/MakeResident on a committed resource, but not on a placed resource: if you want to evict a placed resource, you have to evict the entire heap. See the docs regarding residency for more info.
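
Here's a rough sketch of the SSAO/post-processing aliasing scenario from above (assuming "device" and "cmdList" already exist; error handling and the optimized clear values are omitted, so the debug layer will warn about the latter):

D3D12_RESOURCE_DESC texDesc = {};
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Width = 1920;
texDesc.Height = 1080;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
texDesc.SampleDesc.Count = 1;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

// Ask the device for the required size and alignment.
D3D12_RESOURCE_ALLOCATION_INFO allocInfo = device->GetResourceAllocationInfo(0, 1, &texDesc);

D3D12_HEAP_DESC heapDesc = {};
heapDesc.SizeInBytes = allocInfo.SizeInBytes;
heapDesc.Alignment = allocInfo.Alignment;
heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;  // required for RT textures on Tier 1
ID3D12Heap* heap = nullptr;
device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

// Place both render targets at offset 0, so they alias the same memory.
ID3D12Resource* ssaoTarget = nullptr;
ID3D12Resource* postFxTarget = nullptr;
device->CreatePlacedResource(heap, 0, &texDesc, D3D12_RESOURCE_STATE_RENDER_TARGET,
                             nullptr, IID_PPV_ARGS(&ssaoTarget));
device->CreatePlacedResource(heap, 0, &texDesc, D3D12_RESOURCE_STATE_RENDER_TARGET,
                             nullptr, IID_PPV_ARGS(&postFxTarget));

// When you're done with the SSAO target and want to start using the
// post-processing target, issue an aliasing barrier:
D3D12_RESOURCE_BARRIER aliasBarrier = {};
aliasBarrier.Type = D3D12_RESOURCE_BARRIER_TYPE_ALIASING;
aliasBarrier.Aliasing.pResourceBefore = ssaoTarget;
aliasBarrier.Aliasing.pResourceAfter = postFxTarget;
cmdList->ResourceBarrier(1, &aliasBarrier);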


#5259355 Antialiasing will stay in the future ?

Posted by MJP on 27 October 2015 - 08:03 PM

Quote:
I disagree with MarkS. Even if you have pixels that are not individually discernible, aliasing can introduce visible artifacts, e.g. moiré patterns. A pixel should ideally be of the color that is the average color of the area it covers, and a single sample is a poor estimate of the average.


Indeed. In particular, the eye is very good at picking up on rapidly flickering patterns, even in cases where the display pixels are too small for us to discern their individual colors. Increasing the display and shading resolution is the brute-force way of reducing aliasing; there are plenty of ways to improve the appearance at lower cost.


#5258756 When using C++ and including d3d11.lib, do you not use the windows registry a...

Posted by MJP on 23 October 2015 - 06:25 PM

Quote:
Thanks Hodgman, I tried stepping in to D3D11CreateDevice in VS13 but to no avail, I'll have a go trying maybe with VS2015 and try some other things, otherwise your explanation above is all that really I am after. Thanks for the help.


If you want the symbols for D3D11 (and other Windows DLLs), you need to tell Visual Studio to use the Microsoft Symbol Servers. In VS, go to Tools->Options->Debugging->Symbols, and check the box next to "Microsoft Symbol Servers". Be aware that if you do this, debugging will typically be slower, since symbols have to be pulled down from the internet and then loaded into the debugger. By default the symbols are cached locally on your hard drive (in the location under "Cache symbols in this directory"), but even when cached they still take some time to load into the debugger.


#5258755 When using C++ and including d3d11.lib, do you not use the windows registry a...

Posted by MJP on 23 October 2015 - 06:14 PM

The registry has nothing to do with it. When you link to d3d11.lib, you're actually linking to an import lib for d3d11.dll. Unlike a normal static library, an import lib doesn't contain the compiled code itself. Instead it has a bunch of stub functions (one for each function exported by the DLL), where the stubs use OS system calls to load the DLL into memory, get a pointer to the corresponding DLL function, and actually call it. By default, using an import lib causes the DLL to be loaded automatically when your executable starts, at which point the OS uses a standard set of rules to find that DLL. These rules are spelled out in the documentation for LoadLibrary and SetDllDirectory. In almost all cases the OS will find d3d11.dll inside your System32 directory and load it from there. If for some reason you had a copy of d3d11.dll in the same directory as your executable, then the OS would load that file instead, because the local directory trumps the System32 directory.
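
If you're curious, you can do what the import-lib stubs do by hand, which looks something like this (a sketch with error checks omitted; the PFN_D3D11_CREATE_DEVICE typedef comes from d3d11.h):

#include <windows.h>
#include <d3d11.h>

// The OS applies the standard DLL search rules here, just like it does for
// the implicit load triggered by the import lib.
HMODULE d3d11Module = LoadLibraryW(L"d3d11.dll");
PFN_D3D11_CREATE_DEVICE createDevice =
    (PFN_D3D11_CREATE_DEVICE)GetProcAddress(d3d11Module, "D3D11CreateDevice");

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
createDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0, nullptr, 0,
             D3D11_SDK_VERSION, &device, nullptr, &context);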

The reason you're probably thinking that the registry is involved is that that's how COM server registration works. Basically, you put an entry in the registry that says "hey, I have a DLL or executable in this directory that supports creating a COM object with a CLSID of XYZ". Then when some client code calls CoCreateInstance with the appropriate CLSID, the OS knows where to find the DLL that can host the object. D3D11 doesn't actually use these COM mechanisms, since it uses what's often called "COM-lite": a COM-style ABI (interfaces with virtual functions), but without all of the heavyweight COM framework machinery for creating objects, marshalling, and so on. That's why there are factory functions for creating D3D objects (like CreateTexture2D), as opposed to using CoCreateInstance to make them.


#5258196 Limits on where texture resources can be accessed?

Posted by MJP on 20 October 2015 - 04:13 PM

Quote:
Related question, are there any known compiling performance gains (or disadvantages) when using the newer version of the compiler? Or in other words, in case that several of those versions do work as expected, which would be the fastest (in terms of compilation times)?


I do know that the _46 compiler and onward was many times faster for certain compute shaders that accessed shared memory inside of loops; however, I couldn't say for sure which version is the fastest in all cases.


#5258040 Limits on where texture resources can be accessed?

Posted by MJP on 19 October 2015 - 11:15 PM

If you're using the D3DX compilation functions, then the version of the shader compiler that you're using is a few versions out-of-date (d3dcompiler_43.dll). The Windows 8.x SDKs introduced d3dcompiler_46 and d3dcompiler_47, while the Windows 10 SDK includes the latest version (which is confusingly also named d3dcompiler_47.dll). You might want to try those versions and see if they still have the same behavior. The easiest way to do that is to install the Windows 10 SDK (which is included with Visual Studio 2015), and then use the version of fxc.exe that comes with the SDK.

If the new version works, then you might want to switch to the newer compiler. Ideally you would do this by completely switching over to the D3D headers and libraries from the Windows SDK instead of the old DirectX SDK, but if that's not an easy option, then you can also load d3dcompiler_47.dll at runtime: use LoadLibrary, and then GetProcAddress to get a pointer to the D3DCompile function.
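
The runtime-loading path looks something like this (a sketch; "source"/"sourceSize" are placeholders for your HLSL text and its length, and the pD3DCompile typedef comes from d3dcompiler.h):

#include <windows.h>
#include <d3dcompiler.h>

HMODULE compilerModule = LoadLibraryW(L"d3dcompiler_47.dll");
pD3DCompile compileFunc = (pD3DCompile)GetProcAddress(compilerModule, "D3DCompile");

ID3DBlob* bytecode = nullptr;
ID3DBlob* errors = nullptr;
compileFunc(source, sourceSize, "shader.hlsl", nullptr, nullptr,
            "PSMain", "ps_5_0", 0, 0, &bytecode, &errors);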


#5257400 writing to UAV buffer

Posted by MJP on 15 October 2015 - 04:02 PM

Quote:
the code doesn't compile.


What's the error? I'm guessing that it's telling you that your UAV is bound to a register that overlaps with your render target, but I'd rather not guess. :)

For pixel shaders, UAVs share the same slots as render target views. As explained in the docs for OMSetRenderTargetsAndUnorderedAccessViews, you'll need to bind your render target to slot 0 and your UAV to slot 1.
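
Something like this (a sketch, assuming "rtv" and "uav" are views you've already created):

ID3D11RenderTargetView* rtvs[] = { rtv };
ID3D11UnorderedAccessView* uavs[] = { uav };
UINT initialCounts[] = { (UINT)-1 };  // -1 preserves any existing hidden counter
d3dContext->OMSetRenderTargetsAndUnorderedAccessViews(
    1, rtvs, nullptr,  // render target in slot 0, no depth-stencil
    1, 1, uavs,        // UAVStartSlot = 1, one UAV
    initialCounts);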


#5256799 Directional Light Calculating by using EV100

Posted by MJP on 11 October 2015 - 11:13 PM

Quote:
Are you suggesting me when I used real-world luminous intensity values for my lighting I MUST use some appropriate exposure for it? But what's the appropriate exposing method for this situation (directional lighting with EV100)? and more exactly , What's the appropriate real-world measuring unit that I SHOULD use for my directional lights to match the natural scene more easily? Can you explain it with a little more details? I would be much appreciated for that.


The short answer is "yes". If each pixel represents the amount of light reflected towards the eye/camera, then you need to apply conversion steps in order to produce an image that looks good. This generally consists of exposure, tone mapping, and gamma correction (with the last two possibly combined into one step). Exposure is really important, because it basically determines the "window" of visible intensities at any given time. To understand what I mean by that, here's a slide from Josh Pines's presentation at SIGGRAPH 2010:

[Slide: "DynamicRange" - the scene's full dynamic range, with exposure selecting the window that gets mapped into the displayable range]

Basically, your exposure is doing that middle step: it determines what portion of your dynamic range gets mapped into usable values that you can see on-screen. In your case your scene is going to sit toward the right of that scale, so your exposure will need to be set such that the higher values are mapped down into the visible range. If you don't do this, you end up with an over-exposed image like the one that you posted. Another way to think of this is that exposure is roughly reciprocal to your scene lighting, and always needs to balance it out: high lighting values require low exposures.

The simplest exposure system is to just pick a scalar that you multiply every pixel by. In your case you know that your light intensity is 2048, so if you pick an exposure of 1/2048, a purely white diffuse surface will end up at 1.0 after exposure. Another approach is to emulate how exposure in a camera works. This makes sense in a lot of ways: since you're already using physical units, standard camera formulas can be applied. I believe that the Frostbite paper discusses this a bit, but I would also recommend reading through Padraic Hennessy's excellent blog post on implementing physically based exposure. He even touches on how you can use the "Sunny 16" rule to pick an appropriate exposure for a sunny day.
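
In code the scalar version is about as simple as it gets (a sketch; apply it per channel before tone mapping):

// Simplest possible exposure: one scalar applied to every pixel channel.
float ApplyExposure(float channel, float lightIntensity)
{
    float exposure = 1.0f / lightIntensity;  // e.g. 1/2048 for your light
    return channel * exposure;               // a white diffuse surface lands at ~1.0
}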

Also, keep in mind that the "E" in "EV" stands for "exposure", because EV is itself a system for determining which exposure to use! So if you're using camera-style exposure controls and your whole scene is lit by a single light specified in EV, then you already know how to set an exposure that's appropriate for that lighting!
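
For reference, one common formulation of camera-style exposure from EV100 looks like this (a sketch of the saturation-based form used in the Frostbite material mentioned above; treat the 1.2 factor as an assumption from that derivation):

#include <cmath>

// Convert EV100 into a linear exposure multiplier.
float ExposureFromEV100(float ev100)
{
    float maxLuminance = 1.2f * std::pow(2.0f, ev100);  // luminance that maps to white
    return 1.0f / maxLuminance;
}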


#5256752 Directional Light Calculating by using EV100

Posted by MJP on 11 October 2015 - 04:00 PM

How are you exposing your scene? If you're going to use real-world luminous intensity values for your directional light, then you will need to match that with an appropriate exposure for it to look like a properly-exposed image.


#5256466 Blinn-Phong artifact in shader

Posted by MJP on 09 October 2015 - 09:37 PM

Quote:
Thanks!
I kind of liked the scattering of specular around the rim before multiplying with NdotL, but I guess that should not be part of the Blinn calculations but rather some subsurface scattering effect.


You can get a more accurate version of that effect by using a microfacet specular BRDF. Such a BRDF has geometry and fresnel terms that give you interesting off-specular-peak behavior, but do so in a way that's more consistent with physics compared to what you're using. However, this does require more calculations, which can make the shader more expensive.
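
As a rough illustration of the pieces involved, here's a sketch of a simple microfacet-style specular term using a normalized Blinn-Phong distribution, a Schlick fresnel approximation, and the "implicit" geometry/visibility term (which folds to a constant 1/4). This is just one possible combination of terms, not a recommendation:

#include <cmath>

// Returns the specular intensity; multiply the result by NdotL and the light
// color, just like the diffuse term.
float MicrofacetSpecular(float nDotH, float lDotH, float specPower, float f0)
{
    const float pi = 3.14159265f;
    float d = (specPower + 2.0f) / (2.0f * pi) * std::pow(nDotH, specPower);  // normalized Blinn-Phong NDF
    float f = f0 + (1.0f - f0) * std::pow(1.0f - lDotH, 5.0f);                // Schlick fresnel
    return d * f * 0.25f;  // implicit geometry/visibility term == 1/4
}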



