
About MJP

  • Rank
    XNA/DirectX Moderator & MVP

  1. At first we let the compiler auto-assign the slots to resources, which matched how we did it in D3D11. We would then reflect the shader offline to get the set of textures/buffers that needed to be bound, and at runtime we would generate a matching descriptor table right before drawing. Now we use bindless via global unbounded descriptor tables, and those tables are all manually assigned a register space that matches the root signature.
  2. You can't insert inline assembly into HLSL. The only way to avoid compiling at runtime is to pre-compile offline (or cache compiler outputs at runtime), and create your shaders from the pre-compiled binary data. If you can't modify the code that loads the shaders, then your only options are to somehow patch the executable or DLL, or create a shim DLL that intercepts the call to compile the shader and does something else.
  3. I'm no MonoGame expert, but it looks like you mixed up the parameters to Matrix.CreateOrthographicOffCenter() in the stabilized version.
  4. http://www.psa.es/sdg/sunpos.htm The function provided there takes your time of day as well as your location on earth (specified as a latitude/longitude coordinate pair), and gives you the direction of the sun in spherical coordinates.
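For example, a small helper along these lines (a hypothetical sketch, not part of the PSA code) converts the resulting spherical coordinates into a direction vector. It assumes the zenith angle and azimuth have already been converted to radians, and a Y-up world with +Z pointing north; match these conventions to your own engine.

```cpp
#include <cmath>

struct Float3 { float x, y, z; };

// Converts the sun's spherical coordinates (zenith angle measured from
// straight up, azimuth measured from north, both in radians) into a
// unit-length world-space direction. Axis conventions are an assumption:
// Y is up, +Z is north, +X is east.
Float3 SunDirectionFromSpherical(float zenithAngle, float azimuth)
{
    const float sinZenith = std::sin(zenithAngle);
    Float3 dir;
    dir.x = sinZenith * std::sin(azimuth);  // east component
    dir.y = std::cos(zenithAngle);          // up component
    dir.z = sinZenith * std::cos(azimuth);  // north component
    return dir;
}
```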
  5. In D3D11, Min/Max filtering modes were optional, and had a dedicated cap bit in D3D11_FEATURE_DATA_D3D11_OPTIONS1 that you could check for support. However, the docs also stated that Min/Max filtering modes were tied to Tier 2 tiled resource functionality. D3D12 doesn't seem to have a dedicated caps bit, and the docs for D3D12_TILED_RESOURCES_TIER don't mention Min/Max filtering at all. A cap bit is mentioned in the docs for D3D12_FILTER, but unfortunately it seems to be partially copy/pasted from the D3D11 docs since it links to the docs for the older D3D11_FEATURE_DATA_D3D11_OPTIONS1 structure. So unless someone from Microsoft can clarify (or the validation layer complains at you), I would probably assume that in D3D12 Min/Max filtering is still tied to D3D12_TILED_RESOURCES_TIER_2. FYI D3D_FEATURE_LEVEL_12_0 implies support for D3D12_TILED_RESOURCES_TIER_2, so you should be okay using Min/Max filtering on your hardware.
  6. DX11 Enumoutputs do not work in Win10

    Which adapter are you enumerating outputs for? You should make sure that it's the adapter for a video card with a display attached (if you have multiple video cards and one of them does not have any displays attached, then it will have 0 DXGI outputs). If you call IDXGIAdapter::GetDesc and check the "Description" member of the returned struct, you can get the name of the adapter. You should also read this bit of documentation about the WARP device on Windows 8+. I would make sure that you're not trying to enumerate displays for the WARP device.
  7. What you described sounds like it should work, so I'm not sure what the issue is. Is there any particular reason why you're mapping and unmapping for each sub-allocation instead of just mapping the entire resource once and offsetting as necessary to write to the sub-buffers?
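The map-once pattern can be sketched like this. The `LinearSubAllocator` type and the 256-byte alignment constant are illustrative, not part of any API; in real D3D code `basePtr` would come from a single Map() call on the whole resource, and 256 bytes is the constant buffer offset alignment requirement.

```cpp
#include <cstddef>
#include <cstdint>

// Rounds 'value' up to the next multiple of 'alignment' (power of two).
constexpr size_t AlignUp(size_t value, size_t alignment)
{
    return (value + alignment - 1) & ~(alignment - 1);
}

// Sketch of the "map once, offset for each sub-allocation" pattern.
// basePtr would normally be the pointer returned by mapping the whole
// resource once; each Allocate() just hands back an offset into it.
struct LinearSubAllocator
{
    uint8_t* basePtr = nullptr;
    size_t capacity = 0;
    size_t offset = 0;

    static constexpr size_t Alignment = 256; // D3D constant buffer offset alignment

    // Returns a pointer into the mapped range, or nullptr when full.
    void* Allocate(size_t size)
    {
        const size_t alignedOffset = AlignUp(offset, Alignment);
        if (alignedOffset + size > capacity)
            return nullptr;
        offset = alignedOffset + size;
        return basePtr + alignedOffset;
    }
};
```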
  8. If it's been auto-promoted, you have to transition from an SRV state to a UAV state.
  9. Calculating Irradiance Map

    So the full Monte Carlo formula is essentially (1 / NumSamples) * Sum(f(x) / p(x), NumSamples), where f(x) is the function you're integrating, and p(x) is the probability density function (PDF) evaluated at x. The PDF is basically the likelihood of a particular sample being chosen. For "uniform" sampling schemes where samples are distributed evenly across the whole domain (for instance, the surface of a sphere), the PDF is the same for all samples, and so you can pull it out of the sum.

    Now, for integrating irradiance for a point with normal N, you'll need to integrate over the surface of the hemisphere that surrounds N. Uniformly sampling a hemisphere oriented around Z = 1 (the canonical "upper" hemisphere) is pretty simple:

```cpp
// Returns a direction on the hemisphere around z = 1
Float3 SampleDirectionHemisphere(float u1, float u2)
{
    float z = u1;
    float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
    float phi = 2 * Pi * u2;
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    return Float3(x, y, z);
}
```

    As input you need u1 and u2, which are two random variables in the range [0, 1]. These can be generated from a random number generator like rand(), or from a sequence that produces a more optimal sample distribution that avoids clustering (this is known as Quasi-Monte Carlo sampling).

    To make this work for our case, we need to sample the hemisphere surrounding the surface normal. This means we need a transformation that can go from tangent space -> world space. If you're doing normal mapping then you probably already have such a matrix, and if not you can build one by starting with the normal as your Z basis and then generating a perpendicular vector to use as the X basis (after that you can use a cross product to generate the Y basis). Then you just transform the hemisphere sample direction by this matrix, and you have a world space direction.

    Now if you recall from earlier, we need to divide samples by the PDF of each sample. For uniform hemisphere sampling the PDF is constant for all samples, and it ends up being one over the surface area of a unit hemisphere (1 / (2 * Pi)). So our algorithm ends up looking like this:

```cpp
float3 irradiance = 0.0f;
for(uint i = 0; i < NumSamples; ++i)
{
    float u1 = RandomFloat();
    float u2 = RandomFloat();
    float3 sampleDirInTangentSpace = SampleDirectionHemisphere(u1, u2);
    float3 sampleDirInWorldSpace = mul(sampleDirInTangentSpace, tangentToWorld);
    float3 radiance = SampleEnvironment(sampleDirInWorldSpace);
    irradiance += radiance * saturate(dot(sampleDirInWorldSpace, normalInWorldSpace));
}
float hemispherePDF = 1.0f / (2 * Pi);
irradiance /= (NumSamples * hemispherePDF);
```

    Just be aware that if you use this result to light a surface with a Lambertian BRDF, you need to include the 1 / Pi term that's part of that BRDF. A lot of people like to bake the 1 / Pi into their light sources, which is fine; you just have to be careful to make sure you're doing it somewhere, otherwise your diffuse lighting can be too bright.

    Once you have this working, you can improve the quality by being smarter about choosing your sample directions. The first way to do that is to use QMC techniques like I mentioned earlier, for instance a Halton sequence or stratified sampling. You can also just pick values that are evenly spaced out over the [0, 1] domain, which is basically stratified sampling without any jitter (this is essentially what you're doing in the code you posted above). The other way is to importance sample your function by choosing samples that match the shape of the function being sampled. For irradiance, the common way to do this is to choose sample directions with probability proportional to the cosine of the angle between that direction and the surface normal.
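That cosine-proportional sampling can be sketched as follows (a standalone C++ sketch, not from the thread). It uses Malley's method of uniformly sampling a unit disk and projecting up onto the hemisphere, and assumes the canonical Z-up hemisphere; the resulting PDF is cos(theta) / Pi, which conveniently cancels the cosine term in the irradiance estimator.

```cpp
#include <algorithm>
#include <cmath>

const float Pi = 3.14159265f;

struct Float3 { float x, y, z; };

// Cosine-weighted hemisphere sample around z = 1 (Malley's method:
// sample a unit disk uniformly, then project up onto the hemisphere).
// u1 and u2 are uniform random variables in [0, 1]. The PDF of a sample
// is cos(theta) / Pi, where cos(theta) is the returned z component.
Float3 SampleCosineHemisphere(float u1, float u2)
{
    float r = std::sqrt(u1);
    float phi = 2.0f * Pi * u2;
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - u1)); // cos(theta)
    return Float3{ x, y, z };
}
```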
  10. D3D12 supports timestamp queries, which let you track when things execute on the GPU. By issuing pairs of timestamp queries, you can then determine how long it took the GPU to execute the commands in between the queries. The queries give results in terms of ticks, and GetTimestampFrequency will let you convert from ticks to seconds. You can look at my Profiler class if you want to see an example of how to do this.
  11. Yes, you should create the resource in the COMMON state if you intend to initialize its data using the Copy queue. The docs confirm this. You actually don't need any transition barriers for your scenario, thanks to state decay and promotion. When you access the resource on the copy queue, it will automatically get promoted from COMMON to COPY_DEST. The resource will then decay back to COMMON after the copy command list finishes executing, which means that it will auto-promote to a read state when accessed through an SRV on a graphics queue.
  12. DX11 Member function as callback

    The classic trick for doing this (it dates back to the old MFC days in the 90's) is to associate a pointer with your window using SetWindowLongPtr with GWLP_USERDATA. Then you set up a free function that serves as a proxy message handler, and that proxy grabs the pointer using GetWindowLongPtr. Here's an example from my very simple window wrapper class:

```cpp
LRESULT WINAPI Window::WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch(uMsg)
    {
        case WM_NCCREATE:
        {
            LPCREATESTRUCT pCreateStruct = reinterpret_cast<LPCREATESTRUCT>(lParam);
            ::SetWindowLongPtr(hWnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(pCreateStruct->lpCreateParams));
            return ::DefWindowProc(hWnd, uMsg, wParam, lParam);
        }
    }

    Window* pObj = reinterpret_cast<Window*>(GetWindowLongPtr(hWnd, GWLP_USERDATA));
    if(pObj)
        return pObj->MessageHandler(hWnd, uMsg, wParam, lParam);
    else
        return ::DefWindowProc(hWnd, uMsg, wParam, lParam);
}
```
  13. No, there's no sizeof() in HLSL unfortunately.
  14. DX11 XMVECTOR to float

    XMVectorGetX() is fine, you don't need to store the whole vector if you don't want all of the components.
  15. DX11 Temporal Antialiasing

    You may want to have a look at my MSAA + TAA demo: https://github.com/TheRealMJP/MSAAFilter It implements sub-pixel jittering via translation in the projection matrix, and also uses MSAA sub-sample data (if available) to improve TAA quality. It also implements a higher-quality MSAA resolve than what you get from doing a "normal" hardware resolve on HDR render targets.