  • Similar Content

    • By Jason Smith
      While working on a project using D3D12, I was getting an exception while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is written in plain C, so it uses the COBJMACROS. The following application replicates the problem from the project.
      #define COBJMACROS

      #pragma warning(push, 3)
      #include <Windows.h>
      #include <d3d12.h>
      #include <dxgi1_4.h>
      #pragma warning(pop)

      IDXGIFactory4 *factory;
      ID3D12Device *device;
      ID3D12DescriptorHeap *rtv_heap;

      int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
      {
          (hinst), (pinst), (cline), (cshow);

          HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
          hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, (void **)&device);

          D3D12_DESCRIPTOR_HEAP_DESC desc;
          desc.NumDescriptors = 1;
          desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
          desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
          desc.NodeMask = 0;
          hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

          D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
          (rtv);
      }
      The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
      mov  qword ptr [rdx],rax
      which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.
       
    • By lubbe75
      As far as I understand, there is no real random or noise function in HLSL.
      I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
      Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious? 
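      (Not an answer from the original thread, just a sketch of one common approach: sum a few sine waves whose directions are picked with the well-known sin-based hash, and build the normal from the analytic slope. Every function name, constant and scale below is made up for illustration.)
      // Cheap pseudo-random scalar (the widely used sin-based hash); only used to
      // pick per-wave directions so the pattern does not repeat too obviously.
      float hash(float n)
      {
          return frac(sin(n) * 43758.5453);
      }

      // Fake a water normal as a sum of a few sine waves with hashed directions.
      // 'pos' is e.g. the world-space XZ position, 'time' is in seconds.
      float3 fakeWaterNormal(float2 pos, float time)
      {
          float2 slope = float2(0.0, 0.0);
          [unroll]
          for (int i = 0; i < 4; ++i)
          {
              float  angle = hash(i * 7.13) * 6.2831853;   // pseudo-random direction
              float2 dir   = float2(cos(angle), sin(angle));
              float  freq  = 1.0 + i * 1.7;                // increasing frequency
              float  amp   = 0.25 / (1.0 + i);             // decreasing amplitude
              float  phase = dot(dir, pos) * freq + time * (1.0 + 0.3 * i);
              slope += dir * (cos(phase) * amp * freq);    // derivative of amp*sin(phase)
          }
          return normalize(float3(-slope.x, 1.0, -slope.y));
      }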
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing error messages that this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I'd need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
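      (Again not from the original post: a minimal sketch of the "flip Y in the vertex shader" option mentioned above. The cbuffer layout, matrix name and the TARGET_VULKAN define are illustrative assumptions.)
      cbuffer Camera : register(b0)
      {
          float4x4 g_worldViewProj;   // illustrative constant buffer
      };

      struct VSOut
      {
          float4 pos : SV_Position;
      };

      VSOut vs_main(float3 positionOS : POSITION)
      {
          VSOut o;
          o.pos = mul(float4(positionOS, 1.0), g_worldViewProj);
      #ifdef TARGET_VULKAN
          // Vulkan clip space has Y pointing down compared to D3D, so negate it here.
          o.pos.y = -o.pos.y;
      #endif
          return o;
      }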
    • By NikiTo
      Some people say "discard" has not a positive effect on optimization. Other people say it will at least spare the fetches of textures.
       
      if (color.A < 0.1f) { //discard; clip(-1); } // tons of reads of textures following here // and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluates all the statements after the "discard" instruction.

      MSDN:
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

      I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after the discard, I have an expensive branch decision that makes the neighboring pixels that took the approved cheap branch stall for nothing? This is crazy.)
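      (Not from the original post, just a sketch of how this is usually structured: issue the discard and then leave the shader explicitly, so the expensive fetches and loops below are, at least at the language level, never reached by rejected pixels. Whether the hardware actually saves the work, and whether surviving neighbours in the same wave still pay for it, is implementation-dependent. The resources and loop below are made up for illustration.)
      Texture2D<float4> g_albedo  : register(t0);   // illustrative resources
      Texture2D<float4> g_detail  : register(t1);
      SamplerState      g_sampler : register(s0);

      float4 ps_main(float4 svpos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
      {
          float4 color = g_albedo.Sample(g_sampler, uv);

          if (color.a < 0.1f)
          {
              discard;                               // reject the pixel...
              return float4(0.0, 0.0, 0.0, 0.0);     // ...and leave, so nothing below
          }                                          //    runs for this invocation

          // Expensive work is only reached by surviving pixels. Pixels in the same
          // wave that did NOT discard still execute it, which is the "neighbours
          // stall" concern from the post.
          float4 detail = float4(0.0, 0.0, 0.0, 0.0);
          [loop]
          for (int i = 0; i < 8; ++i)
              detail += g_detail.SampleLevel(g_sampler, uv * (i + 1), 0.0);

          return color * (detail / 8.0);
      }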
    • By NikiTo
      I have a problem. My shaders are huge, in the sense that they contain a lot of code. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save any workload at all, as the pixel has to stall until the long, huge neighboring shaders complete.
      Initially I wanted to use the stencil to discard pixels before the execution flow even enters the shader, and before the GPU distributes/allocates resources for it, avoiding stalls in the pixel shader execution flow. I assumed that the depth/stencil test discards pixels before the pixel shader, but I see now that it happens in the very last output-merger stage. It seems extremely inefficient to render a little mirror in a scene with a big viewport that way. Why did they put the stencil test in the output merger anyway? Handling of the stencil is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will the GPU immediately start using the freed-up resources to render another pixel?!



       

[DX12] Swapped component


Recommended Posts

Hmm.

 

So I have this shader that reduces a depth buffer into near/far depth tiles. It also finds the average (mid) depth between near and far, plus the "farthest before mid" and "nearest past mid" depths... It then stores float4(near, far, farthest before mid, nearest past mid) into a texture.

The shader works in DirectX 11 but does something very strange in DirectX 12: the last two components of the float4 that I store in the texture are swapped. It's as if I wrote output.xywz instead of output.xyzw. The shader is not modified in any way for DX12.

When reading the output texture I do not use any custom component mapping; I use D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING.

Here's the shader (I'm sorry for the macros; I use them to have the option of using reversed depth buffers):

#define OX_NEAR_DEPTH 0.0f
#define OX_FAR_DEPTH 1.0f
#define OX_NEAREST_DEPTH(A,B) min((A),(B))
#define OX_FARTHEST_DEPTH(A,B) max((A),(B))
#define OX_IS_DEPTH_CLOSER(A,B) ((A)<(B))
#define OX_IS_DEPTH_FURTHER(A,B) ((A)>(B))

float fixDepth(float depth)
{
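	// Remap a depth exactly at the far plane (presumably a cleared / empty pixel)
	// to the near value, so it is ignored by the farthest-depth max() reduction.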
	return depth != OX_FAR_DEPTH ? depth : OX_NEAR_DEPTH;
}

groupshared float gs_groupDepths[64];
groupshared float gs_groupNearestDepths[32];
groupshared float gs_groupFarthestDepths[32];

Texture2D< float > SrcDepth : register(t0);
RWTexture2D< float4 > DepthTiles : register(u0);

[numthreads(8, 8, 1)]
void cs_computeDepthTiles(
	uint3 DTid : SV_DispatchThreadID,
	uint3 Gid : SV_GroupID,
	uint3 GTid : SV_GroupThreadID,
	uint Gidx : SV_GroupIndex)
{
	const uint threadIndex = GTid.y * 8 + GTid.x;

	// Initialize depths, keep in shared memory
	const float depth = SrcDepth[DTid.xy];
	gs_groupDepths[threadIndex] = depth;
	GroupMemoryBarrierWithGroupSync();

	// 2x downsample with far depth flagging
	if(threadIndex < 32)
	{
		const float d0 = gs_groupDepths[threadIndex];
		const float d1 = gs_groupDepths[threadIndex + 32];
		gs_groupNearestDepths[threadIndex] = OX_NEAREST_DEPTH(d0, d1);
		gs_groupFarthestDepths[threadIndex] = OX_FARTHEST_DEPTH(fixDepth(d0), fixDepth(d1));
	}
	GroupMemoryBarrierWithGroupSync();

	// Parallel reduction
	uint s;
	[unroll]
	for(s = 16; s > 0; s >>= 1)
	{
		if(threadIndex < s)
		{
			// Nearest
			{
				const float d0 = gs_groupNearestDepths[threadIndex];
				const float d1 = gs_groupNearestDepths[threadIndex + s];
				gs_groupNearestDepths[threadIndex] = OX_NEAREST_DEPTH(d0, d1);
			}
			// Farthest
			{
				const float d0 = gs_groupFarthestDepths[threadIndex];
				const float d1 = gs_groupFarthestDepths[threadIndex + s];
				gs_groupFarthestDepths[threadIndex] = OX_FARTHEST_DEPTH(d0, d1);
			}
		}
		GroupMemoryBarrierWithGroupSync();
	}

	// Tile nearest & farthest depth
	const float tileNearest = gs_groupNearestDepths[0];
	const float tileFarthest = gs_groupFarthestDepths[0];

	// Tile mid depth
	const float tileMid = (tileFarthest + tileNearest) * 0.5f;

	// Initialize mid depths
	if(threadIndex < 32)
	{
		const float d0 = gs_groupDepths[threadIndex];
		const float d1 = gs_groupDepths[threadIndex + 32];
		const bool c0 = OX_IS_DEPTH_CLOSER(d0, tileMid);
		const bool c1 = OX_IS_DEPTH_CLOSER(d1, tileMid);
		// Farthest before average depth
		{
			const float f0 = c0 ? d0 : tileNearest;
			const float f1 = c1 ? d1 : tileNearest;
			gs_groupFarthestDepths[threadIndex] = OX_FARTHEST_DEPTH(f0, f1);
		}
		// Nearest past average depth
		{
			const float n0 = c0 ? tileFarthest : d0;
			const float n1 = c1 ? tileFarthest : d1;
			gs_groupNearestDepths[threadIndex] = OX_NEAREST_DEPTH(n0, n1);
		}
	}
	GroupMemoryBarrierWithGroupSync();

	// Parallel reduction
	[unroll]
	for(s = 16; s > 0; s >>= 1)
	{
		if(threadIndex < s)
		{
			// Nearest
			{
				const float d0 = gs_groupNearestDepths[threadIndex];
				const float d1 = gs_groupNearestDepths[threadIndex + s];
				gs_groupNearestDepths[threadIndex] = OX_NEAREST_DEPTH(d0, d1);
			}
			// Farthest
			{
				const float d0 = gs_groupFarthestDepths[threadIndex];
				const float d1 = gs_groupFarthestDepths[threadIndex + s];
				gs_groupFarthestDepths[threadIndex] = OX_FARTHEST_DEPTH(d0, d1);
			}
		}
		GroupMemoryBarrierWithGroupSync();
	}

	// Tile mid depths
	const float tileFarthestBeforeMid = gs_groupFarthestDepths[0];
	const float tileNearestPastMid = gs_groupNearestDepths[0];

	// Output
	if(threadIndex == 0)
		DepthTiles[Gid.xy] = float4(tileNearest, tileFarthest, tileFarthestBeforeMid, tileNearestPastMid);
}

I have the feeling the shader itself is irrelevant to the problem, so here's a bit more information.

The DepthTiles texture is a 16-bit floating-point RGBA texture.

The problem is DX12-specific... DX11 is fine with SM5.0.

In DX12, the problem occurs on SM5.0 and SM5.1, with both WARP and hardware adapters.

If requested, I can disclose the whole program and/or VS Graphics Debugger captures.


Sounds a bit weird to me, especially if it still occurs on SM5.0 and WARP.

 

Probably easiest if you can share the whole program and I'll take a look.


Thanks to Adam and the Warp team at Microsoft, this issue is resolved.

 

The problem is (obviously) in my code. I was missing a GroupMemoryBarrierWithGroupSync() here:
 

// Tile nearest & farthest depth
const float tileNearest = gs_groupNearestDepths[0];
const float tileFarthest = gs_groupFarthestDepths[0];

GroupMemoryBarrierWithGroupSync(); // THAT WAS MISSING

// Tile mid depth
const float tileMid = (tileFarthest + tileNearest) * 0.5f;

The reason given by the WARP team is:

Without this barrier, earlier dispatch groups can overwrite the GS memory before later groups read the values. The reason it would work on hardware is that most hardware has 32 or more parallel compute units, whereas Warp has only 4.

 

Today I learned that reads and writes to shared memory must be protected by barriers, even in less-than-obvious cases.
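
To generalise the lesson with a minimal sketch (this is not the original shader; the resources, group size and maths are made up): the hazard appears whenever a groupshared array is read and then reused or overwritten in the same dispatch, because without a barrier nothing orders the fast threads' writes after the slow threads' reads.

Texture2D<float>   SrcData : register(t0);   // illustrative resources
RWTexture2D<float> Dst     : register(u0);

groupshared float gs_values[64];

[numthreads(64, 1, 1)]
void cs_example(uint3 Gid : SV_GroupID, uint idx : SV_GroupIndex)
{
	gs_values[idx] = SrcData[uint2(Gid.x * 64 + idx, Gid.y)];
	GroupMemoryBarrierWithGroupSync();

	// Every thread reads a slot written by another thread...
	const float neighbour = gs_values[(idx + 1) & 63];

	// ...so a barrier is REQUIRED before the array is reused below. Without it,
	// a fast thread can overwrite the slot a slower thread is about to read,
	// which is exactly the bug in the depth-tile shader above.
	GroupMemoryBarrierWithGroupSync();

	gs_values[idx] = neighbour * 0.5f;   // reuse of the shared array

	GroupMemoryBarrierWithGroupSync();
	if (idx == 0)
		Dst[Gid.xy] = gs_values[0];
}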

 

Cheers!
