DX11 [SOLVED] Post Processing BackBuffer using Effects 11


I'm currently trying to add post processing to an existing C++ DirectX 11 application. The task is to apply a dynamic FX shader, loaded from a file, directly to the backbuffer after all other rendering is done. In short: I have to get a copy of the backbuffer, apply some post processing effects to it and then overwrite the buffer with the result; "IDXGISwapChain->Present" is called last.

 

This was pretty easy to achieve in DirectX 9, but I'm running into multiple issues with DirectX 11 now. I only found one example of post processing in D3D11 on MSDN, and it was written as a Windows 8 app, which I can neither build nor run on my Windows 7 desktop; its code did not make a lot of sense to me either.

The first issue, the missing effects framework, could already be solved by using "Effects 11", which you can download from MSDN.

 

Now I'm a bit confused about how to continue. The basic idea of how it could work is as follows (a rough sketch of steps 4-6 appears after the list):

  1. Load the effect from a ".fx" HLSL file and compile it using "D3DX11CompileEffectFromFile"
  2. Get a copy of the backbuffer using "IDXGISwapChain->GetBuffer" and save it to an "ID3D11Texture2D" object
  3. Get the description of the texture, to have width and height of the screen
  4. Create a new render target and tell the device to use it
  5. Create a fullscreen quad and draw the screen texture on it (using the loaded effect)
  6. Reset the render target and update the backbuffer with the drawn quad
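
A rough sketch of steps 4-6 (an assumption of how the pieces could fit together, not code from the application; error handling omitted):

// Steps 4 and 6: render into the backbuffer through a render target view of it
ID3D11Texture2D *pBackBuffer;
ID3D11RenderTargetView *pBackBufferRTV;
swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (PVOID*)&pBackBuffer);
device->CreateRenderTargetView(pBackBuffer, NULL, &pBackBufferRTV);
context->OMSetRenderTargets(1, &pBackBufferRTV, NULL);

// Step 5: draw the fullscreen quad with the effect applied, sampling the copied
// screen texture; "IDXGISwapChain->Present" then runs as before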

 

The following code for steps one and two already works:


Step 1:

ID3DX11Effect *g_pEffect;

...

HRESULT hr;
ID3DBlob *m_pCompileBuffer = NULL; // initialize: it is checked below even when compilation fails early
ID3DInclude *m_pInclude = new CCustomInclude();

// Load and compile the effect file (the HLSL and FX flag parameters are UINTs, hence 0)
hr = D3DX11CompileEffectFromFile(_T("shader.fx"), NULL, m_pInclude, 0, 0, device, &g_pEffect, &m_pCompileBuffer);
	
if (FAILED(hr))
{
	if (m_pCompileBuffer && m_pCompileBuffer->GetBufferSize() > 0)
	{
	    Log(1, "Error while compiling shader:\r\n\r\n%s\r\n", (char*)m_pCompileBuffer->GetBufferPointer());
	}
	else
	{
	    Log(1, "Error while loading shader");
	}
}

Steps 2 & 3:

ID3D11Texture2D *g_pScreenTexture;

...

// Grab the backbuffer texture from the swapchain
hr = swapchain->GetBuffer(0, __uuidof(g_pScreenTexture), (PVOID*)&g_pScreenTexture);

if (SUCCEEDED(hr))
{
	// Retrieve width, height and format of the screen
	D3D11_TEXTURE2D_DESC m_pDesc;
	g_pScreenTexture->GetDesc(&m_pDesc);
}


 

I'm having problems with applying the effect and drawing the fullscreen quad, though...

 

The vertex struct and layout for the quad are declared as follows:


// Vertex Structure
struct Vertex
{
	public:
		Vertex() { }
		Vertex(float x, float y, float z, float rhw) : pos(x, y, z, rhw) { }
		Vertex(float x, float y, float z, float rhw, float tex1, float tex2) : pos(x, y, z, rhw), tex(tex1, tex2) { }
		Vertex(float x, float y, float z, float rhw, D3DXVECTOR2 tex) : pos(x, y, z, rhw), tex(tex) { }
		Vertex(D3DXVECTOR4 pos, float tex1, float tex2) : pos(pos), tex(tex1, tex2) { }
		Vertex(D3DXVECTOR4 pos, D3DXVECTOR2 tex) : pos(pos), tex(tex) { }
		Vertex(D3DXVECTOR4 pos) : pos(pos) { }

		static const D3D11_INPUT_ELEMENT_DESC layout[];

	private:
		D3DXVECTOR4 pos;
		D3DXVECTOR2 tex;
};

// Vertex Layout
const D3D11_INPUT_ELEMENT_DESC Vertex::layout[] =
{
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 }
};


 

The fullscreen quad is set up like this:


ID3D11Buffer *g_pVertexBuffer;

...

// Create the vertex buffer
Vertex quad[] =
{
	Vertex(D3DXVECTOR4(-1.0f, 1.0f, 0.5f, 1.0f), D3DXVECTOR2(0.0f, 0.0f)),
	Vertex(D3DXVECTOR4(1.0f, 1.0f, 0.5f, 1.0f), D3DXVECTOR2(1.0f, 0.0f)),
	Vertex(D3DXVECTOR4(-1.0f, -1.0f, 0.5f, 1.0f), D3DXVECTOR2(0.0f, 1.0f)),
	Vertex(D3DXVECTOR4(1.0f, -1.0f, 0.5f, 1.0f), D3DXVECTOR2(1.0f, 1.0f))
};

D3D11_BUFFER_DESC m_pVertexDesc = { sizeof(Vertex) * ARRAYSIZE(quad), D3D11_USAGE_DEFAULT, D3D11_BIND_VERTEX_BUFFER, 0, 0 };
D3D11_SUBRESOURCE_DATA m_pVertexData = { quad, 0, 0 }; 
device->CreateBuffer(&m_pVertexDesc, &m_pVertexData, &g_pVertexBuffer);


 

Drawing currently looks similar to this:


// Update vertex declaration
UINT m_iStrides = sizeof(Vertex);
UINT m_iOffsets = 0;
context->IASetVertexBuffers(0, 1, &g_pVertexBuffer, &m_iStrides, &m_iOffsets);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP); // use the D3D11 enum (same value as the D3D10 one)

// Begin drawing
D3DX11_TECHNIQUE_DESC m_pTechDesc;
g_pEffect->GetTechniqueByIndex(0)->GetDesc(&m_pTechDesc);

for (UINT iPass = 0; iPass < m_pTechDesc.Passes; iPass++)
{
	g_pEffect->GetTechniqueByIndex(0)->GetPassByIndex(iPass)->Apply(0, context); // Apply takes UINT flags, not a pointer
	context->Draw(4, 0);
}


 

I'm clearing the render target to red before drawing the quad, but the screen then stays plain red, so sadly it doesn't seem to draw the quad at all...

I don't know if there is any way to check that.

 

Also, a post processing shader normally requires me to pass the screen texture to it, so it can apply the effects to every pixel. The Effects framework provides "ID3DX11Effect->GetVariableByName" and allows one to set a variable to specific data, but it has no definition for a texture. The nearest thing I found is "Variable->AsShaderResource()->SetResource(&MyResource)", but is that really the most efficient way - creating a shader resource view for the screen texture just to pass it to the pixel shader?

 

I'm sorry for the tons of code in this post, but I found it the easiest way to show what I've got so far. In DirectX 9 the steps described earlier worked without any problems, and since no vertex buffer was required there, the whole thing was shorter and easier to achieve.

I hope somebody has done something similar / post processing in D3D11/D3D10 before and can help me out here; I would really appreciate it.

 

Thank you and cheers,

Crosire


It looks like the declared position format in your vertex layout is too small. You are using a 4-component float position, but only declaring a 3-component format. Since the offset to the texture coordinates in your layout is 16 bytes, this is likely the source of your problems. Switching the position to DXGI_FORMAT_R32G32B32A32_FLOAT should help.

 

Did you get any output in the debug window? If your shader was expecting a float4, then your vertex layout would not have matched, and it should have complained about that... Is the debug layer enabled on the device you are using?

Yep, create your device with the D3D11_CREATE_DEVICE_DEBUG flag. We should actually make that a big red blinking sticky in this subforum ;)
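
For instance (a sketch, assuming the device is created through the usual D3D11CreateDeviceAndSwapChain call; only the flags argument matters here):

UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG; // sends pipeline errors and warnings to the debug output
#endif
hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
                                   NULL, 0, D3D11_SDK_VERSION, &swapChainDesc,
                                   &swapchain, &device, NULL, &context);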

The nearest thing I found is "Variable->AsShaderResource()->SetResource(&MyResource)", but is that really the most efficient way - creating a shader resource view for the screen texture just to pass it to the pixel shader?

It's the only way. The non-effect counterpart is the *SSetShaderResources family (in your case PSSetShaderResources), so you need to create a shader resource view of your backbuffer texture.

For this to work you also need to create your swapchain with DXGI_USAGE_SHADER_INPUT, not only DXGI_USAGE_RENDER_TARGET_OUTPUT. Alternatively, you could render your scene first to an offscreen texture created with both D3D11_BIND_RENDER_TARGET and D3D11_BIND_SHADER_RESOURCE.
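
In code, the two alternatives might look like this (a sketch; width, height and format are assumptions and must match your application):

// (a) a swapchain whose backbuffer can also be bound as a shader input
DXGI_SWAP_CHAIN_DESC scd = { 0 };
// ... fill in the usual fields ...
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT | DXGI_USAGE_SHADER_INPUT;

// (b) an offscreen scene texture usable as both target and input
D3D11_TEXTURE2D_DESC td = { 0 };
td.Width = width;
td.Height = height;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // match your backbuffer format
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_DEFAULT;
td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D *pSceneTexture = NULL;
device->CreateTexture2D(&td, NULL, &pSceneTexture);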

Be aware when switching targets and resources that you unbind things first (set NULL): one cannot have a texture bound as a render target and as an input simultaneously; the (debug) pipeline will complain about and undo such an attempt (usually in the non-intended way). This is a bit more of a challenge with the effect framework, since you have to find out which slots are currently used, but you can set the slots explicitly with the HLSL register keyword. A minimal unbind is sketched below.
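
For example (a sketch; slot 0 assumed, i.e. register(t0)):

// Unbind the SRV from the pixel shader stage before binding the texture as a target again
ID3D11ShaderResourceView *pNullSRV = NULL;
context->PSSetShaderResources(0, 1, &pNullSRV);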

Also: Show your shader code, please.


Thank you for your answers already! I have now set the position layout to a float4 using "DXGI_FORMAT_R32G32B32A32_FLOAT", which makes a bit more sense.

 

I'm used to PS 2.0, so the new HLSL syntax is a bit strange to me, and I'm sure there is a mistake in there.

 

The shader is just a simple monochrome testing effect:


Texture2D colorTex : register(t0);
SamplerState _sampler { Filter = MIN_MAG_MIP_POINT; AddressU = Clamp; AddressV = Clamp; };

float4 PostProcess_PS(float3 normal : NORMAL, float2 coord : TEXCOORD0) : SV_TARGET
{
    float4 result = colorTex.Sample(_sampler, coord);
    result.rgb = dot(float3(0.18, 0.41, 0.41), result.rgb);
    result.a = 1.0;
    return result;
}

technique10 PostProcess
{
    pass p0
    {
        SetPixelShader(CompileShader(ps_4_0, PostProcess_PS()));
    }
}


Use technique11, not technique10, in your effect file. This is a nitpick of the effect framework. Also: Where's your vertex shader? You need one. While you're at it, make sure the signatures match: the vertex shader must output the SV_Position system-value semantic.

I recommend testing the post process in a separate application (e.g. by loading a screenshot from your app as the source) so that you can enable the debug layer.


Thanks so far, you've helped me a lot. The quad now draws fine, and the shader / effect gets loaded and executed too.

 

FX:


Texture2D colorTex;
SamplerState colorSampler { AddressU = Clamp; AddressV = Clamp; };

struct VS_INPUT
{
    float4 pos : POSITION;
    float2 txcoord : TEXCOORD0;
};
struct VS_OUTPUT
{
    float4 pos : SV_POSITION;
    float2 txcoord : TEXCOORD0;
};

VS_OUTPUT PostProcess_VS(VS_INPUT IN)
{
    VS_OUTPUT OUT;
	
    OUT.pos = IN.pos;
    OUT.txcoord = IN.txcoord;
	
    return OUT;
}

float4 PostProcess_PS(VS_OUTPUT IN) : SV_TARGET
{
    float4 color;

    //color = colorTex.Sample(colorSampler, IN.txcoord);
    //color.rgb = dot(float3(0.18, 0.41, 0.41), color.rgb);
    color.r = 1.0;
    color.g = 0.0;
    color.b = 0.0;
    color.a = 1.0;

    return color;
}

technique11 PostProcess
{
    pass p0
    {
        SetPixelShader(CompileShader(ps_4_0, PostProcess_PS()));
        SetVertexShader(CompileShader(vs_4_0, PostProcess_VS()));
    }
}


 

The full screen now gets covered in red, as expected. The only thing left is to pass the screen texture to the pixel shader and let it do its job.

 

However, as soon as I uncomment those two sampling lines and comment out the other color code, I just get a black screen. I tried some different shader code and it's always just black; lowering the alpha shows the original image a bit through the now-transparent quad, as long as I disable render target clearing.

 

I'm sending the texture to the shader with this code:

ID3D11ShaderResourceView *m_pResource;
D3D11_SHADER_RESOURCE_VIEW_DESC m_pResourceDesc = { DXGI_FORMAT_R32_FLOAT, D3D10_SRV_DIMENSION_TEXTURE2D };
m_pResourceDesc.Texture2D.MipLevels = 1;
m_pResourceDesc.Texture2D.MostDetailedMip = 0;
device->CreateShaderResourceView(g_pScreenTexture11, &m_pResourceDesc, &m_pResource);

g_pEffect11->GetVariableByName("colorTex")->AsShaderResource()->SetResource(m_pResource);

"g_pScreenTexture11" contains the full screen, checked that via "D3DX11SaveTextureToFile". "g_pEffect11" also links to the effect, the other shader code works perfectly fine. The shader layout is the same as the vertex input layout declared in the C++ code too (as seen earlier), so I'm a bit lost here.

 

Thanks again for all the help so far, it's greatly appreciated. I'm really close to the target now ... just this one little thing.

 

Cheers,

Crosire

Good, you've narrowed it down somewhat: the sampling produces black - at least that's what I get.
 
Your easiest approach to narrowing the problem down further would be a graphics debugger (PIX or the VS 2012 graphics debugger). Check the post-VS values, check whether the texture (the shader resource view, that is) is actually bound, debug pixels, etc.
 
Alternatively:
  • Check the texcoords by outputting float4(IN.txcoord, 0, 1). This should give a black-to-red gradient from left to right and a black-to-green gradient from top to bottom (and yellow at the bottom right).
  • Your sampler is a bit bare for my taste. Really set all the needed values (I'm not sure what the effects framework uses by default); see the sketch after this list. Or set the sampler explicitly to NULL (you should find the corresponding variable type), which will give you the default sampler. If that gives you something useful, work from those values. For a post-process effect you want point sampling, for example.
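
A fully specified point-clamp sampler created on the C++ side might look like this (a sketch; whether the effect pass overwrites slot 0 afterwards depends on your .fx file):

D3D11_SAMPLER_DESC sd = { D3D11_FILTER_MIN_MAG_MIP_POINT };
sd.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
sd.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState *pPointSampler = NULL;
device->CreateSamplerState(&sd, &pPointSampler);
context->PSSetSamplers(0, 1, &pPointSampler); // slot 0 = register(s0)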
Currently I don't see anything obvious - yet ;)


The texcoords are set correctly: I get the expected green/red gradient image with no blue and alpha at 100%.

 

I tried using "D3DX11CreateShaderResourceViewFromFile" to send an existing image to the shader instead, and I get that image drawn on the screen with the effects applied, so the shader itself is fully working. It just doesn't read the data from my texture object (which does contain the screen - I tested that) when I send that one to the shader.

 

In code:

This works:

ID3D11ShaderResourceView *m_pResource;
D3DX11CreateShaderResourceViewFromFile(device, L"tex.bmp", NULL, NULL, &m_pResource, NULL);
g_pEffect11->GetVariableByName("colorTex")->AsShaderResource()->SetResource(m_pResource);

The code from my previous post does not:

ID3D11ShaderResourceView *m_pResource;
D3D11_SHADER_RESOURCE_VIEW_DESC m_pResourceDesc = { DXGI_FORMAT_R32_FLOAT, D3D10_SRV_DIMENSION_TEXTURE2D };
m_pResourceDesc.Texture2D.MipLevels = 1;
m_pResourceDesc.Texture2D.MostDetailedMip = 0;

device->CreateShaderResourceView(g_pScreenTexture11, &m_pResourceDesc, &m_pResource);

g_pEffect11->GetVariableByName("colorTex")->AsShaderResource()->SetResource(m_pResource);

 

After looking at it again, I found some mistakes already (corrected snippet below):

  • D3D11_SRV_DIMENSION_TEXTURE2D instead of D3D10_SRV_DIMENSION_TEXTURE2D (both evaluate to the same value (4), but the D3D11 constant simply makes more sense here)
  • DXGI_FORMAT_R32G32B32A32_FLOAT instead of DXGI_FORMAT_R32_FLOAT (the image obviously isn't made of red only; I want the full RGBA color here)
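
With both corrections applied, the view creation would read:

ID3D11ShaderResourceView *m_pResource;
D3D11_SHADER_RESOURCE_VIEW_DESC m_pResourceDesc = { DXGI_FORMAT_R32G32B32A32_FLOAT, D3D11_SRV_DIMENSION_TEXTURE2D };
m_pResourceDesc.Texture2D.MipLevels = 1;
m_pResourceDesc.Texture2D.MostDetailedMip = 0;

device->CreateShaderResourceView(g_pScreenTexture11, &m_pResourceDesc, &m_pResource);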

Still, it doesn't work with the corrected code. Does anybody see another error in here?

 

And I just have to thank unbird again, you pushed me in the right direction :)

This is really strange. Can you force the debug layer through the control panel? Have you already tried a GPU debugger? I'm definitely out of clues...


I think it might be because the screen texture gets created by the swapchain interface in "IDXGISwapChain->GetBuffer", as I pass a null texture object to it. It then probably gets initialized without "D3D11_BIND_SHADER_RESOURCE", which makes it unusable for the shader resource view object.

 

I tried a different approach now:

Code which gets executed on startup:

// Create screen resource
D3D11_TEXTURE2D_DESC m_pTextureDesc = { m_pDesc.Width, m_pDesc.Height, 1, 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0, 0 };
device->CreateTexture2D(&m_pTextureDesc, NULL, &g_pScreenTexture11);

// Create shader resource view
D3D11_SHADER_RESOURCE_VIEW_DESC m_pResourceDesc = { m_pTextureDesc.Format, D3D11_SRV_DIMENSION_TEXTURE2D };
device->CreateShaderResourceView(g_pScreenTexture11, &m_pResourceDesc, &g_pScreenView11);

Code which gets executed every frame:

ID3D11Resource *m_pBackBuffer;

hr = swapchain->GetBuffer(0, __uuidof(m_pBackBuffer), (LPVOID*)&m_pBackBuffer);

if (SUCCEEDED(hr) && g_pScreenTexture11)
{
	// Update screen texture
	context->CopyResource(g_pScreenTexture11, m_pBackBuffer);

	// Update effect parameters
	g_pEffect11->GetVariableByName("colorTex")->AsShaderResource()->SetResource(g_pScreenView11);

	...
}

 

I thought the shader resource view only stores a pointer to the screen texture, so it would retrieve the updated data when I change the texture object it was initialized with.

 

The code now fails at "context->CopyResource(g_pScreenTexture11, m_pBackBuffer);" though. The texture is just empty.

 

I tried this one too:

ID3D11Resource *tex;
g_pScreenSurface11->GetResource(&tex);
D3DX11SaveTextureToFile(context, g_pScreenTexture11, D3DX11_IFF_BMP, L"tex.bmp");

But it crashes the application at the second line already, and I have no idea why.

 

I'm going to try to put together a quick testing program, as I cannot attach PIX to any other software with my project applied: PIX tries to hook the DirectX exported functions, which my hook has already done, so it just crashes.


I thought the shader resource view only stores a pointer to the screen texture, so it would retrieve the updated data when I change the texture object it was initialized with.

Nope, the view is tightly bound to the resource; you can't change that after creation.
 

The code now fails at "context->CopyResource(g_pScreenTexture11, m_pBackBuffer);" though. The texture is just empty.

If you look at the docs of this function, you will see that you have to check the compatibility of the source and target resources. I already suspected some format problem: is your backbuffer really 32F, for example?
 

I tried this one too:
*snip*
But it crashes the application at the second line already, and I have no idea why.

BMP can't cope with float formats, use DDS instead.

I'm going to try to put together a quick testing program, as I cannot attach PIX to any other software with my project applied: PIX tries to hook the DirectX exported functions, which my hook has already done, so it just crashes.

That leaves bare logging, hopefully. Dump the descriptions of the resources and views in question using GetResource and GetDesc and compare them, along the lines of the sketch below.
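
For instance (a sketch reusing the Log function and names from your earlier snippets):

// Compare the backbuffer and the copy side by side
ID3D11Texture2D *pBB = NULL;
m_pBackBuffer->QueryInterface(__uuidof(ID3D11Texture2D), (PVOID*)&pBB);

D3D11_TEXTURE2D_DESC bbDesc, texDesc;
pBB->GetDesc(&bbDesc);
g_pScreenTexture11->GetDesc(&texDesc);

Log(1, "backbuffer: format=%d samples=%d | copy: format=%d samples=%d",
    bbDesc.Format, bbDesc.SampleDesc.Count, texDesc.Format, texDesc.SampleDesc.Count);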


If you look at the docs of this function, you will see that you have to check the compatibility of the source and target resources. I already suspected some format problem: is your backbuffer really 32F, for example?

And that is the problem. DirectX 9 had the "StretchRect" function, which did the job. DirectX 11 is missing any real replacement. I don't see how I can copy the backbuffer resource to my texture when it has a different format?!

 

"CopyResource" and "ResolveSubresource" both require compatible formats. If I set up my texture with the same format as the backbuffer, those work of course, but the Shader View is not created properly again (see below).

Whatever I'm trying, either one or the other thing work, but not both together ...

 

BMP can't cope with float formats, use DDS instead.

It's not the save-texture line that makes it crash, it's "GetResource", and that's because the shader resource view object is NULL even though it was created earlier (I got my testing application working and can successfully debug it with the VS 11 graphics debugger).

 

I tried creating it every frame too:

SAFE_RELEASE(g_pScreenSurface11);
device->CreateShaderResourceView(g_pScreenTexture11, NULL, &g_pScreenSurface11);
ID3D11Resource *tex;
g_pScreenSurface11->GetResource(&tex);

It crashes; the view is still NULL. It works when I create the texture from a file instead of using the backbuffer, so I assume either the format is the issue or the backbuffer is missing the "D3D11_BIND_SHADER_RESOURCE" flag (which is probably the case).

The solution would be to create a texture with that flag set and copy the backbuffer contents onto it. But all my attempts at that have failed so far, as shown above.

 

I hope I'm not too annoying here :)

Yeah, you need the resource copy, and you need D3D11_BIND_SHADER_RESOURCE on your target texture to be able to sample from it. I don't see where you fail, so I'll just reiterate (sketch after the list):

- Your backbuffer is given, so D3D11_BIND_SHADER_RESOURCE is not available on it
- Create a compatible texture: same format (and multisample type!), same dimensions, this time with D3D11_BIND_SHADER_RESOURCE
- Create the shader resource view thereof
- Use it as the source for your post-process
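
Put together (a sketch; it assumes the backbuffer was fetched as an ID3D11Texture2D and is not multisampled - otherwise you need ResolveSubresource instead of CopyResource):

// Startup: clone the backbuffer description, changing only what is needed
D3D11_TEXTURE2D_DESC desc;
pBackBuffer->GetDesc(&desc); // same format, size and sample count as the backbuffer
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;

device->CreateTexture2D(&desc, NULL, &g_pScreenTexture11);
device->CreateShaderResourceView(g_pScreenTexture11, NULL, &g_pScreenView11);

// Every frame, before drawing the post-process quad:
context->CopyResource(g_pScreenTexture11, pBackBuffer);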


Forgive me, I was just stupid. Everything is working now!

 

BIG thanks to you :)
