
DirectX11 and Deferred rendering

This topic is 2444 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.



I don't mean to create yet ANOTHER basic Deferred Rendering thread, but after searching this forum (and google) I felt it was necessary.

I am looking for a tutorial that walks me step by step through setting up a deferred renderer using DirectX11. I have written a forward renderer in DirectX9 (I used DirectX9 for about four years) and have been using DirectX11 for several months. I have seen a few code samples for a deferred renderer in DirectX11; however, nothing that really shows step by step how to set one up (I have of course done basic rendering and render-to-texture already).

If there is a step-by-step guide for DirectX10, I would of course follow that, but in that case, could someone please tell me what improvements DirectX11 brings over DirectX10 in the context of deferred rendering?

This is one of the resources I have found in my research; however, his code is in DirectX10.1, which is fine (if there is nothing out there with a DirectX11 tutorial), but he does not have any tutorial on how he set everything up (at least, from what I saw).

Couldn't you follow a tutorial which outlines the concept in DirectX9, then apply what you've learnt to DirectX11?

That's what I did. Works a treat. Also, updating all the code to work with DirectX11 ensured I knew exactly what was going on, rather than simply copy and pasting. You should find areas that can be optimized also, as most tutorials are designed to be understood rather than written for performance.

If you understand the concept of deferred shading, you should be able to implement it using just the MSDN.

Well, the code difference between DirectX10 and 11 is very small. The main difference is that the interfaces and function calls have an 11 in their names instead of a 10. If you get the DirectX10 version working, then just convert it to 11; the conversion should take about 30 minutes. But do not convert before you even get the DirectX10 version working. That's asking for compounded problems.


Quote:
Original post by simotix
If there is something that shows a step by guide for DirectX10, I would of course follow that, but if there is, could someone please tell me what improvements DirectX11 has given in the context of Deferred Rendering over DirectX10?


Off the top of my head...

1. You can create "read-only" depth stencil views. This is useful if you sample the depth buffer directly for reconstructing position. In D3D10, you couldn't have a depth stencil buffer bound for depth testing AND sample it as a shader resource view simultaneously. Now you can do it if you create a read-only view, and make sure that depth/stencil writes are disabled.
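For reference, a minimal sketch of what creating such a read-only view might look like (the device, texture, and output variables here are assumed; only the flags field is the point):

```cpp
// Hypothetical sketch: creating a read-only depth stencil view in D3D11.
// With the READ_ONLY flags set, this view can stay bound for depth testing
// while an SRV over the same texture is sampled in a shader.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
dsvDesc.Flags         = D3D11_DSV_READ_ONLY_DEPTH | D3D11_DSV_READ_ONLY_STENCIL;
HRESULT hr = device->CreateDepthStencilView(depthTexture, &dsvDesc, &readOnlyDSV);
// Bind readOnlyDSV with OMSetRenderTargets, and make sure the bound
// depth-stencil state has DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO.
```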

2. This is actually a 10.1 feature, but still useful in 11: you can run pixel shaders at per-sample frequency. Handy for enabling MSAA with a light prepass/deferred lighting setup, since you need to calculate and store lighting at sample resolution.

3. You can take SV_Coverage as an input for the pixel shader. This lets you avoid shading subsamples where the depth/stencil or coverage test failed when MSAA is enabled. Otherwise you would have to run the shader at per-sample frequency. SV_Coverage also gives you a more direct way of detecting edge pixels, rather than using the centroid sampling trick demonstrated in one of Humus's samples.

4. Another 10.1 feature: you can sample MSAA depth buffers.

5. You can use compute shaders to perform your lighting on a fixed grid. This is the fastest way to shade many (hundreds) of lights. See this sample.
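A heavily simplified sketch of that idea (the Light struct, buffer layout, and the LightIntersectsTile/ShadeLight helpers are all made up for illustration, not taken from the linked sample): each thread group handles one screen tile, cooperatively culls the light list against the tile, then shades its pixels with only the survivors.

```hlsl
// Illustrative sketch of tiled deferred lighting in a compute shader.
#define TILE_SIZE 16
#define MAX_LIGHTS_PER_TILE 256

struct Light { float3 position; float radius; float3 color; float pad; };

StructuredBuffer<Light> gLights  : register(t0);
RWTexture2D<float4>     gOutput  : register(u0);
cbuffer LightParams : register(b0) { uint gNumLights; }

groupshared uint sTileLightCount;
groupshared uint sTileLightIndices[MAX_LIGHTS_PER_TILE];

[numthreads(TILE_SIZE, TILE_SIZE, 1)]
void CSMain(uint3 groupId : SV_GroupID, uint3 dtid : SV_DispatchThreadID,
            uint groupIndex : SV_GroupIndex)
{
    if (groupIndex == 0) sTileLightCount = 0;
    GroupMemoryBarrierWithGroupSync();

    // Each thread tests a strided subset of the lights against this tile
    // (tile frustum construction omitted) and appends survivors.
    for (uint i = groupIndex; i < gNumLights; i += TILE_SIZE * TILE_SIZE)
    {
        if (LightIntersectsTile(gLights[i], groupId.xy)) // assumed helper
        {
            uint slot;
            InterlockedAdd(sTileLightCount, 1, slot);
            sTileLightIndices[slot] = i;
        }
    }
    GroupMemoryBarrierWithGroupSync();

    // Shade this pixel with only the lights that touch its tile.
    float4 result = 0;
    for (uint j = 0; j < sTileLightCount; ++j)
        result += ShadeLight(gLights[sTileLightIndices[j]], dtid.xy); // assumed helper
    gOutput[dtid.xy] = result;
}
```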

Quote:
Original post by Pthalicus
Couldn't you follow a tutorial which outlines the concept in DirectX9, then apply what you've learnt to DirectX11?


I figured that the differences between DirectX9 and DirectX11 were so great that following a DirectX9 tutorial would not carry over well. Are there any great DirectX9 tutorials for setting up a deferred renderer?

Quote:
Original post by smasherprog
Well, the code difference between DirectX10 and 11 is very small. The main difference is that the interfaces and function calls have an 11 in their names instead of a 10. If you get the DirectX10 version working, then just convert it to 11; the conversion should take about 30 minutes. But do not convert before you even get the DirectX10 version working. That's asking for compounded problems.



The problem is that I do not have a DirectX10 renderer written, which I could do given some extra time though.

I am still trying to collect different resources and code on implementing deferred rendering in DirectX11 (or DirectX10). So far I have found two resources for code: the one by Humus (although I cannot compile it, as all the framework code is not available) and the one provided by Intel (which uses compute shaders). I have seen and been reading several articles on the concepts, such as the G-Buffer, but there are several concepts that I feel would be worth seeing done in practice so I may learn more about them. Does anyone have any other available code samples that implement deferred shading? I would rather it be a code base made specifically for deferred shading, rather than some larger engine I have to rip through to understand.

http://www.catalinzima.com/tutorials/deferred-rendering-in-xna/

Check it out. It's in C#/XNA, which I had never used before (the shaders are in HLSL), but I could easily port it to DirectX9 and later DirectX10.

From reading that and much of the other sources I have found on deferred rendering, I am now trying to piece together a deferred rendering solution in DirectX11. Unfortunately this is going to raise a lot of questions, as I have not found anything native to the DirectX11 or DirectX10 API.

The first part I am having an issue with is the creation of render targets. From what I can see, I will need five render targets.

1) Depth render target in D24S8 format
2) Normal/Specular render target in A8R8G8B8 format
3) Diffuse render target in a X8R8G8B8 format
4) Specular render target in a X8R8G8B8 format
5) Emissive in a X8R8G8B8 format (I currently do not use emissive anywhere, so I may ditch this).

While I was able to translate the depth render target, I am unsure of what "A8R8G8B8" is in DirectX11. From what I can tell there is a "DXGI_FORMAT_R16G16B16A16_FLOAT" which seems like it would be too large for what is needed, and a "DXGI_FORMAT_R8G8B8A8_TYPELESS". It seems as if "DXGI_FORMAT_R8G8B8A8_TYPELESS" is the closest to this and I will need to resolve the type somehow.

D24S8 = DXGI_FORMAT_D24_UNORM_S8_UINT

Are these the correct render target types?

I have been continuing my work on deferred rendering in DirectX11 and I have come across a few questions.

1) I know that in DirectX9 there was an issue with mapping texels to pixels, however, I was under the impression that with DirectX11 you did not have to do that anymore. Is this true? The reason I am asking is because I saw it being done here http://www.catalinzima.com/tutorials/deferred-rendering-in-xna/directional-lights/ (which I am following to try and put together a Deferred Renderer).

2) In DirectX9 and XNA you need to call Begin/End on an Effect; when I use shaders in DirectX11 I don't do this. Do I need to do anything besides set the shaders?

3) Is there a way to view the depth stencil texture that is created? For render target textures that I will use as a shader resource, I create them with "BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE". However, when I go to create the depth stencil texture with "BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE", I get an HRESULT failure from CreateTexture2D about invalid arguments.

Quote:
Original post by simotix
I have been continuing my work on deferred rendering in DirectX11 and I have come across a few questions.

1) I know that in DirectX9 there was an issue with mapping texels to pixels, however, I was under the impression that with DirectX11 you did not have to do that anymore. Is this true? The reason I am asking is because I saw it being done here http://www.catalinzima.com/tutorials/deferred-rendering-in-xna/directional-lights/ (which I am following to try and put together a Deferred Renderer).

XNA uses Direct3D9
Quote:

2) In DirectX9 and XNA you need to call Begin/End on an Effect; when I use shaders in DirectX11 I don't do this. Do I need to do anything besides set the shaders?

The effects framework does some work in Begin/End, such as setting the pixel shader, setting the vertex shader, setting some states, etc.

If you're not using the effects framework, then you need to do everything it would normally do in order to get your shaders working.
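Roughly, the per-draw state setup that Begin/End and the effect passes were issuing for you looks something like this (a sketch; all the objects are assumed to have been created up front):

```cpp
// Sketch: the state setup the effects framework hides. The variables
// (inputLayout, vs, ps, constantBuffer, sampler, blendState,
// depthStencilState) are assumed to exist already.
context->IASetInputLayout(inputLayout);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->VSSetShader(vs, NULL, 0);
context->PSSetShader(ps, NULL, 0);
context->VSSetConstantBuffers(0, 1, &constantBuffer);
context->PSSetSamplers(0, 1, &sampler);
context->OMSetBlendState(blendState, NULL, 0xFFFFFFFF);
context->OMSetDepthStencilState(depthStencilState, 0);
// ...then issue the draw call.
```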
Quote:

3) Is there a way to view the depth stencil texture that is created? For render target textures that I will use as a shader resource, I create them with "BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE". However, when I go to create the depth stencil texture with "BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE", I get an HRESULT failure from CreateTexture2D about invalid arguments.

PIX. Or render it out to a quad on your screen with a custom pixel shader, quite trivial really.

In D3D9 pixel centers are at integer coordinates, and texels are at 0.5. So when converting between pixels and texels you need to offset by 0.5. In D3D10/D3D11 they're both at 0.5 intervals, so you don't need to do an offset.
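The difference can be seen with a little arithmetic (a self-contained illustration, not D3D API code):

```cpp
#include <cassert>
#include <cmath>

// Texel i's center in normalized UV space (the same in all API versions).
float texelCenterUV(int i, int size) { return (i + 0.5f) / size; }

// UV a full-screen quad hands pixel i in D3D9, where pixel centers sit at
// integer coordinates: off by half a texel unless the quad is shifted.
float pixelUV_D3D9(int i, int size) { return (float)i / size; }

// The same quad after the classic D3D9 fix of offsetting positions by -0.5 pixels.
float pixelUV_D3D9_offset(int i, int size) { return (i + 0.5f) / size; }

// D3D10/D3D11: pixel centers are already at half-integers, so no offset is needed.
float pixelUV_D3D11(int i, int size) { return (i + 0.5f) / size; }
```

With an 8-pixel-wide target, pixel 3 in D3D10/11 lands exactly on texel 3's center, while in un-offset D3D9 it misses by half a texel.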

If you're creating a texture for a depth stencil buffer that will be used as a shader resource view, you can't create the texture with a depth/stencil format like DXGI_FORMAT_D24_UNORM_S8_UINT (since that format can't be used for SRVs). Instead you have to use an equivalent typeless format. So if you wanted DXGI_FORMAT_D24_UNORM_S8_UINT for your depth buffer, you would use these formats:

Texture2D: DXGI_FORMAT_R24G8_TYPELESS
DSV: DXGI_FORMAT_D24_UNORM_S8_UINT
SRV: DXGI_FORMAT_R24_UNORM_X8_TYPELESS

Quote:
Original post by Washu
XNA uses Direct3D9


I understand that XNA uses Direct3D9, but I am unsure if you still need to map texels to pixels in DirectX11. I thought I remembered reading that you no longer have to ...

Quote:
Original post by Washu
The effects framework does some work in Begin/End, such as setting the pixel shader, setting the vertex shader, setting some states, etc.

If you're not using the effects framework, then you need to do anything it would normally do in order to get your shaders working.


In DirectX11 I just use the following call for setting shaders, and I have never had shaders not work. Am I perhaps getting false positives?

m_pD3D11DeviceContext->PSSetShader(((D3D11PixelShader*)pPixelShader)->GetPixelShader(), NULL, 0);

Quote:
Original post by Washu
PIX. Or render it out to a quad on your screen with a custom pixel shader, quite trivial really.



Good suggestion, I did not think of just writing them all to a quad. Isn't it also possible to make another render target and output to that?

Quote:
Original post by MJP
In D3D9 pixel centers are at integer coordinates, and texels are at 0.5. So when converting between pixels and texels you need to offset by 0.5. In D3D10/D3D11 they're both at 0.5 intervals, so you don't need to do an offset.

If you're creating a texture for a depth stencil buffer that will be used as a shader resource view, you can't create the texture with a depth/stencil format like DXGI_FORMAT_D24_UNORM_S8_UINT (since that format can't be used for SRVs). Instead you have to use an equivalent typeless format. So if you wanted DXGI_FORMAT_D24_UNORM_S8_UINT for your depth buffer, you would use these formats:

Texture2D: DXGI_FORMAT_R24G8_TYPELESS
DSV: DXGI_FORMAT_D24_UNORM_S8_UINT
SRV: DXGI_FORMAT_R24_UNORM_X8_TYPELESS


I am attempting to do it this way and I noticed that it "works" (I don't notice any rendering differences compared to using just DXGI_FORMAT_D24_UNORM_S8_UINT for the Texture2D as well). I did not see the type specified anywhere, though; I thought it was supposed to be specified in the shader?

My attempt to fix this was to do the following
Texture2D<uint> depthMap : register( t0 );

When I did this however I would get the following error when compiling the shader
"error X4582: cannot sample from non-floating point texture formats.".

Do I need to be setting a type, and if so, what should it be? I thought it should be UINT. If so, how should I be sampling the texture?

I believe that error is referring to the fact that you can't use a sampler on a uint texture. You need to use the resource object's .Load method instead, which directly loads the data rather than sampling it.

"UNORM" means the integer stored in the texture will automatically be converted to a floating point value of the [0,1] range. So in your shader you don't need to use the "uint" type declaration for your texture (you can use "float" if you want, but it's not necessary for a regular Texture2D). Then just sample the value like the normal texture and you'll get a depth value in the [0, 1] range.

I have been working on deferred rendering for some time now and I have managed to generate the color (texture) map and normal map "correctly" so far. However, when I go to apply a directional light, I only get a grey texture. I know that by the final image the color and normal maps are "correct", because I render them to the screen (I also checked in PIX). In my directional light shader, when I try to sample the color map I only get black, and when I try to sample the normal map I only get grey. Does anyone know what may be causing this? I am going to post my step-by-step process below, in case I am doing something wrong elsewhere. If I am doing something wrong, or could be doing something better, please let me know.

1) Clear the render target and depth buffer

float ClearColor[4] = { 0.0f, 0.0f, 1.0f, 1.0f };
m_pD3D11DeviceContext->ClearRenderTargetView( m_pD3D11RenderTargetView, ClearColor );
m_pD3D11DeviceContext->ClearDepthStencilView( m_pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0 );



2) Set the G Buffer

m_pD3D11DeviceContext->OMSetRenderTargets(2, m_RenderTargetViews, m_pDepthStencilView);



3) Clear the G Buffer. I do this by Disabling the Z buffer, and setting some default values to the render targets and rendering it to a full screen quad (the shader is below).

m_pDeviceInterface->TurnZBufferOff();
m_pShaderSetter->SetVSShader(m_pShaderManager->GetVertexShaderByIndex(4));
m_pShaderSetter->SetPSShader(m_pShaderManager->GetPixelShaderByIndex(4));
m_FullScreenQuad->Render(m_pRenderUtility);



4) Draw the scene contents

m_pDeviceInterface->TurnZBufferOn();
m_MainScene->RenderStreams(m_pRenderUtility, m_pShaderManager, m_pShaderSetter, m_pTextureManager);



5) Set all the render targets on the device context to NULL

ID3D11RenderTargetView* nullTargets[2] = { NULL, NULL };
m_pD3D11DeviceContext->OMSetRenderTargets(2, nullTargets, m_pDepthStencilView);



6) Start rendering the directional light. Turn off the Z buffer, set the light render target, and set variables related to the directional light shader such as textures (shader is below).

{
m_pDeviceInterface->TurnZBufferOff();
// Set the render target to the light render target
m_pD3D11DeviceContext->OMSetRenderTargets(1, &m_RenderTargetViews[2], m_pDepthStencilView);

// Draw a directional light to the full screen quad.
{
// Clear the background to black
float ClearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
m_pD3D11DeviceContext->ClearRenderTargetView( m_RenderTargetViews[2], ClearColor );

// Use additive blending. AlphaBlendOp = D3D11_BLEND_OP_ADD, SourceBlend = D3D11_BLEND_ONE, DestBlend = D3D11_BLEND_ONE
// blending will be needed for multiple lights
m_pRenderUtility->TurnBlendingOn();

// Set directional light shaders
m_pShaderSetter->SetVSShader(m_pShaderManager->GetVertexShaderByIndex(7));
m_pShaderSetter->SetPSShader(m_pShaderManager->GetPixelShaderByIndex(7));

m_pShaderSetter->UpdateDirectionalLightInformation(D3DXVECTOR3(0.0f, -1.0f, 0.0f), Color(1.0f, 0.0f, 0.0f, 1.0f));
m_pShaderSetter->SetDirectionalLightInformation();

// Get the view proj matrix, then invert it
const D3DXMATRIX*const viewProjMatrix = m_MainScene->GetActiveCamera()->GetViewProjMatrix();
D3DXMATRIX invertexViewProjMatrix;
D3DXMatrixInverse(&invertexViewProjMatrix, NULL, viewProjMatrix);
m_pShaderSetter->UpdateInvertViewProjInformation(invertexViewProjMatrix);
m_pShaderSetter->SetInvertViewProjInformation(1);

Float3 tempCamPosition = m_MainScene->GetActiveCamera()->GetPosition();
D3DXVECTOR3 camPosition(tempCamPosition.x, tempCamPosition.y, tempCamPosition.z);
m_pShaderSetter->SetVSCameraPosition(camPosition, 2);

// Set the texture maps
m_pD3D11DeviceContext->PSSetShaderResources(0, 1, &m_ColorTextureShaderResourceView);
m_pD3D11DeviceContext->PSSetShaderResources(1, 1, &m_NormalTextureShaderResourceView);

m_FullScreenQuad->Render(m_pRenderUtility);

// Blending is no longer needed
m_pRenderUtility->TurnBlendingOff();
}
}



7) Set the back buffer to the original default render target (the regular one, not the color, normal, depth or light target). This step may be unnecessary, as I have not begun to merge my render targets yet.

8) Render the small rectangles on my screen to display textures.

9) Present to the swap chain.

ClearGBuffer shader

struct VertexShaderInput
{
float4 Position : POSITION;
};

struct VertexShaderOutput
{
float4 Position : SV_POSITION;
};

struct PixelShaderOutput
{
float4 Color : SV_Target0;
float4 Normal : SV_Target1;
float4 Depth : SV_Target2;
};

VertexShaderOutput VS(VertexShaderInput input)
{
VertexShaderOutput output;
output.Position = input.Position;
return output;
}

PixelShaderOutput PS(VertexShaderOutput input)
{
PixelShaderOutput output;
output.Color.rgba = 0.0f;

output.Normal.rgb = 0.5f;
output.Normal.a = 0.0f;
output.Depth = 1.0f;
return output;
}



Directional light shader

Texture2D colorMap : register( t0 );
Texture2D normalMap : register( t1 );
Texture2D depthMap : register( t2 );

SamplerState sampLinearClamp : register( s0 );
SamplerState sampPointClamp : register( s1 );

cbuffer DirectionalLight : register ( b0 )
{
float3 LightDirection;
float3 LightColor;
}

cbuffer InvertViewProj: register ( b1 )
{
matrix InvertViewProjection;
}

cbuffer CameraPosition : register ( b2 )
{
float3 CameraPosition;
}

struct VertexShaderInput
{
float4 Position : POSITION;
float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
float4 Position : SV_POSITION;
float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output = (VertexShaderOutput)0;
output.Position = input.Position;
output.TexCoord = input.TexCoord;
return output;
};

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
return normalMap.Sample( sampPointClamp, input.TexCoord );
}



As you can see, there is a lot going on here. Below is the result I currently get when I render all of this. The bottom-right rectangle is all grey; according to the code above (or at least, what I am trying to do), it should be the same as the top-left rectangle, as I am just trying to display the normal map (for testing reasons).

Quote:
Original post by simotix
In my Directional Light shader I will try to sample the color map and I will only get black, and when I try to sample the normal map I will only get grey. Does anyone know what may be causing this?


Reading this and looking at your screenshots it seems like the texture coordinates for your full screen quad are wrong.

Quote:
Original post by adt7
Quote:
Original post by simotix
In my Directional Light shader I will try to sample the color map and I will only get black, and when I try to sample the normal map I will only get grey. Does anyone know what may be causing this?


Reading this and looking at your screenshots it seems like the texture coordinates for your full screen quad are wrong.


That did turn out to be the problem. I fixed the texture coordinates and I was able to display the color and normal map just like I was earlier.

I did however then try to sample the depth map with the following code, and it keeps returning just 0, 0, 0, 0. This can't be a texture coordinate problem, because I can read the color and normal maps just fine.


return depthMap.Sample( sampLinearClamp, input.TexCoord );


I took a look at the depth map texture in PIX and it appears to be valid. I also tried to read just the .r value, and that was zero. Are there any clues as to what could be causing this?

Depth map in PIX


It appears that this is related to the way I either create the shader resource view or use it in the shader. I tried to display it with a basic texture shader (the one I also use for the small rectangles on the side) and it gives nothing but 0, 0, 0, 0.

This is how I create the depth buffer and the shader resource view; are there any issues with this?


// Create depth stencil texture
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory( &descDepth, sizeof(descDepth) );
descDepth.Width = m_usRenderWidth;
descDepth.Height = m_usRenderHeight;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
//descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
hr = m_pD3D11Device->CreateTexture2D( &descDepth, NULL, &m_pDepthStencil );
if( FAILED( hr ) )
return false; // false or hr

// Create the depth stencil view
D3D11_DEPTH_STENCIL_VIEW_DESC descDSV;
ZeroMemory( &descDSV, sizeof(descDSV) );
descDSV.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
descDSV.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
hr = m_pD3D11Device->CreateDepthStencilView( m_pDepthStencil, &descDSV, &m_pDepthStencilView );
if( FAILED( hr ) )
return false; // false or hr

{
// Setup the description of the shader resource view
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;

// Create the shader resource view.
hr = m_pD3D11Device->CreateShaderResourceView(m_pDepthStencil,
&shaderResourceViewDesc, &m_DepthTextureShaderResourceView);

if( FAILED(hr) )
return false;
}



Also, here is a basic way that I use it in a shader.


Texture2D txDiffuse : register( t0 );
SamplerState samLinear : register( s0 );

cbuffer ModelViewInfo : register( b0 )
{
matrix World;
matrix View;
matrix Projection;
}

struct VS_INPUT
{
float4 Pos : POSITION;
float2 Tex : TEXCOORD0;
};

struct PS_INPUT
{
float4 Pos : SV_POSITION;
float2 Tex : TEXCOORD0;
};

PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output = (PS_INPUT)0;
output.Pos.w = 1.0f;
output.Pos = mul( input.Pos, World );
output.Pos = mul( output.Pos, View );
output.Pos = mul( output.Pos, Projection );
output.Tex = input.Tex;

return output;
}


float4 PS( PS_INPUT input) : SV_Target
{
return txDiffuse.Sample( samLinear, input.Tex );
}


According to PIX, your depth buffer contains values close to 1.0 for the red channel. This is typical for a perspective projection, where depth is typically in the 0.9-1.0 range. Try moving that slider to the right, about 9/10ths of the way across.

Also you have debug messages, possibly notifying you about errors. Click on the Output Log tab to see them.

Quote:
Original post by MJP
According to PIX, your depth buffer contains values close to 1.0 for the red channel. This is typical for a perspective projection, where depth is typically in the 0.9-1.0 range. Try moving that slider to the right, about 9/10ths of the way across.


Ah, that is good to know, I am able to inspect it better now, thank you.

Quote:
Original post by MJP
Also you have debug messages, possibly notifying you about errors. Click on the Output Log tab to see them.


Unfortunately I was reading this warning wrong; I thought it was referring to a different instance.

When I set the DepthTextureShaderResourceView I get the following warning. I understand why it is happening, but I don't see how I can avoid it. It is the shader resource view I created in my previous post, from the same texture as my depth stencil view.

I am setting the light map render target and depth stencil view like so


m_pD3D11DeviceContext->OMSetRenderTargets(1, &m_RenderTargetViews[2], m_pDepthStencilView);


However, I am then setting the DepthTextureShaderResourceView, which is created from the same texture used to create m_pDepthStencilView. How could I avoid doing this?

This warning
D3D11: WARNING: ID3D11DeviceContext::PSSetShaderResources: Resource being set to PS shader resource slot 2 is still bound on output! Forcing to NULL. [ STATE_SETTING WARNING #7: DEVICE_PSSETSHADERRESOURCES_HAZARD ]

comes after this line is called


m_pD3D11DeviceContext->PSSetShaderResources(2, 1, &m_DepthTextureShaderResourceView);
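For what it's worth, the usual fix for this hazard is to make sure the resource is no longer bound on the output-merger at the moment the SRV is set, for example (a sketch reusing the variable names from this thread):

```cpp
// Unbind the depth buffer from the output-merger before sampling it...
m_pD3D11DeviceContext->OMSetRenderTargets(1, &m_RenderTargetViews[2], NULL);
m_pD3D11DeviceContext->PSSetShaderResources(2, 1, &m_DepthTextureShaderResourceView);
// ...or, alternatively, create the DSV with the D3D11_DSV_READ_ONLY_DEPTH
// flag and keep it bound; D3D11 then permits simultaneous sampling.
```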

