# [DX11] Unbinding render target for use in next pass


## Recommended Posts

I'm running a shader that writes to both the back buffer and a render target. I would then like to run a second shader that uses that render target as an input texture. This works fine in debug mode, but not in release mode. I have a feeling that I am not correctly unbinding the render target after the first pass. In DX9 you would call pDevice->SetRenderTarget( 1, NULL ) to unbind a render target. Is there a DX11 equivalent?

I am currently doing the following:

```cpp
// Set the back buffer and the one render target
pDeviceContext->OMSetRenderTargets( 2, rTargets, pDepthStencilView );

// Draw geometry

// Set the back buffer as the only target ( the render target from the step above will be used as an input texture )
pDeviceContext->OMSetRenderTargets( 1, &pBackBufferView, pDepthStencilView );

// Do stuff...the input texture is blank in release mode...okay in debug mode.
```

##### Share on other sites
In your case, you have to do this:

```cpp
ID3D11RenderTargetView* rTargets[2] = { pBackBufferView, NULL };
pDeviceContext->OMSetRenderTargets( 2, rTargets, pDepthStencilView );
```

Otherwise the second render target will remain bound, since you're specifying that you only want to set the first slot and not the second.
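If you'd rather not track how many slots were previously bound, you can also clear every OM slot in one call by passing a null array across the full slot count (a sketch, not from the original posts; it reuses the context and depth-stencil names from this thread):

```cpp
// Null out all OM render target slots so no stale target survives the pass.
ID3D11RenderTargetView* nullRTVs[D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT] = {};
pDeviceContext->OMSetRenderTargets( D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT,
                                    nullRTVs, pDepthStencilView );
```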

##### Share on other sites

> In your case, you have to do this:
>
> ```cpp
> ID3D11RenderTargetView* rTargets[2] = { pBackBufferView, NULL };
> pDeviceContext->OMSetRenderTargets( 2, rTargets, pDepthStencilView );
> ```
>
> Otherwise the second render target will remain bound, since you're specifying that you only want to set the first slot and not the second.

Thank you, I didn't realize that was how it worked.

I'm still having the same issue even with that fix, but at least that is one less thing that is wrong. Are there any other things I need to keep in mind when it comes to using render targets as input textures?

##### Share on other sites

Are there any other things I need to keep in mind when it comes to using render targets as input textures?

Make sure you create the render target with the D3D11_BIND_SHADER_RESOURCE flag
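A minimal sketch of a texture description carrying both bind flags, as suggested above (the dimensions, format, and variable names here are illustrative assumptions, not from the original posts):

```cpp
// Texture usable both as a render target and as a shader input.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;    // assumed dimensions
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* pTexture = NULL;
HRESULT hr = pDevice->CreateTexture2D( &desc, NULL, &pTexture );
// Then create an ID3D11RenderTargetView and an ID3D11ShaderResourceView
// over pTexture for the two usages.
```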

##### Share on other sites

> Are there any other things I need to keep in mind when it comes to using render targets as input textures?
>
> Make sure you create the render target with the D3D11_BIND_SHADER_RESOURCE flag

Yep, I have them set with that flag and D3D11_BIND_RENDER_TARGET. So there is nothing I need to do after running the first shader and before starting the second except call OMSetRenderTargets and make sure my input texture is not on that list? Maybe I'm barking up the wrong tree... why does release mode have to be so picky.

##### Share on other sites
Well let's back up a bit here. What does "does not work" mean? Also, are you creating the device with the DEBUG flag? If you do that, you will get warning/error messages in your debug output about runtime issues.

##### Share on other sites

> Well let's back up a bit here. What does "does not work" mean? Also, are you creating the device with the DEBUG flag? If you do that, you will get warning/error messages in your debug output about runtime issues.

Yeah, I have the debug flag set and there are no warnings or errors. Basically what I'm trying to do is draw an object to the back buffer and my render target at the same time. I am then attempting to draw my render target to the screen on a textured quad ( so I can see what was drawn to the render target ).

I've been doing some more testing, and in release mode the textured quad does display the render target's clear color. But it is only displaying the clear color; none of the geometry that I drew to the render target is showing. Here is the basic outline of my code; I can post more detailed code if that would help:

```cpp
// Clear the back buffer.
pDeviceContext->ClearRenderTargetView( pBackBufferView, clearColor );

// Clear the depth buffer.
pDeviceContext->ClearDepthStencilView( pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0 );

// Set the render targets ( slot 0 contains the back buffer, slot 1 is my render target texture )
pDeviceContext->OMSetRenderTargets( numRenderTargets, rTargets, pDepthStencilView );

// Clear the render targets ( in this case just one )
for( u32 i = 0; i < numRenderTargets; ++i )
    pDeviceContext->ClearRenderTargetView( rTargets[i], clearColor );

// Load vertex and index buffers with geometry

// Draw the geometry using a simple shader that colors the objects one color
pDeviceContext->IASetInputLayout( pLayout );
pDeviceContext->DrawIndexed( indexCount, 0, 0 );
```

```hlsl
// Pixel shader
cbuffer PixelBuffer {
    float4 color;
};

struct VS_OUTPUT {
    float4 position : SV_POSITION;
};

struct PS_OUTPUT {
    float4 color  : SV_Target0;
    float4 color2 : SV_Target1;
};

PS_OUTPUT ColorPixelShader( VS_OUTPUT input ) {
    PS_OUTPUT output = (PS_OUTPUT)0;

    output.color  = color;
    output.color2 = color;

    return output;
}
```

```cpp
// Back to C++

// Set the back buffer view in the first slot and the rest to NULL
pDeviceContext->OMSetRenderTargets( D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT, rTargets, pDepthStencilView );

// Load vertex and index buffers with the geometry for a textured quad

// Draw a simple textured quad ( in this case the texture is the render target from earlier )
pDeviceContext->IASetInputLayout( pLayout );
pDeviceContext->PSSetSamplers( 0, 1, &sampleState );
pDeviceContext->DrawIndexed( indexCount, 0, 0 );

pSwapChain->Present( 1, 0 );
```
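One related gotcha when ping-ponging a texture between render-target and shader-input roles: the SRV slot should be cleared once the quad has been drawn, otherwise the texture is still bound as an input when it gets rebound as a render target next frame, and the runtime will force-unbind it (with a debug-layer warning). A sketch, assuming the render target's SRV was bound to pixel shader slot 0:

```cpp
// Clear pixel shader resource slot 0 so the texture can be re-bound
// as a render target next frame without a read/write hazard.
ID3D11ShaderResourceView* nullSRV = NULL;
pDeviceContext->PSSetShaderResources( 0, 1, &nullSRV );
```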

##### Share on other sites
Spent all weekend reorganizing and cleaning up the code, and at some point I fixed the issue! Not really sure exactly what was wrong, but I'm thinking some uninitialized memory, which has bitten me in release mode more than once. Good lesson for myself: clean up the code first before going bug hunting ;)

Thanks MJP and TiagoCosta for the help.

##### Share on other sites
You should use PIX (or PerfHUD if you have an NVIDIA GPU) when debugging your application... it allows you to see what each render target contains before/after each draw call and to debug your shaders... because you could either be incorrectly rendering to your render target, or incorrectly rendering the quad...

### Similar Content

• I am trying to draw a screen-aligned quad with arbitrary sizes.

currently I just send 4 vertices to the vertex shader like so:
```cpp
pDevCon->IASetPrimitiveTopology( D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP );
pDevCon->Draw( 4, 0 );
```

then in the vertex shader I am doing this:
```hlsl
float4 main( uint vI : SV_VERTEXID ) : SV_POSITION
{
    float2 texcoord = float2( vI & 1, vI >> 1 );
    return float4( ( texcoord.x - 0.5f ) * 2, -( texcoord.y - 0.5f ) * 2, 0, 1 );
}
```
that gets me a screen-sized quad... okay, so what's the correct way to get arbitrary sizes? I have messed around with various numbers, but I think I don't quite get something in these relationships.
one thing I tried is:

```hlsl
float4 quad = float4( ( texcoord.x - ( xpos / screensizex ) ) * ( width / screensizex ),
                     -( texcoord.y - ( ypos / screensizey ) ) * ( height / screensizey ), 0, 1 );
```

.. where xpos and ypos are the number of pixels from the upper right corner, and width and height are the desired size of the quad in pixels.
This gets me somewhat close, but not quite right... a bit too small... so I'm missing something. Any ideas?

• By Stewie.G
Hi,
I've been trying to implement a gaussian blur recently; it seems the best way to achieve this is by running a blur on one axis, then another blur on the other axis.
I think I have successfully implemented the blur part per axis, but now I have to blend both calls with a proper BlendState, at least I think this is where my problem is.
Here are my passes:
```cpp
D3DX11_TECHNIQUE_DESC techDesc;
mBlockEffect->mTech->GetDesc( &techDesc );
for( UINT p = 0; p < techDesc.Passes; ++p )
{
    deviceContext->IASetVertexBuffers( 0, 2, bufferPointers, stride, offset );
    deviceContext->IASetIndexBuffer( mIB, DXGI_FORMAT_R32_UINT, 0 );
    mBlockEffect->mTech->GetPassByIndex( p )->Apply( 0, deviceContext );
    deviceContext->DrawIndexedInstanced( 36, mNumberOfActiveCubes, 0, 0, 0 );
}
```

( screenshots attached: No blur, PS_BlurV, PS_BlurH, P0 + P1 )

As you can see, it does not work at all.
I think the issue is in my BlendState, but I am not sure.
I've seen many articles going with the render-to-texture approach, but I've also seen articles where both shaders were called in succession and it worked just fine, and I'd like to go with that second approach. Unfortunately, the code was in OpenGL, where the syntax for running multiple passes is quite different (http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/). So I need some help doing the same in HLSL :-)

Thanks!

• Back around 2006 I spent a good year or two reading books, articles on this site, and gobbling up everything game dev related I could. I started an engine in DX10 and got through basics. I eventually gave up, because I couldn't do the harder things.
Now, my C++ is 12 years stronger, my mind is trained better, and I am thinking of giving it another go.
A lot has changed. There is no more standalone SDK, there is evidently a DX Toolkit, XNA died, all the sweet sites I used to go to are 404s, and Google searches all point to Unity and Unreal.
I plainly don't like Unity or Unreal, but might learn them for reference.
So, what is the current path? Does everyone pretty much use the DX Toolkit? Should I start there? I also read that DX12 is just expert level DX11, so I guess I am going DX 11.
Is there a current and up to date list of learning resources anywhere?  I am about tired of 404s..

• By Stewie.G
Hi,

I've been trying to implement a basic gaussian blur using the gaussian formula, and here is what it looks like so far:
```cpp
float gaussian( float x, float sigma )
{
    float pi = 3.14159;
    float sigma_square = sigma * sigma;
    float a = 1 / sqrt( 2 * pi * sigma_square );
    float b = exp( -( ( x * x ) / ( 2 * sigma_square ) ) );
    return a * b;
}
```
My problem is that I don't quite know what sigma should be.
It seems that if I provide a random value for sigma, weights in my kernel won't add up to 1.
So I ended up calling my gaussian function with sigma == 1, which gives me weights adding up to 1, but also a very subtle blur.
Here is what my kernel looks like with sigma == 1
[0]    0.0033238872995488885
[1]    0.023804742479357766
[2]    0.09713820127276819
[3]    0.22585307043511713
[4]    0.29920669915475656
[5]    0.22585307043511713
[6]    0.09713820127276819
[7]    0.023804742479357766
[8]    0.0033238872995488885

I would have liked it to be more "rounded" at the top, or a better spread, instead of wasting [0], [1], [2] on values below 0.1.
Based on my experiments, the key to this is to provide a different sigma, but if I do, my kernel values no longer add up to 1, which results in a darker blur.
I've found this post
... which helped me a bit, but I am really confused by the part where he divides sigma by 3.
Can someone please explain how sigma works? How is it related to my kernel size, and how can I balance my weights with different sigmas, etc.?

Thanks :-)

• Is it possible to asynchronously create a Texture2D using DirectX11?
I have a native Unity plugin that downloads 8K textures from a server and displays them to the user for a VR application. This works well, but there's a large frame drop when calling CreateTexture2D. To remedy this, I've tried creating a separate thread that creates the texture, but the frame drop is still present.
Is there anything else that I could do to prevent that frame drop from occurring?
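For what it's worth, ID3D11Device creation methods are free-threaded unless the device was created with D3D11_CREATE_DEVICE_SINGLETHREADED, so CreateTexture2D with initial data can legitimately run on a worker thread; only ID3D11DeviceContext calls must stay on one thread. If the drop persists even then, the stall may be the driver's upload at first use rather than the creation call itself. A heavily hedged sketch (pDevice, desc, and initData are assumed to already exist):

```cpp
#include <thread>
// D3D11 headers and an existing pDevice / desc / initData are assumed.

std::thread worker( [&]()
{
    // Safe off the render thread: device methods are thread-safe.
    ID3D11Texture2D* pTexture = NULL;
    HRESULT hr = pDevice->CreateTexture2D( &desc, &initData, &pTexture );
    // Hand pTexture back to the render thread, e.g. via a locked queue;
    // only the render thread should bind or draw with it.
} );
```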