
DX11 - Multiple Render Targets


Migi0027    4628

Hi guys, again...

 

I'm having a problem with multiple render targets: only the first render target actually gets written to (has a value).

 

Setting the render targets:

DeferredRender.ApplyShader(dev, devcon);

DeferredRender.mDepth.ClearRenderTarget(devcon, zbuffer, 0, 0, 0, 1);
DeferredRender.mNormals.ClearRenderTarget(devcon, zbuffer, 0, 0, 0, 1);
DeferredRender.mLighting.ClearRenderTarget(devcon, zbuffer, 0, 0, 0, 1);

ID3D11RenderTargetView *deferredViews[3];

deferredViews[0] = DeferredRender.mDepth.m_renderTargetView;
deferredViews[1] = DeferredRender.mNormals.m_renderTargetView;
deferredViews[2] = DeferredRender.mLighting.m_renderTargetView;

devcon->OMSetRenderTargets(3, deferredViews, zbuffer);

RenderScene(STATE_DEFERRED);

gBuffer_Depth = DeferredRender.mDepth.GetShaderResourceView();
gBuffer_Normals = DeferredRender.mNormals.GetShaderResourceView();
gBuffer_Lighting = DeferredRender.mLighting.GetShaderResourceView();

Pixel shader (this is just a test shader; it isn't complete):

struct POut
{
	float4 Depth    : SV_Target0;
	float4 Normals  : SV_Target1;
	float4 Lighting : SV_Target2;
};

POut PShader(VOut input)
{
	POut output;

	// Initialize
	output.Depth = float4(0,0,0,1);
	output.Normals = float4(0,1,0,1);
	output.Lighting = float4(0,1,0,1);

	// Depth
	float depth = (input.depthPosition.z / input.depthPosition.w);
	output.Depth = float4(depth, depth, depth, 1);
	if (_alphamap == 1)
	{
		output.Depth.a *= t_alphamap.Sample(ss, input.texcoord).a;
	}
	if (_diffusealpha == 1)
	{
		output.Depth.a *= t_dffalpha.Sample(ss, input.texcoord).a;
	}

	// Normals
	float3 viewSpaceNormalizedNormals = 0.5 * normalize(input.normal) + 0.5;
	output.Normals = float4(viewSpaceNormalizedNormals, 1);

	return output;
}

But as I said, only the first render target gets written to; the others don't, and I don't understand why. Am I even setting the render targets correctly?

 

THANKS!

Juliean    7068

Also make sure to create your device with the debug flag; this will tell you if there is any problem:

UINT createDeviceFlags = 0;
#ifdef _DEBUG
createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

if(FAILED(D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, createDeviceFlags, nullptr, 0, D3D11_SDK_VERSION, &description, &m_pSwapChain, &m_pDevice, &m_featureLevel, &m_pContext)))
    throw d3dException();
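
If you want the debug layer to halt right where the problem happens, you can also make it break on errors via the info queue. A minimal sketch, assuming m_pDevice is the debug-enabled device created above:

// Optional: break into the debugger as soon as the debug layer reports an error.
// ID3D11InfoQueue is declared in d3d11sdklayers.h.
ID3D11InfoQueue* infoQueue = nullptr;
if (SUCCEEDED(m_pDevice->QueryInterface(__uuidof(ID3D11InfoQueue), (void**)&infoQueue)))
{
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_CORRUPTION, TRUE);
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
    infoQueue->Release();
}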

You could have some mismatch between your render targets, but my guess is just as good as yours, so checking the debug output and using PIX is the way to go.

Migi0027    4628

When looking at the resources they point to, all three previews appear black (an empty black preview), but I know that one of them works: the depth map (RTV #1).

 

This is confusing...

Migi0027    4628

Just so you know, the render targets themselves are valid: when I clear them with, e.g., a red or a green color, that color appears in the output, even in PIX.

 

But it still appears as if the second and third render targets aren't being written to...

Juliean    7068

Look at the render targets while the draw call that should render to them is selected. What shows up there? Right-click any pixel and press "debug pixel". Also, try to explicitly output float4(1.0f, 1.0f, 1.0f, 1.0f) to the second and third render targets; there might be some issue with your calculations there.
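
A minimal sketch of that test, reusing the POut struct and PShader signature from the first post:

POut PShader(VOut input)
{
	POut output;

	// Temporary test: write obvious constant colors so PIX shows
	// immediately whether each target receives anything at all.
	output.Depth    = float4(1.0f, 0.0f, 0.0f, 1.0f); // SV_Target0: red
	output.Normals  = float4(0.0f, 1.0f, 0.0f, 1.0f); // SV_Target1: green
	output.Lighting = float4(0.0f, 0.0f, 1.0f, 1.0f); // SV_Target2: blue

	return output;
}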

Migi0027    4628

When debugging the first RT, the pixel is written to correctly:

 

Pixel History
Pixel (427, 298) of Render Target 0x069F85B8, Frame 616

Initial framebuffer value:   Alpha: 0.000  Red: 0.000  Green: 0.000  Blue: 0.000

Event 315: ID3D11DeviceContext::ClearRenderTargetView(0x0BAA5F98, 0x0017F1EC)
    Alpha: 1.000  Red: 0.000  Green: 1.000  Blue: 0.000

Event 329: ID3D11DeviceContext::DrawIndexed(36, 0, 0), Primitive 1 of 12
    Vertex Shader: 0x0BAA5EE8
    Geometry Shader: (none)
    Pixel Shader: 0x0BAA5F40
    Pixel shader output:     Alpha: 1.000  Red: 0.988  Green: 0.988  Blue: 0.988
    Final framebuffer color: Alpha: 0.000  Red: 0.000  Green: 0.000  Blue: 0.000

Event 329: ID3D11DeviceContext::DrawIndexed(36, 0, 0), Primitive 11 of 12
    Vertex Shader: 0x0BAA5EE8
    Geometry Shader: (none)
    Pixel Shader: 0x0BAA5F40
    Pixel shader output:     Alpha: 1.000  Red: 0.988  Green: 0.988  Blue: 0.988
    Final framebuffer color: Alpha: 0.000  Red: 0.000  Green: 0.000  Blue: 0.000

Final framebuffer value:     Alpha: 1.000  Red: 1.000  Green: 0.000  Blue: 0.000

 

But then in the 2nd render target, nothing is there!

 

Same pixel location as in the 1st RT:

 

Pixel History
Pixel (427, 298) of Render Target 0x069F8658, Frame 616

Initial framebuffer value:   Alpha: 0.000  Red: 0.000  Green: 0.000  Blue: 0.000

Event 317: ID3D11DeviceContext::ClearRenderTargetView(0x0BAA6048, 0x0017F1EC)
    Alpha: 1.000  Red: 1.000  Green: 0.000  Blue: 0.000

Final framebuffer value:     Alpha: 1.000  Red: 1.000  Green: 0.000  Blue: 0.000

Juliean    7068

That's odd. On that same draw call, look at the device and inspect its states until you find the render-target section (I don't know which category it's under, but you should find it easily). Are all render targets still bound at that point in time?
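
You can also verify it in code right before the draw call. A minimal sketch, assuming devcon is the immediate context from the first post:

// Query what is currently bound to the output merger right before the draw call.
ID3D11RenderTargetView* boundRTVs[3] = { nullptr, nullptr, nullptr };
ID3D11DepthStencilView* boundDSV = nullptr;
devcon->OMGetRenderTargets(3, boundRTVs, &boundDSV);

// Inspect boundRTVs[0..2] in the debugger; a nullptr entry means that slot is not bound.
// OMGetRenderTargets AddRef()s what it returns, so release the references afterwards.
for (int i = 0; i < 3; ++i)
	if (boundRTVs[i]) boundRTVs[i]->Release();
if (boundDSV) boundDSV->Release();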

Migi0027    4628

I don't know if this is too much to ask, but maybe one of you could write a small mini tutorial with all the steps needed to create a DirectX 11 app with multiple render targets, a small step-based tutorial, e.g.:

 

  • Do this...
  • ...
  • Then that

To see if I've done everything right...

Migi0027    4628

I do believe that all my render targets are valid: if I change the order of the render targets, it's still whichever one comes first that works.

 

By the way, how can you check the maximum number of render targets your graphics card supports?

Migi0027    4628

The output signature of the pixel shader:

// Output signature:
//
// Name                 Index   Mask Register SysValue Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_Target                0   xyzw        0   TARGET  float   xyzw
// SV_Target                1   xyzw        1   TARGET  float   xyzw
// SV_Target                2   xyzw        2   TARGET  float   xyzw

After hours in PIX, all the render targets seem to be connected correctly, but when rendering the mesh (PIX has the quite amazing feature where you can watch the resources change over time!), still only the first render target gets written to!

Migi0027    4628

This is how I initialize each individual render target:

D3D11_TEXTURE2D_DESC textureDesc;
	HRESULT result;
	D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc;
	D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;


	// Initialize the render target texture description.
	ZeroMemory(&textureDesc, sizeof(textureDesc));

	// Setup the render target texture description.
	textureDesc.Width = textureWidth;
	textureDesc.Height = textureHeight;
	textureDesc.MipLevels = 1;
	textureDesc.ArraySize = 1;
	textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
	textureDesc.SampleDesc.Count = 1;
	textureDesc.SampleDesc.Quality = 0;
	textureDesc.Usage = D3D11_USAGE_DEFAULT;
	textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
	textureDesc.CPUAccessFlags = 0;
	textureDesc.MiscFlags = 0;

	// Create the render target texture.
	result = device->CreateTexture2D(&textureDesc, NULL, &m_renderTargetTexture);
	if(FAILED(result))
	{
		return false;
	}

	// Setup the description of the render target view.
	renderTargetViewDesc.Format = textureDesc.Format;
	renderTargetViewDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
	renderTargetViewDesc.Texture2D.MipSlice = 0;

	// Create the render target view.
	result = device->CreateRenderTargetView(m_renderTargetTexture, &renderTargetViewDesc, &m_renderTargetView);
	if(FAILED(result))
	{
		return false;
	}

	// Setup the description of the shader resource view.
	shaderResourceViewDesc.Format = textureDesc.Format;
	shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
	shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
	shaderResourceViewDesc.Texture2D.MipLevels = 1;

	// Create the shader resource view.
	result = device->CreateShaderResourceView(m_renderTargetTexture, &shaderResourceViewDesc, &m_shaderResourceView);
	if(FAILED(result))
	{
		return false;
	}

	return true;

Auskennfuchs    1032

Are you using the second and third render targets, after rendering to them, as input textures, e.g. for the final combine of the image? If so, you have to unset those textures from your shader resources first. You can't read from and write to the same resource at the same time. In the first resource slot you are probably using a different texture as input, which would explain why that one still works.
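
A minimal sketch of that unbinding, assuming the G-buffer SRVs were bound to pixel shader slots 0-2 in the previous pass, and reusing the devcon / deferredViews / zbuffer names from the first post:

// Unbind the G-buffer SRVs from the pixel shader before binding
// the same textures as render targets again (slots 0-2 assumed here).
ID3D11ShaderResourceView* nullSRVs[3] = { nullptr, nullptr, nullptr };
devcon->PSSetShaderResources(0, 3, nullSRVs);

// Now it is safe to bind them as render targets.
devcon->OMSetRenderTargets(3, deferredViews, zbuffer);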

Juliean    7068

Are you using the second and third render targets, after rendering to them, as input textures, e.g. for the final combine of the image? If so, you have to unset those textures from your shader resources first. You can't read from and write to the same resource at the same time. In the first resource slot you are probably using a different texture as input, which would explain why that one still works.

 

Theoretically true, but DX11 will unbind any such resource by itself. Try it: set a texture as render target while it is still bound to the shader, and you should get a warning in debug mode telling you that the shader resource view has been unbound.

 

Some more ideas:

 

- Check your blend states. You either have to set the blend state manually for each render target, or set independent blend (IndependentBlendEnable) to false; see the sketch after this list.

- Check your depth states and/or depth stencil view; I think it may need to be configured explicitly for the different render targets too (or at least told to only use the setting of the first one).
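
For the blend state idea, a minimal sketch of a state that applies the same settings to every bound render target, assuming no blending is wanted in the G-buffer pass and reusing the device / devcon names from the other posts:

// With IndependentBlendEnable = FALSE, RenderTarget[0] applies to all render targets.
// Make sure RenderTargetWriteMask is not zero, or nothing gets written to that target.
D3D11_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof(blendDesc));
blendDesc.AlphaToCoverageEnable = FALSE;
blendDesc.IndependentBlendEnable = FALSE;
blendDesc.RenderTarget[0].BlendEnable = FALSE;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* blendState = nullptr;
if (SUCCEEDED(device->CreateBlendState(&blendDesc, &blendState)))
{
	float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
	devcon->OMSetBlendState(blendState, blendFactor, 0xFFFFFFFF);
}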

Edited by Juliean

