
DX11 pixel shader with multiple render targets

Recommended Posts

rouncer    294
How do you define a pixel shader so that it outputs to more than one render target? I'm trying this:

[source]
struct PS_P_OUTPUT
{
    float4 col0 : SV_Target0;
    float4 col1 : SV_Target1;
};
[/source]


but it's not working. What have I got wrong?

Aqua Costa    3691
That is correct... So you're either not correctly binding the render targets or something is wrong in your pixel shader...

-How are you binding multiple render targets?

-Can you post your pixel shader?

rouncer    294
shader:

[source]
struct V2S_INPUT
{
    float4 pc : TEXCOORD0;
    // float2 c : TEXCOORD1;
};

struct P2S_INPUT
{
    float4 pos : SV_POSITION;
    float3 nor : TEXCOORD0;
    float3 wpos : TEXCOORD1;
};

struct G2S_INPUT
{
    float4 pos : POSITION;
    float3 nor : TEXCOORD0;
};

struct PS_P_OUTPUT
{
    float4 col0 : SV_Target0;
    float4 col1 : SV_Target1;
};

G2S_INPUT VS_P(V2S_INPUT Input)
{
    G2S_INPUT Output;

    Output.pos = float4(float3(Input.pc.x*255, Input.pc.y*255, Input.pc.z*255)*voxel_size + chunk_pos, 1);
    Output.nor = float3(Input.pc.w, Input.pc.w, Input.pc.w);

    return Output;
}

[maxvertexcount(4)]
void GS_P(point G2S_INPUT In[1], inout TriangleStream<P2S_INPUT> TriStream)
{
    P2S_INPUT Out;

    Out.nor.xyz = In[0].nor.xyz;

    float3 up    = float3(view._12, view._22, view._32);
    float3 right = float3(view._11, view._21, view._31);

    float3 ppos;

    ppos = In[0].pos.xyz + up*voxel_size*1.25f;
    Out.pos = mul(float4(ppos, 1), wvp);
    Out.wpos = In[0].pos.xyz;
    TriStream.Append(Out);

    ppos = In[0].pos.xyz + right*voxel_size*1.25f + up*voxel_size*1.25f;
    Out.pos = mul(float4(ppos, 1), wvp);
    Out.wpos = In[0].pos.xyz;
    TriStream.Append(Out);

    ppos = In[0].pos.xyz;
    Out.pos = mul(float4(ppos, 1), wvp);
    Out.wpos = In[0].pos.xyz;
    TriStream.Append(Out);

    ppos = In[0].pos.xyz + right*voxel_size*1.25f;
    Out.pos = mul(float4(ppos, 1), wvp);
    Out.wpos = In[0].pos.xyz;
    TriStream.Append(Out);

    TriStream.RestartStrip();
}

PS_P_OUTPUT PS_P(P2S_INPUT Input)
{
    PS_P_OUTPUT pspo;

    pspo.col0 = float4(Input.nor, 1);
    pspo.col1 = float4(Input.wpos, 1);

    return pspo;
}
[/source]
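For context on what VS_P is doing: pc.xyz comes in normalized to [0,1] (a byte divided by 255 upstream), so multiplying by 255 recovers the integer voxel coordinate before scaling into world space. The same decode for one axis in plain C++ (the voxel_size and chunk_pos values here are made-up stand-ins for the real constant buffer contents):

```cpp
#include <cassert>
#include <cmath>

// Stand-in values: the real voxel_size and chunk_pos live in a constant
// buffer, so these numbers are made up for illustration only.
static const float voxel_size  = 0.5f;
static const float chunk_pos_x = 10.0f;

// Mirrors one axis of VS_P's decode: pc.x in [0,1] -> world-space position.
float decode_axis(float pc_x)
{
    return pc_x * 255.0f * voxel_size + chunk_pos_x;
}
```

With these stand-in values, a pc.x of 1.0 (voxel index 255) lands at 255 * 0.5 + 10 = 137.5 in world space.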



binding render targets:

[source]
// set multiple render targets
RTV mrt[2];

mrt[0] = rtv;
mrt[1] = ws_r;

dc->OMSetRenderTargets(2, mrt, dsv);
dc->ClearRenderTargetView(rtv, ClearColor);
dc->ClearRenderTargetView(ws_r, ClearColor);
dc->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);
[/source]




It's actually rendering the first render target, but not the second... If I switch them around it draws the opposite colour, but for some reason it won't draw the second target...

Aqua Costa    3691
The code you posted looks correct... How are you creating the render targets?

Also, can you use the [.source] or [.code] tags (without the dot), so your source looks like this:
[source]
PS_P_OUTPUT PS_P(P2S_INPUT Input)
{
    PS_P_OUTPUT pspo;

    pspo.col0 = float4(Input.nor, 1);
    pspo.col1 = float4(Input.wpos, 1);

    return pspo;
}
[/source]

P.S: Why are you trying to store the world position?
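Also, if you create the device with the debug layer enabled, D3D11 prints a warning to the debugger output whenever a bind or draw silently fails, which should point you straight at the problem. A rough sketch, reusing your sc/dev/dc names (scd here stands in for whatever swap chain description you're already using):

```cpp
// Enable the debug layer at device creation so invalid binds get logged.
UINT flags = D3D11_CREATE_DEVICE_DEBUG;
HRESULT hr = D3D11CreateDeviceAndSwapChain(
    NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
    NULL, 0, D3D11_SDK_VERSION,
    &scd, &sc, &dev, NULL, &dc);
// Run under the debugger and watch the output window for D3D11 warnings.
```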

rouncer    294
I need world space because I'm writing a brush that paints 3D points, so I need the world position to place the brush sphere on the surface.
Note I could just render the whole thing again, but that would be very expensive computationally, as it's a point cloud (rendered as lots of billboards, hence the GS)...

Here's the second render target.
I wrote this myself.
scw and sch are the screen dimensions, the same as the main render target.

[source]
if (1)
{
    int size_x = scw;
    int size_y = sch;
    ID3D11Texture2D* pTexture2D = NULL;

    D3D11_TEXTURE2D_DESC desc;
    memset(&desc, 0, sizeof(D3D11_TEXTURE2D_DESC));
    desc.Width = (UINT)size_x;
    desc.Height = (UINT)size_y;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
    desc.CPUAccessFlags = 0; // no CPU access needed for a render target

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
    srvDesc.Format = desc.Format;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MostDetailedMip = 0;
    srvDesc.Texture2D.MipLevels = desc.MipLevels;

    D3D11_RENDER_TARGET_VIEW_DESC rtvdesc;
    memset(&rtvdesc, 0, sizeof(D3D11_RENDER_TARGET_VIEW_DESC));
    rtvdesc.Format = desc.Format;
    rtvdesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;

    // Create the texture, then the shader resource and render target views.
    dev->CreateTexture2D(&desc, NULL, &pTexture2D);
    dev->CreateShaderResourceView(pTexture2D, &srvDesc, &ws);
    dev->CreateRenderTargetView(pTexture2D, &rtvdesc, &ws_r);
}
[/source]
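(One thing I haven't done yet is check the HRESULTs on those calls; guessing at the pattern, something like this would at least tell me if one of the views fails to create:)

```cpp
// Guess: wrap the existing calls so a failed view creation
// doesn't slip by silently.
HRESULT hr = dev->CreateTexture2D(&desc, NULL, &pTexture2D);
if (FAILED(hr)) { /* texture creation failed */ }

hr = dev->CreateShaderResourceView(pTexture2D, &srvDesc, &ws);
if (FAILED(hr)) { /* SRV creation failed */ }

hr = dev->CreateRenderTargetView(pTexture2D, &rtvdesc, &ws_r);
if (FAILED(hr)) { /* RTV creation failed */ }
```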



Here's the first.
It's different because I created it when I created the device (copy-pasted from a sample);
you get it from the swap chain (sc).

[source]
// Create a render target view from the swap chain's back buffer
TEX pBackBuffer = NULL;
hr = sc->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);

hr = dev->CreateRenderTargetView(pBackBuffer, NULL, &rtv);
pBackBuffer->Release();
[/source]

rouncer    294
So as you can see, I am without a doubt confused... I actually did it mostly right; there must be something small stopping it from working.

Note, this is my first time ever using MRT, so it's understandable I stuffed it up in some small way.

Thanks for the help though, TiagoCosta.


