The version of SSAO that I am implementing requires the view-space position in a texture, to be sampled in a later ambient-occlusion pass. I thought I could simply add a pass similar to my shadow pass (which only writes to a depth texture), the difference being that this one writes view-space positions to the render target view. I may have erred in assuming that I can output whatever I want from the pixel shader and have it written to the bound render target view. Was that assumption correct? I simply disregard the compulsory SV_POSITION and output the view-space position that I calculated. However, when I sample the resulting texture, it is empty.
I set up the Geometry renderTargetView with
viewPositionDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
viewPositionDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
(the rest of the description should be standard). I chose DXGI_FORMAT_R32G32B32A32_FLOAT so that the values read back from the texture are not normalized and can be used directly. I also bound a depth buffer so that only the positions of the visible fragments are written.
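For reference, the full texture description I am using looks roughly like the following. This is a sketch: every field other than Format and BindFlags quoted above is an assumption, and backBufferWidth/backBufferHeight are placeholders for the actual render target size.

```cpp
// Sketch of the full D3D11_TEXTURE2D_DESC for the view-space position
// target. Only Format and BindFlags are taken from the question; the
// remaining fields are the usual defaults for a screen-sized,
// non-multisampled render target.
D3D11_TEXTURE2D_DESC viewPositionDesc = {};
viewPositionDesc.Width = backBufferWidth;   // placeholder
viewPositionDesc.Height = backBufferHeight; // placeholder
viewPositionDesc.MipLevels = 1;
viewPositionDesc.ArraySize = 1;
viewPositionDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
viewPositionDesc.SampleDesc.Count = 1;
viewPositionDesc.Usage = D3D11_USAGE_DEFAULT;
viewPositionDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
```

The texture is then passed to CreateTexture2D, with a render target view and a shader resource view created over the same resource.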
The vertex shader looks like this (constant buffers omitted):
struct VertexShaderInput
{
    float3 pos : POSITION;
};

struct VertexShaderOutput
{
    float4 pos : SV_POSITION;
    float3 viewPos : POSITION;
};

VertexShaderOutput main(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.pos = float4(0, 0, 0, 0); // I do not care about the SV_POSITION
    float4 viewPos = float4(input.pos, 1.0f);
    viewPos = mul(model, viewPos);
    viewPos = mul(view, viewPos);
    output.viewPos = viewPos.xyz;
    return output;
}
The geometry pixel shader:
struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float3 viewPos : POSITION;
};

float4 main(PixelShaderInput input) : SV_TARGET
{
    return float4(input.viewPos, 1);
}
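For context, this is roughly how I intend to read the texture back in the ambient-occlusion pass. This is a sketch only: the texture name, sampler, and register assignments are placeholders, not my actual SSAO shader.

```hlsl
// Sketch of the sampling side; names and register slots are assumptions.
Texture2D viewPositionTex : register(t0);
SamplerState pointSampler : register(s0);

float3 SampleViewPos(float2 uv)
{
    // Point sampling, since interpolating positions across depth
    // discontinuities would produce invalid view-space positions.
    return viewPositionTex.Sample(pointSampler, uv).xyz;
}
```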
Thanks for your help.