pre-pass lighting: gbuffer clears fine, but object normals and depths won't render

Hello,

So, in the process of learning Direct3D 11 I decided to implement a small framework where it's easy to change the render type (pre-pass lighting, deferred, forward, etc.), and now I'm using that framework to implement a pre-pass lighting renderer.

I'm still at the stage of rendering normals and depths into the G-buffer. The problem: the G-buffer clears fine (grey in the normal buffer, white in the depth buffer), BUT when I try to render 4 spheres (just for testing) into the G-buffer, nothing changes; it just stays grey for normals and white for depth. At least that's what it shows in PIX (I'm not rendering to the window just yet, just using PIX to see if it works).

The problem could be wrong transformations, wrong shaders, wrong constant buffers... I've already lost 3 days on this and it's becoming really frustrating, especially because I thought I was starting to understand how Direct3D works.

I'll try to post every piece of relevant code.

My clear-G-buffer shader:

[source lang="cpp"]
// simple vertex and pixel shader that clears the NormalDepth buffer to default values

////////// Vertex Shader //////////

struct VSInput
{
float4 Position : SV_POSITION0;
};

struct VSOutput
{
float4 Position : SV_POSITION0;
};

VSOutput VS(VSInput input)
{
VSOutput output = (VSOutput)0;
output.Position = input.Position;
return output;
}

////////// Pixel Shader //////////

struct PSOutput
{
float4 Normal : SV_TARGET0;
float4 Depth : SV_TARGET1;
};

PSOutput PS(VSOutput input)
{
PSOutput output = (PSOutput)0;

// set normals (for the normal buffer) to 0.5f (0.5f remapped into [-1, 1] is 0.0f, a good default value)
output.Normal = float4(0.5f, 0.5f, 0.5f, 1.0f);

// set depth (for the depth buffer) to white (maximum depth)
output.Depth = 1.0f;

return output;
}
[/source]

My shader that renders object normals and depths into the G-buffer:
[source lang="cpp"]
// simple vertex and pixel shader that fills the NormalDepth buffer with the transformed geometry

////////// Vertex Shader //////////

cbuffer Parameters : register(b0)
{
matrix WorldMatrix;
matrix ViewMatrix;
matrix ProjectionMatrix;
};

struct VSInput
{
float4 Position : SV_POSITION0;
float4 Normal : NORMAL0;
};

struct VSOutput
{
float4 Position : SV_POSITION0;
float3 Normal : TEXCOORD0;
float2 Depth : TEXCOORD1;
};

VSOutput VS(VSInput input)
{
VSOutput output = (VSOutput)0;

// apply position transformations (into screen space)
output.Position = mul(input.Position, WorldMatrix);
output.Position = mul(output.Position, ViewMatrix);
output.Position = mul(output.Position, ProjectionMatrix);

// apply normal transformations (into world space)
output.Normal = mul(WorldMatrix, input.Normal);

// pass on the projected depth (z and w of the clip-space position)
output.Depth.x = output.Position.z;
output.Depth.y = output.Position.w;

return output;
}

////////// Pixel Shader //////////

struct PSOutput
{
float4 Normal : SV_TARGET0;
float4 Depth : SV_TARGET1;
};

PSOutput PS(VSOutput input)
{
PSOutput output = (PSOutput)0;

// remap the normal from [-1, 1] to [0, 1]
output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f); // output.Normal = ((input.Normal + 1.0f) / 2.0f);

// just pass the depth values
output.Depth = input.Depth.x / input.Depth.y;

return output;
}
[/source]

Calculation of the view matrix:
[source lang="cpp"]DirectX::XMMATRIX D3DCamera::GetViewMatrix()
{
// create position vector
auto positionVector = DirectX::XMVectorSet(positionX, positionY, positionZ, 0.0f);
// default lookAt is through the Z coord
auto lookAtVector = DirectX::XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f);
// default up is through the Y coord
auto upVector = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

// create the rotation matrix, roll = Z, pitch = X, yaw = Y
auto rotationMatrix = DirectX::XMMatrixRotationRollPitchYaw(rotationX, rotationY, rotationZ);

// transform the lookAt and up vector using the rotation matrix
lookAtVector = DirectX::XMVector3TransformCoord(lookAtVector, rotationMatrix);
upVector = DirectX::XMVector3TransformCoord(upVector, rotationMatrix);

// update lookAt vector from the new position
lookAtVector = DirectX::XMVectorAdd(positionVector, lookAtVector);

// finally create the view matrix using the updated vectors
return DirectX::XMMatrixLookAtLH(positionVector, lookAtVector, upVector);
}[/source]

Calculation of the world matrix for each object:
[source lang="cpp"]virtual DirectX::XMMATRIX GetWorldMatrix()
{
// TODO: rotate around a point, now it only rotates around itself
auto scaling = DirectX::XMMatrixScaling(scaleX, scaleY, scaleZ);
auto rotation = DirectX::XMMatrixRotationRollPitchYaw(rotationX, rotationY, rotationZ);
auto translation = DirectX::XMMatrixTranslation(positionX, positionY, positionZ);

auto scale_rotation = DirectX::XMMatrixMultiply(scaling, rotation);
return DirectX::XMMatrixMultiply(scale_rotation, translation);
}[/source]

Now, setting the shader parameters and issuing the draw calls:
[source lang="cpp"]void D3DMaterialFillNormalDepth::SetParameters(const DirectX::XMMATRIX& worldMatrix, const DirectX::XMMATRIX& viewMatrix,
const DirectX::XMMATRIX& projectionMatrix)
{
cbParameters.WorldMatrix = worldMatrix;
cbParameters.ViewMatrix = viewMatrix;
cbParameters.ProjectionMatrix = projectionMatrix;
d3d->DeviceContext()->UpdateSubresource(cbBuffer, 0, nullptr, &cbParameters, 0, 0);
}

// then for each object:
d3d->DeviceContext()->OMSetRenderTargets(renderTargetViews.size(), &renderTargetViews[0], d3d->DepthStencilView());

d3d->DeviceContext()->IASetInputLayout(pInputLayout);
d3d->DeviceContext()->VSSetShader(pVertexShader, nullptr, 0);
d3d->DeviceContext()->PSSetShader(pPixelShader, nullptr, 0);
d3d->DeviceContext()->VSSetConstantBuffers(0, 1, &cbBuffer);
d3d->DeviceContext()->PSSetConstantBuffers(0, 1, &cbBuffer);

// activate this geometry on the device context
unsigned int stride = sizeof(VertexType);
unsigned int offset = 0;

d3d->DeviceContext()->IASetVertexBuffers(0, 1, &pVertexBuffer, &stride, &offset);
d3d->DeviceContext()->IASetIndexBuffer(pIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
d3d->DeviceContext()->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3d->DeviceContext()->DrawIndexed(indices.size(), 0, 0);
[/source]
PIX does say it draws around 1200 indices for each sphere, so I assume things are kind of working, but the resulting buffers are just grey for normals and white for depth, exactly as they are after clearing.

Sorry for the wall of code; I'm trying to post every piece I think is relevant. Feel free to ask for more if needed.
Thanks in advance.
Hi a2ps!

Can you render the scene directly to the backbuffer? Perhaps try some simple vertex/pixel shader, e.g. just drawing everything in red. That way you could isolate whether the problem is in your input assembler setup or in the output merger (render target) setup.
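For instance, something as bare-bones as this would already tell you a lot. (Just a rough sketch; the combined WorldViewProjection constant and the SV_POSITION0 input semantic are placeholders you would adapt to your own input layout and constant buffer.)
[source lang="cpp"]
// Minimal debug shader: transform the vertex and output solid red, nothing else.

cbuffer DebugParameters : register(b0)
{
matrix WorldViewProjection; // placeholder: combined world * view * projection
};

float4 VS(float4 position : SV_POSITION0) : SV_POSITION
{
return mul(position, WorldViewProjection);
}

float4 PS() : SV_TARGET0
{
// every rasterized pixel becomes solid red
return float4(1.0f, 0.0f, 0.0f, 1.0f);
}
[/source]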

I noticed two things that might be a problem. It is hard to tell whether they apply in your case, since it depends on the way you fill your vertex buffer and compile your shaders, but here goes:

Be careful with the transformation of the normal to world space. Your vertex shader receives the normal as a float4. Make sure that the w-component is zero, since you only want to apply the rotation/scale part of the world matrix, not the translation. (This is a common pitfall.)
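Something along these lines, for example (only a sketch using the variable names from your shader; both variants also assume the world matrix has no non-uniform scaling, otherwise you would want the inverse transpose):
[source lang="cpp"]
// force w to zero so the translation part of the world matrix has no effect...
output.Normal = mul(float4(input.Normal.xyz, 0.0f), WorldMatrix).xyz;
// ...or, equivalently, use only the upper-left 3x3 part of the matrix
output.Normal = mul(input.Normal.xyz, (float3x3)WorldMatrix);
[/source]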

Another thing is the packing order of your matrices. Sometimes the C++ and HLSL sides don't line up. Perhaps try specifying the order explicitly (row_major or column_major), like so:
row_major matrix WorldMatrix;
row_major matrix ViewMatrix;
row_major matrix ProjectionMatrix;

(You can also set this with a compiler flag: /Zpr for row-major and /Zpc for column-major.)

Let us know if that works for you.
Best regards!
Thanks for the help, Tsus.

So yes, the way I'm setting up my mesh, the normals already have the w component at 0.0f. I took your advice and rendered the G-buffer into the main render target (the window) so I could check the results. The reason I didn't do that before was that I thought PIX showed the "graphics pixel history" feature for all buffers, but I was wrong. The moment I rendered to the main render target I could immediately see that the draw call gets issued but nothing shows up, so the transformations had to be wrong.

I hard-coded some matrices to see if it worked, and at first it didn't, but after I applied a transpose it worked!
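Basically something like this in SetParameters (roughly from memory; the members are still the same matrix fields as in the code above):
[source lang="cpp"]
// transpose before uploading so the packing order matches the HLSL side
cbParameters.WorldMatrix = DirectX::XMMatrixTranspose(worldMatrix);
cbParameters.ViewMatrix = DirectX::XMMatrixTranspose(viewMatrix);
cbParameters.ProjectionMatrix = DirectX::XMMatrixTranspose(projectionMatrix);
d3d->DeviceContext()->UpdateSubresource(cbBuffer, 0, nullptr, &cbParameters, 0, 0);
[/source]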

The only problem now is that of the 4 spheres I draw, the first 3 are left at the origin (as if no transformation was applied) and only the 4th sphere moves, probably some bug in my code that I'll have to find.

Sorry I haven't replied earlier, but work doesn't leave me much time for my personal projects (Java EE for a living is NOT fun...).

PS: as soon as I finished writing this post, I managed to solve the "only one sphere moves" problem; it was just a copy-paste bug. Now everything works fine! Off to build my lighting buffer and pass.

Once again, thanks so much for the help! :)
Hi a2ps!

Nice to hear that it works! :)
The missing transpose probably comes from the packing order of the matrices. I assume on the C++ side it is row-major and on the HLSL side it is column-major. A row_major in front of your matrices in the HLSL code should let you avoid the transpose in the C++ code. Though, I would rather recommend using the compiler flag /Zpr when compiling the shaders with fxc.exe, to make row_major the default packing order in HLSL. Always writing the transpose on the C++ side, or manually adding row_major in front of all matrices in the HLSL code, is a little error-prone.
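If you compile your shaders at runtime instead of offline with fxc.exe, the equivalent is passing the row-major pack flag to the compile call. A quick sketch, assuming you use D3DCompileFromFile from d3dcompiler (the file name, entry point and profile are placeholders):
[source lang="cpp"]
#include <d3dcompiler.h>

// D3DCOMPILE_PACK_MATRIX_ROW_MAJOR is the runtime counterpart of fxc.exe /Zpr
UINT compileFlags = D3DCOMPILE_ENABLE_STRICTNESS | D3DCOMPILE_PACK_MATRIX_ROW_MAJOR;

ID3DBlob* shaderBlob = nullptr;
ID3DBlob* errorBlob = nullptr;
HRESULT hr = D3DCompileFromFile(L"FillNormalDepth.hlsl", // placeholder file name
                                nullptr, nullptr,
                                "VS", "vs_5_0",           // placeholder entry point / profile
                                compileFlags, 0,
                                &shaderBlob, &errorBlob);
[/source]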

Happy coding with the lighting! :)
Best regards

