Hi all, I figured this would be the best place to ask this...
I am working on a CADViewer that receives models that are out of my control. The CADViewer is a replacement for the 3DViewer in an existing WPF application.
WPF does a load of magic when it comes to normals/vertices/lighting, which means the models I am getting are not correct in any way. Faces (triangles) that are on the same side of an object and face the same way can have different vertex winding orders. The matrices I receive can have negative scale values, which flips the winding order yet again. Those are just a few of the issues.
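For the negative-scale case specifically, one thing I've been experimenting with is detecting mirrored instance matrices from their determinant (just a sketch; `Det3x3` is a hypothetical helper, not something in my current code):

```hlsl
// Sketch: detect a mirrored (negative-scale) instance matrix.
// A negative determinant of the upper-left 3x3 means the matrix
// flips handedness, so the triangle winding order is reversed.
float Det3x3(float3x3 m)
{
    return dot(m[0], cross(m[1], m[2]));
}

// In the vertex shader this could flag the flip for later stages:
// bool mirrored = Det3x3((float3x3)input.instance) < 0;
```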
Currently, the way I am tackling this to get acceptable results is the following:
- Set the cull mode to none.
- Transform the light direction by the view matrix and then send it to the GPU.
- In the vertex shader, transform each vertex position once with just the worldView matrix and once with the worldViewProjection matrix.
- In the geometry shader, calculate the per-face normal from the worldView-transformed positions and assign it to the three vertices that make up the face.
- In the pixel shader, (ab)use the SV_IsFrontFace variable to decide whether the vertex normal (which is really the face normal) should be negated. Then multiply the input color by the light color, and multiply that by the dot product of the normal and the light direction.
- In the pixel shader, add a float4(0.1, 0.1, 0.1, 1) to the output color to compensate for a brightness loss that shouldn't be happening.
The results are only "acceptable" in the sense that the light direction I pass to the GPU has an influence, just not the expected one.
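For comparison, what I'm effectively after in the pixel shader is two-sided diffuse lighting, which I suspect could be written without any front-face branch at all (a sketch, assuming the same view-space inputs as my full code below):

```hlsl
// Two-sided diffuse: abs() makes the sign of the face normal
// irrelevant, so no SV_IsFrontFace branch is needed.
float ndotl = abs(dot(normalize(input.norm), Light1Direction));
float4 lit  = input.color * Light1Color * ndotl;
```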
My question is the following:
Is there a better way to tackle this? For example, not calculating normals in the geometry shader, or not relying on SV_IsFrontFace in the pixel shader? Or perhaps a way that avoids transforming the light direction by the view matrix?
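For instance, one alternative I've been wondering about is deriving the face normal from screen-space derivatives in the pixel shader, which would remove the geometry shader entirely (a rough sketch, not tested against my models):

```hlsl
// Face normal from screen-space derivatives of the interpolated
// view-space position; flat per triangle, like the GS version.
float3 faceNormal = normalize(cross(ddx(input.viewPos.xyz),
                                    ddy(input.viewPos.xyz)));
```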
Full shader code as a reference:
struct VS_IN
{
    float4 pos : POSITION;
    float3 norm : NORMAL;
    matrix instance : INSTANCEMATRIX;
    float4 color : INSTANCECOLOR;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float4 viewPos : TEXCOORD0;
    float3 norm : NORMAL;
    float4 color : COLOR;
};

cbuffer viewProj : register(b0)
{
    matrix viewProj;
}

cbuffer view : register(b1)
{
    matrix view;
}

cbuffer lights : register(b2) // was b0, which collided with the viewProj buffer
{
    float4 Light1Color;
    float3 Light1Direction;
}

PS_IN VS(VS_IN input)
{
    PS_IN output = (PS_IN)0;
    output.pos = mul(mul(input.pos, input.instance), viewProj);
    output.viewPos = mul(mul(input.pos, input.instance), view);
    output.color = input.color;
    return output;
}

[maxvertexcount(3)]
void FlipFaceGS(triangle PS_IN input[3], inout TriangleStream<PS_IN> OutputStream)
{
    PS_IN v1 = input[0];
    PS_IN v2 = input[1];
    PS_IN v3 = input[2];
    // Per-face normal from two view-space edges.
    float3 faceEdgeA = v2.viewPos.xyz - v1.viewPos.xyz;
    float3 faceEdgeB = v3.viewPos.xyz - v1.viewPos.xyz;
    float3 faceNormal = normalize(cross(faceEdgeA, faceEdgeB));
    v1.norm = faceNormal;
    v2.norm = faceNormal;
    v3.norm = faceNormal;
    OutputStream.Append(v1);
    OutputStream.Append(v2);
    OutputStream.Append(v3);
    OutputStream.RestartStrip();
}

float4 PS(PS_IN input, bool front : SV_IsFrontFace) : SV_Target
{
    // Negate the face normal on back faces so both sides are lit.
    float3 n = front ? input.norm : -input.norm;
    float4 newColor = input.color * Light1Color * saturate(dot(n, Light1Direction));
    newColor += float4(0.1, 0.1, 0.1, 1);
    newColor.a = input.color.a; // we don't want the lights to affect the alpha level
    return newColor;
}

technique10 Render
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(CompileShader(gs_4_0, FlipFaceGS()));
        SetPixelShader(CompileShader(ps_4_0, PS()));
    }
}
WPF seems to be able to do it, but how?