# two sided lighting - which algo?


## Recommended Posts

so i've come across two different algos for two sided lighting:

1.  turn cull on, set winding CCW, set shader param invert_normals to false, then draw, then set winding to CW, set invert_normals to true, and draw again.  this seems to be a popular algo.  an example can be found here:

http://www.fairyengine.com/articles/hlsl2sided.htm
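The shader side of algo #1 boils down to a vertex shader that conditionally negates the normal, driven by a bool constant you set between the two draw calls (a minimal sketch; the constant and function names here are assumptions, not taken from the linked article):

```hlsl
bool invert_normals;   // set false before the CCW pass, true before the CW pass

float3 GetLightingNormal(float3 vNormal)
{
    // second pass renders the back sides, so flip the normal before lighting
    return invert_normals ? -vNormal : vNormal;
}
```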

2. turn cull off, pass camera direction vector as a parameter to the shader. in the shader, if view_dir dot normal > 0, the normal is backfacing, so invert it before doing lighting calculations.  I have only seen this mentioned one place - right here, by our very own Buckeye:

https://www.gamedev.net/topic/482343-draw-2-faces-vs-cull_none-speed/?hl=%2Btwo+%2Bsided+%2Blighting#entry4156668

he started with a single quad with two normals per vertex facing opposite directions, then realized he could do the view_dir dot normal test. in the thread he says it's faster than algo #1 in his testing. makes sense - just one draw call vs two, no culling vs cull-testing everything twice, and the same number of tris rasterized either way.
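The core of algo #2 is just a sign test (a sketch; variable names are assumed, not taken from the linked thread):

```hlsl
// view_dir: direction from the camera toward the surface point (assumed convention)
// normal:   interpolated surface normal
if (dot(view_dir, normal) > 0.0f)
    normal = -normal;   // normal points away from the camera, so flip it before lighting
```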

I haven't looked up view_dir dot normal yet - is it correct?  I assume so, but want to be certain.  sure wish i remembered some of the linear algebra i took. got an A and can't remember jack!

and it is the camera's "look at" direction vector that he's referring to as view direction, right?  that doesn't have to be normalized, does it? don't think so...

any reason not to go with algo #2 ?

anyone heard of algo #2 before, or did Buckeye invent it?

Edited by Norman Barrows

##### Share on other sites

If you're using Shader Model 3, it exposes a hardware register that tells you whether you're running a pixel shader for the CW or CCW side of a face, via the VFACE semantic. You can use this to conditionally flip the normal or not, e.g.

float4 ps_main( Interpolator i, float vface : VFACE ) : COLOR0
{
    float3 normal = vface < 0 ? -i.normal : i.normal;
    // ... lighting calculations using the corrected normal ...
    return float4( normal * 0.5f + 0.5f, 1.0f );   // placeholder output
}

and it is the camera's "look at" direction vector that he's referring to as view direction, right?  that doesn't have to be normalized, does it? don't think so...

If you use the per-pixel direction from-pixel-to-camera, then it works, which is (camera_position - pixel_position) or that negated, depending on convention.
If you just use the camera's "forward vector" then it doesn't work.
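In HLSL that per-pixel vector looks something like the following (a sketch, assuming the vertex shader passes the world-space position through to the pixel shader; names are made up):

```hlsl
// camera_position: world-space camera position, set as a shader constant
float3 to_camera = camera_position - i.world_pos;   // per-pixel, surface toward camera
float3 normal = i.normal;
if (dot(to_camera, normal) < 0.0f)   // normal faces away from the camera
    normal = -normal;
// only the sign of the dot product matters, so neither vector needs to be normalized here
```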

##### Share on other sites

#2 (with the register mentioned by Hodgman, not a camera vector) is the industry-standard way.  A few games that use it:

Star Ocean 4
Star Ocean 5
Phantasy Star NOVA
Silent Scope: Bone Eater
Final Fantasy XV
Many more…

L. Spiro

Thanks all!

##### Share on other sites
Interpolator i,

I'm not familiar with this syntax. HLSL 4 interpolation?

Right now i'm doing lighting in the vertex shader, same as the Basic HLSL sample code:

// Compute simple directional lighting equation
float3 vTotalLightDiffuse = float3(0,0,0);
for( int i=0; i<nNumLights; i++ )
    vTotalLightDiffuse += g_LightDiffuse[i] * max(0, dot(vNormalWorldSpace, g_LightDir[i]));

Output.Diffuse.rgb = g_MaterialDiffuseColor * vTotalLightDiffuse + g_MaterialAmbientColor * g_LightAmbient;
Output.Diffuse.a = 1.0f;



I'm only using a single directional light (IE the sun or the moon).

Since VFACE only works in pixel shaders, not vertex shaders, it looks like i just need to pass the normal to the pixel shader, move the lighting equations to the pixel shader, and add the normal flip based on VFACE.
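Putting those pieces together, the reworked shaders might look like this (a sketch; structure and names are assumptions based on the Basic HLSL sample, not tested code):

```hlsl
struct VS_OUT
{
    float4 Position : POSITION;
    float3 Normal   : TEXCOORD0;   // world-space normal, interpolated for the pixel shader
};

float4 PS( VS_OUT In, float vface : VFACE ) : COLOR0
{
    // flip the interpolated normal when shading the back side of the triangle
    float3 normal = normalize( vface < 0 ? -In.Normal : In.Normal );

    // same single-directional-light equation as before, now evaluated per pixel
    float3 diffuse = g_LightDiffuse[0] * max(0, dot(normal, g_LightDir[0]));

    float4 color;
    color.rgb = g_MaterialDiffuseColor * diffuse + g_MaterialAmbientColor * g_LightAmbient;
    color.a   = 1.0f;
    return color;
}
```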

But then where does camera angle come in?

These will be plant models made of transparent quads.  I'm not re-mapping normals based on object center yet, so all vertex normals - and any interpolated normal at any pixel - should be perpendicular to their triangle's face.  but as you rasterize across a tri, the angle to the camera changes - giving you your Gouraud-type gradient shading. without camera angle, i would think that you'd get lit flat shading of faces.

OK, i'm getting confused here. been too long since i wrote a pipeline. I only needed camera angle to determine backfacing.  VFACE does that now, so i don't need camera angle.  but is it true what i said about the normals for a tri all being the same, and thus you get flat shading per face?

Wait... it's because it's a directional light, not a point light. So the lightdir is uniform everywhere in the scene.  with a point light, you'd get a change in lightdir from one vertex to the next, and thus a change in lighting from one vertex to the next, and thus a change in interpolated lighting from one pixel to the next. right?
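For reference, the difference is just where the light direction comes from (a sketch; the point-light constant name is assumed):

```hlsl
// directional light: one direction for the whole scene, so diffuse is constant per face
float3 lightDir = g_LightDir[0];

// point light: direction varies per vertex (or per pixel), so diffuse varies across a face
float3 lightDirPoint = normalize(g_LightPos - vWorldPos);
```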

Also, should i be concerned about a performance hit moving the lighting equations to the pixel shader?    doing lighting once per pixel instead of just once per vertex?

Edited by Norman Barrows

##### Share on other sites

If you use the per-pixel direction from-pixel-to-camera, then it works, which is (camera_position - pixel_position) or that negated, depending on convention. If you just use the camera's "forward vector" then it doesn't work.

works as in: it can determine on a pixel per pixel basis if the normal is backwards?

##### Share on other sites

I'm not familiar with this syntax. HLSL 4 interpolation?

Looks like a custom defined struct:

struct Interpolator
{
    float3 vNormal  : NORMAL;
    float3 vTangent : TEXCOORD0;

    // ...
};


But then where does camera angle come in?

Just like in the vertex shader? g_LightDir should be available there as well; if not, you have to make it available by setting it as a PS constant (I don't remember the DX9 syntax that well anymore). From there on your calculation is just the same: you pass the normal from the vertex shader to the pixel shader, and pretty much copy-paste your lighting code into the pixel shader.

##### Share on other sites
Looks like a custom defined struct:

yeah, that i.normal kind of gives it away. <g>   but i've never seen that type of shader code.  what's the theory behind it?  per-pixel shading or something?

Just like in the vertex-shader? g_LightDir should be available there as well

Edited by Norman Barrows

##### Share on other sites
yeah, that i.normal kind of gives it away. but i've never seen that type of shader code. what's the theory behind it? per-pixel shading or something?

Pretty much any time you have to return multiple values from a shader stage, you use a struct (I think there's some sort of out-parameter syntax as well, but I'm not sure).

So per-pixel shading, multiple render targets, etc. all use such a struct and return it:

struct GBuffer
{
    float4 albedo   : COLOR0;
    float4 normals  : COLOR1;
    float4 specular : COLOR2;
};

GBuffer mainPS(VertexOutput o)
{
    GBuffer gbuffer;
    // ... fill in albedo, normals, specular ...
    return gbuffer;
}

Besides, it makes sharing stage input/output easier (as you can see, I can reuse the VS output as the PS input, so if I have to change something it automatically applies to all uses). Other than that, for generated shader code those structures are pretty worthwhile as well, as they make it much easier to generate stage IO than if you had to use individual attributes.

Edited by Juliean
