Norman Barrows

two sided lighting - which algo?


So I've come across two different algorithms for two-sided lighting:

1. Turn culling on, set the winding order to CCW, set the shader parameter invert_normals to false, and draw; then set the winding order to CW, set invert_normals to true, and draw again. This seems to be a popular algorithm (a sketch of the shader side follows below, after #2). An example can be found here:

http://www.fairyengine.com/articles/hlsl2sided.htm

 

2. Turn culling off and pass the camera direction vector as a parameter to the shader. In the shader, if view_dir dot normal > 0, the normal is backfacing, so invert it before doing the lighting calculations. I have only seen this mentioned in one place - right here, by our very own Buckeye:

https://www.gamedev.net/topic/482343-draw-2-faces-vs-cull_none-speed/?hl=%2Btwo+%2Bsided+%2Blighting#entry4156668

He started with a single quad that had two normals per vertex, facing opposite directions, then realized he could just do the view_dir dot normal test instead. In the thread he says it's faster than algo #1 in his testing. Makes sense - one draw call instead of two, no cull-testing everything twice, and the same number of tris rasterized either way.
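A minimal sketch of the shader side of both approaches, under illustrative names (invert_normals comes from the description above; view_dir, g_LightDir, and g_LightDiffuse are my own placeholders, not taken from either link):

// Algo #1: the app draws twice, flipping the cull winding and this uniform
// between passes; the shader just negates the normal on the second pass.
bool   invert_normals;   // false for the CCW pass, true for the CW pass
float3 g_LightDir;       // direction toward the directional light
float3 g_LightDiffuse;   // light color

float3 diffuse_light( float3 normal_ws )
{
    float3 n = invert_normals ? -normal_ws : normal_ws;
    return g_LightDiffuse * max( 0, dot( n, g_LightDir ) );
}

// Algo #2: one draw with culling off; flip the normal when it faces away
// from the camera (view_dir = direction from the camera toward the surface).
float3 diffuse_light_2sided( float3 normal_ws, float3 view_dir )
{
    float3 n = ( dot( view_dir, normal_ws ) > 0 ) ? -normal_ws : normal_ws;
    return g_LightDiffuse * max( 0, dot( n, g_LightDir ) );
}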

 

I haven't verified the view_dir dot normal test yet - is it correct? I assume so, but want to be certain. Sure wish I remembered some of the linear algebra I took. Got an A and can't remember jack!

And it is the camera's "look at" direction vector that he's referring to as the view direction, right? That doesn't have to be normalized, does it? Don't think so...

Any reason not to go with algo #2?

Has anyone heard of algo #2 before, or did Buckeye invent it?

Edited by Norman Barrows


If you're using Shader Model 3, it exposes a hardware register that tells you whether the pixel shader is running on the CW or CCW side of a face, via the VFACE semantic. You can use this to conditionally flip the normal, e.g.

float4 ps_main( Interpolator i, float vface : VFACE ) : COLOR0
{
   // VFACE is negative for back-facing primitives, positive for front-facing.
   float3 normal = vface < 0 ? -i.normal : i.normal;
   // ... do the lighting with the corrected normal, then return the color ...
   return float4( normal * 0.5f + 0.5f, 1.0f ); // placeholder so it compiles
}

And it is the camera's "look at" direction vector that he's referring to as the view direction, right? That doesn't have to be normalized, does it? Don't think so...

If you use the per-pixel direction from the pixel to the camera - (camera_position - pixel_position), or that negated, depending on convention - then it works.
If you just use the camera's "forward vector", it doesn't.
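A minimal sketch of that test, assuming the vertex shader passes the world-space position through in a texcoord (camera_position, i.world_pos, and i.normal are illustrative names):

// Only the sign of the dot product matters, so no normalize is needed.
float3 to_camera = camera_position - i.world_pos;  // pixel toward camera
float3 n = ( dot( to_camera, i.normal ) < 0 ) ? -i.normal : i.normal;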


#2 (with the register mentioned by Hodgman, not a camera vector) is the industry-standard way.  A few games that use it:

Star Ocean 4
Star Ocean 5
Phantasy Star NOVA
Silent Scope: Bone Eater
Final Fantasy XV
Many more…


L. Spiro

Interpolator i,

I'm not familiar with this syntax. HLSL 4 interpolation? 

Right now I'm doing lighting in the vertex shader, same as the BasicHLSL sample code:

// Compute a simple directional lighting equation.
float3 vTotalLightDiffuse = float3( 0, 0, 0 );
for ( int i = 0; i < nNumLights; i++ )
    vTotalLightDiffuse += g_LightDiffuse[i] * max( 0, dot( vNormalWorldSpace, g_LightDir[i] ) );
Output.Diffuse.rgb = g_MaterialDiffuseColor * vTotalLightDiffuse + g_MaterialAmbientColor * g_LightAmbient;
Output.Diffuse.a = 1.0f;


I'm only using a single directional light (i.e. the sun or the moon).

Since VFACE only works in pixel shaders, not vertex shaders, it looks like I just need to pass the normal to the pixel shader, move the lighting equations to the pixel shader, and add the normal flip based on VFACE.

But then where does camera angle come in?

These will be plant models made of transparent quads. I'm not re-mapping normals based on object center yet, so all vertex normals - and any interpolated normal at any pixel - should be perpendicular to their triangle's face. But as you rasterize across a tri, the angle to the camera changes, giving you your Gouraud-type gradient shading. Without the camera angle, I would think you'd get flat shading of lit faces.

OK, I'm getting confused here. It's been too long since I wrote a pipeline. I only needed the camera angle to determine backfacing, and VFACE does that now, so I don't need the camera angle. But is it true what I said about the normals for a tri all being the same, and thus you get flat shading per face?

 

Wait... it's because it's a directional light, not a point light. So the light direction is uniform everywhere in the scene. With a point light, you'd get a change in light direction from one vertex to the next, and thus a change in lighting from one vertex to the next, and thus a change in interpolated lighting from one pixel to the next. Right?
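A minimal sketch of the difference, assuming n is the (possibly flipped) world-space normal and the world-space position is interpolated into the pixel shader (g_PointLightPos and i.world_pos are illustrative names):

// Directional light: one light direction for the whole scene, so with flat
// face normals every pixel of a tri gets the same diffuse term.
float3 diffuse_dir = g_LightDiffuse[0] * max( 0, dot( n, g_LightDir[0] ) );

// Point light: the light direction varies per pixel, so a shading gradient
// appears even when the interpolated normal is constant across the tri.
float3 light_dir = normalize( g_PointLightPos - i.world_pos );
float3 diffuse_point = g_LightDiffuse[0] * max( 0, dot( n, light_dir ) );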

 

Also, should I be concerned about a performance hit from moving the lighting equations to the pixel shader - doing lighting once per pixel instead of once per vertex?

Edited by Norman Barrows


If you use the per-pixel direction from the pixel to the camera - (camera_position - pixel_position), or that negated, depending on convention - then it works. If you just use the camera's "forward vector", it doesn't.

Works as in: it can determine, on a per-pixel basis, whether the normal is backwards?


I'm not familiar with this syntax. HLSL 4 interpolation?

 

Looks like a custom defined struct:

struct Interpolator
{
     float3 vNormal : NORMAL;
     float3 vTangent : TEXCOORD0;

     // ...
};

But then where does camera angle come in?

 

Just like in the vertex shader? g_LightDir should be available there as well; if not, you have to make it available by setting it as a pixel-shader constant (I don't remember the DX9 syntax that well anymore). From there on, the calculation is just the same: you pass the normal from the vertex shader to the pixel shader and pretty much copy-paste your lighting code over.
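Putting the pieces of the thread together, a minimal sketch of what that pixel shader could look like, reusing the BasicHLSL-style globals quoted above (this Interpolator layout is my guess, not Hodgman's actual struct):

struct Interpolator
{
    float3 normal : TEXCOORD0;   // world-space normal from the vertex shader
};

float4 ps_main( Interpolator i, float vface : VFACE ) : COLOR0
{
    // Flip the interpolated normal on back faces, then light as before.
    float3 n = normalize( vface < 0 ? -i.normal : i.normal );

    float3 vTotalLightDiffuse = float3( 0, 0, 0 );
    for ( int j = 0; j < nNumLights; j++ )
        vTotalLightDiffuse += g_LightDiffuse[j] * max( 0, dot( n, g_LightDir[j] ) );

    float3 rgb = g_MaterialDiffuseColor * vTotalLightDiffuse
               + g_MaterialAmbientColor * g_LightAmbient;
    return float4( rgb, 1.0f );
}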

Looks like a custom defined struct:

Yeah, the i.normal kind of gives it away. <g> But I've never seen that type of shader code. What's the theory behind it? Per-pixel shading or something?

 

 

Just like in the vertex shader? g_LightDir should be available there as well

Yeah, lightDir is a global, so the pixel shader has access. Basically I just moved all the lighting to the pixel shader - no noticeable impact on performance. Got all that in the game now: instanced two-sided lighting with wind (fields of grass and plants), non-instanced two-sided lighting with wind (small plants), and non-instanced two-sided lighting with wind based on distance from the ground (trees). The shader lighting is hooked up to the sun/moon light in the game, and I'm just waiting for the sun to rise in the long-term playtest game so I can adjust the material values sent to the shaders. There's still more I can do: re-map plant normals based on plant center, and make all the shaders do wind based on distance from the ground instead of the tex.V value.

I switched prairie grasslands to instanced drawing, and it looks like that's the last place I'll require instancing - everything else is fast enough without it. Right now I'm at: 60 fps, vsync on; 20K-30K draw calls per frame; ~200,000 plants in visible range (worst case); max of 25K meshes per chunk; chunks are 300 ft (100 m) wide; 6 mesh/texture combos per chunk for instanced drawing (so 6 instance VBs per chunk). I put a timer on generate_chunk, found the bottleneck, and fixed it. Now the real-time chunk generation from the underlying world map data is lightning fast again, so once again it's a truly seamless 2500-mile-wide world you can walk across with no load screens or paging off the hard drive. Well - at least until you get to an ocean, then you have to build a raft first! <g>

One surprising thing was that for terrain with just a few plants, and for trees, I was able to switch between fixed function and shaders on the fly on a per-mesh basis. This was only slow in prairie terrain, which is why I went to instanced shading there. So for 200,000 plants in vis range I use instancing, and for some fraction of up to 25K meshes per chunk I just switch between fixed function and shaders on the fly - and I'm not even using a state manager for that yet.

Edited by Norman Barrows

Yeah, the i.normal kind of gives it away. <g> But I've never seen that type of shader code. What's the theory behind it? Per-pixel shading or something?

 

Pretty much any time you have to return multiple values from a shader stage, you use a struct (I think there's some sort of out-parameter syntax as well, but I'm not sure).

 

So per-pixel shading, multiple render targets, etc. all use such a struct and return it:

struct GBuffer
{
     float4 albedo : COLOR0;
     float4 normals : COLOR1;
     float4 specular : COLOR2;
};

GBuffer mainPS( VertexOutput o )
{
       GBuffer gbuffer;
       // ... fill in gbuffer.albedo / normals / specular here ...
       return gbuffer;
}

Besides, it makes sharing stage input/output easier (as you can see, I can reuse the VS output as the PS input, so if I have to change something it automatically applies to all users). Other than that, for generated shader code those structures are pretty worthwhile as well, since they make it much easier to generate the stage I/O than if you had to use individual attributes.
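For example, a minimal sketch of that reuse in DX9-era HLSL (names are illustrative; the clip-space position goes through a separate out parameter because a ps_3_0 input can't carry the POSITION semantic):

struct VSOutput
{
    float3 normal : TEXCOORD0;
    float2 uv     : TEXCOORD1;
};

VSOutput mainVS( float4 pos : POSITION, float3 normal : NORMAL,
                 float2 uv : TEXCOORD0, out float4 hpos : POSITION )
{
    hpos = mul( pos, g_WorldViewProjection );    // assumed WVP matrix
    VSOutput o;
    o.normal = mul( normal, (float3x3)g_World ); // assumed world matrix
    o.uv     = uv;
    return o;
}

// The same struct comes back in as the pixel shader's input, so a change to
// VSOutput automatically applies to both stages.
float4 mainPS( VSOutput i ) : COLOR0
{
    return float4( normalize( i.normal ) * 0.5f + 0.5f, 1.0f );
}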

Edited by Juliean
