Would this be a good algorithm for shadowing with multiple lights?

5 comments, last by B_old 14 years, 11 months ago
Hi, I'm trying to find a simple and fast way of rendering a scene with multiple lights and shadows. I've googled and looked at books, but all the samples and tutorials I can find seem to deal with only one light. Looking at a deferred shading sample, I came up with this:

- Render geometry buffers (color and/or materials, normals and depth)
- For each light:
  - Render the geometry from the light's view to a light-specific depth buffer
  - Draw a fullscreen quad with the 'eye geometry data'
  - Only additive-blend a pixel to the backbuffer if the light can 'see' it (that is, its distance is the same as the value in the light-specific depth buffer; see the sketch below)

Is this even possible to do? It seems like a fairly sane way of doing it, since it means lighting pixels hit by a light instead of darkening pixels not hit by one. I'm not looking for something overly beautiful, just something fairly simple to implement (and preferably not too intensive on the CPU). All tutorials and book references dealing with multiple light sources are most welcome!
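In shader terms, I imagine the 'can the light see it' test would look something like this in the pixel shader of the light pass (a rough sketch only; lightViewProj, shadowMap and ShadowTerm are made-up names, and the pixel's world position is assumed to have been reconstructed from the depth buffer):

float4x4 lightViewProj;   // the light's view * projection matrix
texture shadowMap;        // depth rendered from the light's view

sampler shadowSampler = sampler_state
{
    Texture = (shadowMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

float ShadowTerm(float4 worldPosition)
{
    // project the pixel into the light's clip space
    float4 lightClip = mul(worldPosition, lightViewProj);
    lightClip /= lightClip.w;

    // clip space [-1,1] -> texture space [0,1] (y is flipped in D3D9)
    float2 shadowCoord = float2(lightClip.x, -lightClip.y) * 0.5f + 0.5f;

    // the light 'sees' this pixel if its depth matches the stored depth;
    // a small bias avoids self-shadowing acne
    float storedDepth = tex2D(shadowSampler, shadowCoord).r;
    return (lightClip.z - 0.001f <= storedDepth) ? 1.0f : 0.0f;
}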
The way we have done it so far is by mixing "traditional" deferred shading techniques with shadow mapping.

The roadmap:
- Render scene to scene buffers (Gbuffer)
- Render shadow maps from each light that is visible and needs an update (shadowmaps)
- Render each light to the light buffer, taking into account its shadow map (lightbuffer)
- Render a fullscreen quad that mixes the scene buffers with the light buffer (final result)

Does this answer your question? Shadow mapping is a fairly expensive method, since it requires you to re-render the scene for each light (a minimal sketch of the depth pass for step 2 follows below).
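For reference, the shadow-map pass in step 2 can be a plain depth-only render from the light's point of view. A minimal sketch in the same effect-file style (lightWorldViewProj is a made-up name for the combined world * light view * light projection matrix):

float4x4 lightWorldViewProj;

struct ShadowVSOutput
{
    float4 Position : POSITION0;
    float2 Depth    : TEXCOORD0;   // z and w of the projected position
};

ShadowVSOutput ShadowVS(float3 position : POSITION0)
{
    ShadowVSOutput output;
    output.Position = mul(float4(position, 1), lightWorldViewProj);
    output.Depth = output.Position.zw;
    return output;
}

float4 ShadowPS(ShadowVSOutput input) : COLOR0
{
    // store normalized depth; the light pass compares against this value
    return float4(input.Depth.x / input.Depth.y, 0, 0, 0);
}

technique Shadow
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 ShadowVS();
        PixelShader  = compile ps_2_0 ShadowPS();
    }
}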
Thanks a lot!

Is there a reason for splitting up your third and fourth step?

You mention shadow mapping is an expensive method; are there any faster methods? I can't think of any way to create shadows without rendering the geometry for each light.
The reason for that is that you need the original color of the albedo map for the lighting calculations. As for faster methods: no, I have no clue; I never really thought about that. This is the combining code, for example:

shared texture colorMap;
shared texture lightMap;

sampler colorSampler = sampler_state
{
    Texture = (colorMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};

sampler lightSampler = sampler_state
{
    Texture = (lightMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};

struct VertexShaderInput
{
    float3 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

shared float2 halfPixel;

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = float4(input.Position, 1);
    // align texel centers with pixel centers (D3D9 half-pixel offset)
    output.TexCoord = input.TexCoord - halfPixel;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = tex2D(colorSampler, input.TexCoord).rgb;
    float4 light = tex2D(lightSampler, input.TexCoord);
    float3 diffuseLight = light.rgb;
    float specularLight = light.a;
    // modulate albedo by the accumulated diffuse light and add the
    // monochrome specular term
    return float4(diffuseColor * diffuseLight + specularLight, 1);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader  = compile ps_2_0 PixelShaderFunction();
    }
}


And the code for one of the lights (directional):

// direction of the light
float3 lightDirection;
// color of the light
float3 diffuseColor;
// position of the camera, for specular light
shared float3 cameraPosition;
// this is used to compute the world position
shared float4x4 invertViewProjection;

// diffuse color, and specularIntensity in the alpha channel
shared texture colorMap;
// normals, and specularPower in the alpha channel
shared texture normalMap;
// depth
shared texture depthMap;

sampler colorSampler = sampler_state
{
    Texture = (colorMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};

sampler depthSampler = sampler_state
{
    Texture = (depthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

sampler normalSampler = sampler_state
{
    Texture = (normalMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

struct VertexShaderInput
{
    float3 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

shared float2 halfPixel;

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = float4(input.Position, 1);
    // align texture coordinates (D3D9 half-pixel offset)
    output.TexCoord = input.TexCoord - halfPixel;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // get normal data from the normalMap
    float4 normalData = tex2D(normalSampler, input.TexCoord);
    // transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    // get specular power, and get it into [0,255] range
    float specularPower = normalData.a * 255;
    // get specular intensity from the colorMap
    float specularIntensity = tex2D(colorSampler, input.TexCoord).a;

    // read depth
    float depthVal = tex2D(depthSampler, input.TexCoord).r;

    // compute screen-space position
    float4 position;
    position.x = input.TexCoord.x * 2.0f - 1.0f;
    position.y = -(input.TexCoord.y * 2.0f - 1.0f);
    position.z = depthVal;
    position.w = 1.0f;
    // transform to world space
    position = mul(position, invertViewProjection);
    position /= position.w;

    // surface-to-light vector
    float3 lightVector = -normalize(lightDirection);

    // compute diffuse light
    float NdL = max(0, dot(normal, lightVector));
    float3 diffuseLight = NdL * diffuseColor.rgb;

    // reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    // camera-to-surface vector
    float3 directionToCamera = normalize(cameraPosition - position.xyz);
    // compute specular light
    float specularLight = specularIntensity *
        pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

    // output the two lights
    return float4(diffuseLight.rgb, specularLight);
}

technique Technique0
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader  = compile ps_2_0 PixelShaderFunction();
    }
}


Please don't mind the messy code ;) but this is how we did it ;)
@FeverGames:
Do the specular highlights always look white with your approach?
Quote: Original post by B_old
@FeverGames:
Do the specular highlights always look white with your approach?


Yes, unfortunately it does, but I thought it might be a good starter for you and your ideas. There are more approaches to light buffers; if I remember correctly, both the ShaderX series (5 or 6) and GPU Gems (probably 3) talked about it, but I don't have these books with me here in Canada.

Any other questions, I am glad to try to help.
Ah, something about the deferred rendering used in Tabula Rasa can be found in GPU Gems 3 (which can be read online on NVIDIA's site).
Apparently they use two render targets for the accumulation: one for diffuse lighting and the other for specular (something like the sketch below).
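A minimal sketch of what such a light pass could output, assuming both accumulation buffers are bound as multiple render targets (the struct and names are hypothetical, and diffuseLight/specularLight stand for the values computed as in the directional light shader above):

struct LightPSOutput
{
    float4 Diffuse  : COLOR0;   // diffuse light accumulation buffer
    float4 Specular : COLOR1;   // specular light accumulation buffer
};

LightPSOutput LightPS(float2 texCoord : TEXCOORD0)
{
    // diffuseLight and specularLight would be computed exactly as in
    // the directional light shader above; placeholders here
    float3 diffuseLight  = float3(0, 0, 0);
    float3 specularLight = float3(0, 0, 0);

    LightPSOutput output;
    output.Diffuse  = float4(diffuseLight, 0);
    output.Specular = float4(specularLight, 0);
    return output;
}

Since the specular term gets a full RGB target of its own, it no longer has to be packed into a single alpha channel, so it isn't forced to be white.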

This brings me to another, only slightly related question:
Is texture.Sample(sampler, coords).a cheaper than texture.Sample(sampler, coords).rgba in terms of bandwidth, or doesn't it make a difference? I hope I am not distracting from the original topic too much.

This topic is closed to new replies.
