jad_salloum

HLSL more than 1 light in shader


Hi guys, I just want to know how I could put 2 or 3 lights together in the same shader. Also, is it possible to apply multiple effects to the same object?

You can achieve the effect of multiple lights through a shader in different ways.
Each method has its advantages and drawbacks.

One would be to just pack all 3 lights into the same shader: do the lighting calculation 3 times, passing unique arguments for each light, then add the 3 resulting pixel colors together and return the final color.
The disadvantage of this approach is that you end up with a specific solution for a specific problem, so you won't be very flexible, say you have just 2 lights in one scene but 4 in another. Another drawback is the more complex shader, which will probably take longer to execute.
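For illustration, here is a minimal sketch of that packed approach as a pixel shader. The names NUM_LIGHTS, lightDirections and lightColors are invented for this example; a real shader would also fold in the texture and specular terms like the code further down in this thread:

#define NUM_LIGHTS 3

float3 lightDirections[NUM_LIGHTS]; // direction each light shines in, normalized, set by the application
float4 lightColors[NUM_LIGHTS];

float4 MultiLightPS(float3 normal : TEXCOORD0) : COLOR0
{
    float3 n = normalize(normal);
    float4 result = 0;

    // Evaluate the same diffuse term once per light and sum the contributions
    for (int i = 0; i < NUM_LIGHTS; ++i)
        result += lightColors[i] * saturate(dot(n, -lightDirections[i]));

    return saturate(result);
}

With constant loop bounds the ps_2_0 compiler simply unrolls the loop, which is exactly the "do the calculation 3 times" described above.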

You could also write a generic shader for 1 light only and do it the multipass way: render the scene once per light, don't clear the buffers in between, and additively blend the resulting color buffers. The drawback is that in a large scene you will have to push the geometry over the bus several times, which may easily result in slower frame times again.
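In D3DX effect-file terms, the multipass idea can be sketched like this. TransformOneLight and ShadeOneLight are hypothetical one-light shaders, and the application is assumed to change the light constants between passes:

technique MultipassLighting
{
    // First light: render the geometry normally
    pass Light0
    {
        VertexShader = compile vs_1_1 TransformOneLight();
        PixelShader = compile ps_2_0 ShadeOneLight();
    }

    // Each further light: render the geometry again and add its
    // contribution to what is already in the color buffer
    pass Light1
    {
        AlphaBlendEnable = TRUE;
        SrcBlend = ONE;
        DestBlend = ONE;
        ZWriteEnable = FALSE;  // the depth buffer is already filled
        ZFunc = LESSEQUAL;     // accept the pixels laid down in pass 0
        VertexShader = compile vs_1_1 TransformOneLight();
        PixelShader = compile ps_2_0 ShadeOneLight();
    }
}

The ONE/ONE blend is what makes the passes sum up, the same addition that approach 1 does inside a single shader.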

Another approach is so-called deferred lighting. Create a framebuffer object, map it to a texture, render the geometry once, and write the per-pixel normals through a shader into the color values of your mapped framebuffer object. Then have a generic shader for a single light and just draw a fullscreen quad (two triangles) with the just-created "fullscreen normal map" as an argument. Repeat this last step for each light and blend the results.
You will have only a little overhead here and no extra geometry overhead at all. Nevertheless, if you want to pass additional attributes such as per-pixel specular exponents or height values for parallax mapping, you will have to create more than one framebuffer object, resulting in geometry overhead after all, and you might easily reach the limit of bindable texture units on older hardware...
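A cut-down HLSL sketch of those two steps (all names are invented for the example; the render-target plumbing happens on the application side):

// Pass A: render the geometry once and store per-pixel normals
float4 GBufferPS(float3 worldNormal : TEXCOORD0) : COLOR0
{
    // Pack the normal from [-1, 1] into the [0, 1] color range
    return float4(normalize(worldNormal) * 0.5f + 0.5f, 1.0f);
}

// Pass B: one fullscreen quad per light, additively blended
sampler NormalBuffer;   // the texture written in pass A
float3 lightDir;        // set per light by the application
float4 lightColor;

float4 DeferredLightPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Unpack the stored normal and evaluate this light against it
    float3 n = tex2D(NormalBuffer, uv).xyz * 2.0f - 1.0f;
    return lightColor * saturate(dot(normalize(n), -lightDir));
}

Each light's fullscreen quad is drawn with the same additive blending as in the multipass case, so the per-light results sum in the color buffer.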

It's up to you to choose. What do you want to use it for, anyway?
If it's for a demo project, I would probably go for solution 1). For a complete game, use 2) or 3), though 2) is probably the most commonly used.

There might be additional appropriate solutions I'm forgetting right now, so better wait for more replies...


Quote:
Original post by ZMaster

Another approach is so-called deferred lighting. Create a framebuffer object, map it to a texture, render the geometry once, and write the per-pixel normals through a shader into the color values of your mapped framebuffer object. Then have a generic shader for a single light and just draw a fullscreen quad (two triangles) with the just-created "fullscreen normal map" as an argument. Repeat this last step for each light and blend the results.
You will have only a little overhead here and no extra geometry overhead at all. Nevertheless, if you want to pass additional attributes such as per-pixel specular exponents or height values for parallax mapping, you will have to create more than one framebuffer object, resulting in geometry overhead after all, and you might easily reach the limit of bindable texture units on older hardware...




Thanks for your reply, ZMaster. It's great to know all these ways, but I think the 3rd way is the most appropriate for me because I have a big terrain with lots of vertices. I am using multiple lights because when I use only 1 light, part of my terrain is shown and the other parts look transparent. Please check the picture:
http://tinypic.com/javz29.jpg

But I don't know how to implement that approach. Do you have any post or anything else that could help me do it?

Also, please check my shader code to see if the problem is in it.

Quote:

float4x4 worldMatrix : WORLD;                       // World matrix for object
float4x4 worldViewProjection : WORLDVIEWPROJECTION; // World * View * Projection matrix
float4 lightDirection;                              // Direction of the light
float4 eyeVector;                                   // Vector for the eye location

texture SceneTexture;

sampler SceneSampler =
sampler_state
{
    Texture = <SceneTexture>;
};


struct PER_PIXEL_OUT
{
    float4 Position : POSITION;
    float2 TexCoords : TEXCOORD0;
    float3 LightDirection : TEXCOORD1;
    float3 Normal : TEXCOORD2;
    float3 EyeWorld : TEXCOORD3;
};

PER_PIXEL_OUT TransformSpecularPerPixel(float4 pos : POSITION, float3 normal : NORMAL,
                                        float2 uv : TEXCOORD0)
{
    PER_PIXEL_OUT Output = (PER_PIXEL_OUT)0;

    // Transform position
    Output.Position = mul(pos, worldViewProjection);

    // Store uv coords
    Output.TexCoords = uv;

    // Store the light direction
    Output.LightDirection = lightDirection.xyz;

    // Transform the normal into world space and normalize it
    // (cast to float3x3 so the translation part of the matrix is ignored)
    Output.Normal = normalize(mul(normal, (float3x3)worldMatrix));

    // Transform the vertex into world space
    // (a position should not be normalized)
    float3 worldPosition = mul(pos, worldMatrix).xyz;

    // Store the eye vector
    Output.EyeWorld = normalize(eyeVector.xyz - worldPosition);

    // Return the data
    return Output;
}

float4 TextureColorPerPixel(
    float2 uvCoords : TEXCOORD0,
    float3 lightDirection : TEXCOORD1,
    float3 normal : TEXCOORD2,
    float3 eye : TEXCOORD3) : COLOR0
{
    // Normalize our vectors
    float3 normalized = normalize(normal);
    float3 light = normalize(lightDirection);
    float3 eyeDirection = normalize(eye);

    // Store our diffuse component (a scalar N.L term)
    float diffuse = saturate(dot(light, normalized));

    // Calculate specular component
    float3 reflection = normalize(2 * diffuse * normalized - light);
    float4 specular = pow(saturate(dot(reflection, eyeDirection)), 8);

    float4 textureColorFromSampler = tex2D(SceneSampler, uvCoords);

    // Return the combined color
    return textureColorFromSampler * diffuse + specular;
}

technique RenderSceneSpecularPerPixel
{
    pass P0
    {
        VertexShader = compile vs_1_1 TransformSpecularPerPixel();
        PixelShader = compile ps_2_0 TextureColorPerPixel();
    }
}






jad_salloum,

Could you please post your implementation code for the shader? I'm having trouble implementing my own shader file.

Thanks,

Devin

Quote:
Original post by jad_salloum

Thanks for your reply, ZMaster. It's great to know all these ways, but I think the 3rd way is the most appropriate for me because I have a big terrain with lots of vertices. I am using multiple lights because when I use only 1 light, part of my terrain is shown and the other parts look transparent. Please check the picture:
http://tinypic.com/javz29.jpg

But I don't know how to implement that approach. Do you have any post or anything else that could help me do it?


Yes, there's some information available on the net. There's also an article about it in "Real-Time Rendering", if you own that book.

Here are some links:
Paul's Projects
Nvidia Dev Page
ATI Dev Page

The first uses the OpenGL API. Nevertheless, look at the code; it stays basically the same except for the API calls.
You might find even more on developer.nvidia.com.

By the way, just because your terrain is so big doesn't mean that you can't go for approach 2. Arrange your terrain in a quadtree or octree and only render the cells that are affected by the current light source. Also make sure to "cull" light sources that do not affect any of the visible terrain (frustum culling).
Using this, you might well be down to only 1 or 2 light sources per frame and much less geometry data.
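The per-light culling test is just a bounding sphere against the view frustum. In practice it runs on the CPU, but written in HLSL-style syntax to match the rest of the thread it looks roughly like this (all names invented; the six planes are assumed to have inward-pointing normals):

// Returns false if the light's sphere of influence lies completely
// outside any one of the six frustum planes
bool LightVisible(float4 planes[6], float3 lightPos, float lightRadius)
{
    for (int i = 0; i < 6; ++i)
    {
        // Signed distance from the light centre to plane i
        if (dot(planes[i].xyz, lightPos) + planes[i].w < -lightRadius)
            return false;
    }
    return true;
}

Lights that fail this test can be skipped for the frame, and the same sphere can be tested against each quadtree cell to limit which terrain chunks each light pass has to draw.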
