Shader lighting

Started by Pfhoenix
2 comments, last by Muhammad Haggag 18 years, 5 months ago
I've been spending a long time trying to figure this out, but perhaps I've been looking at the code for too long. I'm using a vertex shader and pixel shader to render my planet scene. Given the code below, I can have it render accurately at full brightness (bLit = false), but having it process lighting is proving to be a conundrum. Here's the breakdown:

g_mWorld : world matrix, built and uploaded to the shader per planet
g_mView : the camera view matrix, built and uploaded to the shader per frame
g_mProjection : the projection matrix, built and uploaded to the shader per frame
g_LightPosition : position of the light source, in world-space coordinates

As all I'm rendering currently is a sphere mesh, each vertex has a position, a normal, and texture coordinates. Very simple, very basic. The lighting model is a point source, i.e. a light that shines to infinite range in all directions.

Given the code below, I cannot get the planets to light up properly. I've checked that the light position is right: if I leave the planets unlit, they appear properly textured and in the right locations. Somehow, though, they do not get lit properly. Can anyone take a quick look at my shader code and see if there's something obvious that I'm simply missing? Keep in mind that this is using an Effects class (thank you, D3DX).

float4 g_LightPosition;
float4x4 g_mView;
float4x4 g_mWorld;
float4x4 g_mProjection;
texture g_PlanetTexture;

sampler MeshTextureSampler = sampler_state
{
    texture = <g_PlanetTexture>;
    mipfilter = POINT;
    minfilter = LINEAR;
    magfilter = LINEAR;
};

struct VS_OUTPUT
{
	float4 Position : POSITION;
	float2 TextureUV : TEXCOORD0;
	float4 Color : COLOR0;
};

struct PS_OUTPUT
{
	float4 Color : COLOR;
};

VS_OUTPUT RenderSimplePlanetVS(float4 Pos : POSITION, float4 Normal : NORMAL, float2 Tex : TEXCOORD0, uniform bool bLit)
{
	VS_OUTPUT Output;
	float4x4 worldviewproj;
	float4 worldNormal;
	float4 p;
	float f;
	
	worldviewproj = mul(mul(g_mWorld, g_mView), g_mProjection);
	// turn the vertex normal to world space
	worldNormal = normalize(mul(Normal, g_mWorld));
	// turn the position to world space
	p = mul(Pos, g_mWorld);
	// get the actual position
	Output.Position = mul(Pos, worldviewproj);
	// Color is the light modifier of the vertex
	if (bLit)
	{
		f = max(0, dot(normalize(g_LightPosition - p), worldNormal));
		Output.Color.r = f;
		Output.Color.g = f;
		Output.Color.b = f;
	}
	else
	{
		Output.Color.r = 1.0f;
		Output.Color.g = 1.0f;
		Output.Color.b = 1.0f;
	}
	Output.Color.a = 1.0f;
	Output.TextureUV = Tex;

	return Output;
}

PS_OUTPUT RenderSimplePlanetPS(VS_OUTPUT In)
{
	PS_OUTPUT Output;
	
	Output.Color = tex2D(MeshTextureSampler, In.TextureUV) * In.Color;
	
	return Output;
}


Thanks.
- Pfhoenix
Hi,

replace
worldNormal = normalize(mul(Normal, g_mWorld));

by
worldNormal = normalize(mul(Normal, (float3x3)g_mWorld));


Because if you have a translation in your matrix, it will screw up your normal (a normal should only be rotated / scaled, but never moved. It's not a point, it's a direction)

And also, you're computing the worldViewProj matrix for each vertex ... compute it and send it once for each model, it will save some computing in your VS ^^
That worked! Great, thanks. =)
- Pfhoenix
Quote:Original post by paic
Because if you have a translation in your matrix, it will screw up your normal (a normal should only be rotated / scaled, but never moved. It's not a point, it's a direction)

A normal should only be rotated, not scaled or translated. In general, a normal is transformed using the transpose of the inverse of the matrix. However, if the matrix is orthonormal - which is the case for the upper-left 3x3 matrix with rotation only - then its transpose is also its inverse. So the transposition is cancelled out by the inversion, and you end up being able to transform the normal using the matrix directly.

This topic is closed to new replies.
