# UnitBrightness = World * UnitScale & Light Falloff * UnitScale?!? [SOLVED]


## Recommended Posts

Bloody Internet Explorer! Let me write this out for a third time, as best I can...

I have a teapot in a room. The teapot is very small and the room is scaled at 1. To make the teapot bigger I multiply the world matrix's scale by 3 before drawing it, so it appears 3 times bigger on screen. The flaw in this is that the lighting shader then sees the mesh at 3 times its original distance from the lights. So although the teapot is inside the room, right next to brightly lit surfaces, each pixel is treated as three times further from the light. I can multiply the light's falloff by the same scale factor I am multiplying my teapot/world by, but because falloff is squared rather than linear, the teapot still gets darker as it is scaled up.

Without shouting "unit space" (which, as far as I am concerned, is just an object's matrix multiplied by the world), can anyone give a solution that does not involve modifying a vertex buffer?

Unless you modify the vertex buffer (which is not my cup of tea), you're always going to have to affect world space. So in a world where we use shaders to draw lights, how does anyone justify, or rather correct, the behaviour of D3DXMatrixScaling when applied to shaders?

Don't tell me everyone multiplies the falloff of their lights based on object sizes. I really doubt that everyone using squared falloff multiplies it by an inverse-squared value before drawing every object, just so things appear correctly lit regardless of their scale. That is the only logical solution I can think of, and it sounds terribly inefficient.

Here is the issue in image form.

[Edited by - EnlightenedOne on August 31, 2010 12:11:32 PM]

##### Share on other sites
I think you need to read up on matrices.

To scale an object up you don't multiply the entire matrix; you just need to change some parts of it. I assume there's a specific method to do this for the D3D matrix; if not, you'll have to write it yourself. I use XNA, which lets you create a scaling matrix.

For example to create a world matrix I use:

    return Matrix.CreateScale(Scale) * Matrix.CreateFromQuaternion(Rotation) * Matrix.CreateTranslation(Position);

##### Share on other sites
I don't quite understand the situation described, but if you're scaling objects and then having lighting problems, make sure you're renormalizing normals.

You can set the appropriate D3D render state (or glEnable(GL_NORMALIZE)).
If you're using your own vertex shader, that won't work because the state is ignored. After multiplying the normal by the world matrix, you'll need to do outNormal = normalize( worldNormal )

Cheers
Dark Sylinc

##### Share on other sites
Darg

Your matrix statement makes some sense, but unfortunately you have assumed an inefficiency without seeing my code. I use equivalents of those XNA components to perform this task, so that is not where the fault lies. But thanks for the input.

Matias Goldberg

Your normalisation concept sounds like it might work! I am going to go and play around with my shader. Unfortunately I am using parallax relief mapping with a blend map to draw my textures, so I am using a Binormal/Tangent/Normal matrix in this instance.

    //Transform the position from view space to homogeneous projection space
    OUT.Position = mul(IN.Position, WorldViewProj);
    //Compute world space position
    float4 WorldPos = mul(IN.Position, World);
    //Calculate Binormal and set Tangent Binormal and Normal matrix
    float3x3 TBNMatrix = mul(float3x3(IN.Binormal, IN.Tangent, IN.Normal), (float3x3)World);

    //Matias, are you suggesting here I now go:
    TBNMatrix = Normalise(TBNMatrix);

    for(int x=0; x < totalLights; x++)
    {
        //Compute light direction
        OUT.LightDir[x] = LightPos[x] - WorldPos;
        //Compute light direction * TBN Matrix
        OUT.LightDir[x] = mul(TBNMatrix, OUT.LightDir[x]);
    }

    //Compute view direction * TBN Matrix
    OUT.ViewDir = mul(TBNMatrix, EyePos - WorldPos);

    //Copy the texture coordinate as is
    OUT.TexCoord0 = IN.TexCoord0;

To try and define the problem: when I scale the world to draw the teapot, it is increased in scale by a factor of x, and all the distances between the lights and the object being drawn are increased by the same factor, while falloff is squared. So when the object gets bigger it gets darker and effectively further from the light. This can be alleviated by changing the light's falloff to x * falloff, and could perhaps even be corrected by something like (x * inverse of falloff squared?) * falloff. But that would mean every uniquely scaled object would need this calculation carried out for every light drawing it, per pass. That could be thousands of squared values multiplied per frame just to correct lighting after a simple scale! I am sure there is a more efficient route, so I will try this normalisation trick out.

I just hope someone knows what I am talking about :p

##### Share on other sites
My solution of multiplying the falloff by the scaling amount is inherently flawed: you can scale an object differently along each axis, but you cannot control the falloff's extent in multiple directions, so that solution is dead for lack of robustness.

Here is some code to get you into it; ignore the lack of timing control on the scaling. Also, after the earlier comments, ignore the use of a full mesh to scale: I have simplified the scene down to base components just to show that nothing other than the scale is affecting things.

    ParallaxMapEffect->SetTechnique( "ParallaxMapPointLightBlend" );
    D3DXMATRIX ViewStore = ActiveCamera->View;
    ParallaxMapEffect->SetMatrix( "WorldViewProj", &(World * ViewStore * Proj) );
    ParallaxMapEffect->SetMatrix( "World", &(World) );
    ParallaxMapEffect->SetVector( "EyePos", &(D3DXVECTOR4(ActiveCamera->Position.x, ActiveCamera->Position.y, ActiveCamera->Position.z, 1.0f)));

    ParallaxMapEffect->SetInt("totalLights", 6);
    ParallaxMapEffect->SetVector( "LightPos[0]", &(D3DXVECTOR4(25.0f, 0.0f, 0.0f, 0.0f)));
    ParallaxMapEffect->SetVector( "LightPos[1]", &(D3DXVECTOR4(15.0f, 0.0f, 0.0f, 0.0f)));
    ParallaxMapEffect->SetVector( "LightPos[2]", &(D3DXVECTOR4(0.0f, 0.0f, 0.0f, 0.0f)));
    ParallaxMapEffect->SetVector( "LightPos[3]", &(D3DXVECTOR4(-15.0f, 0.0f, 0.0f, 0.0f)));
    ParallaxMapEffect->SetVector( "LightPos[4]", &(D3DXVECTOR4(-25.0f, 0.0f, 0.0f, 0.0f)));
    ParallaxMapEffect->SetVector( "LightPos[5]", &(D3DXVECTOR4(0.0f, 10.0f, 0.0f, 0.0f)));
    ParallaxMapEffect->SetVector( "LightColor[0]", &(D3DXVECTOR4(1.0f, 0.0f, 0.0f, 1.0f)));
    ParallaxMapEffect->SetVector( "LightColor[1]", &(D3DXVECTOR4(0.0f, 1.0f, 0.0f, 1.0f)));
    ParallaxMapEffect->SetVector( "LightColor[2]", &(D3DXVECTOR4(0.0f, 0.0f, 1.0f, 1.0f)));
    ParallaxMapEffect->SetVector( "LightColor[3]", &(D3DXVECTOR4(1.0f, 1.0f, 0.0f, 1.0f)));
    ParallaxMapEffect->SetVector( "LightColor[4]", &(D3DXVECTOR4(0.0f, 1.0f, 1.0f, 1.0f)));
    ParallaxMapEffect->SetVector( "LightColor[5]", &(D3DXVECTOR4(1.0f, 1.0f, 1.0f, 1.0f)));
    ParallaxMapEffect->SetFloat( "Falloff[0]", 15.0f );
    ParallaxMapEffect->SetFloat( "Falloff[1]", 15.0f );
    ParallaxMapEffect->SetFloat( "Falloff[2]", 25.0f );
    ParallaxMapEffect->SetFloat( "Falloff[3]", 15.0f );
    ParallaxMapEffect->SetFloat( "Falloff[4]", 15.0f );
    ParallaxMapEffect->SetFloat( "Falloff[5]", 150.0f );

    x = x + 0.01;
    if (x > 20)
    {
        x = 0.01;
    }

    //Begin the shader pass
    UINT Pass, Passes;
    ParallaxMapEffect->Begin(&Passes, 0);
    for (Pass = 0; Pass < Passes; Pass++)
    {
        //Do other passes here
        ParallaxMapEffect->BeginPass(Pass);
        D3DXMATRIX temp;
        D3DXMatrixIdentity(&World);
        ParallaxMapEffect->SetMatrix( "WorldViewProj", &(World * ViewStore * Proj) );
        ParallaxMapEffect->SetMatrix( "World", &(World) );
        ParallaxMapEffect->CommitChanges();
        //Render the Room
        RoomMesh->DrawSubset(0); //note that this draws at a scale of 1.
        D3DXMatrixScaling(&temp, x, x, x);
        D3DXMatrixMultiply(&World, &World, &temp);
        ParallaxMapEffect->SetMatrix( "WorldViewProj", &(World * ViewStore * Proj) );
        ParallaxMapEffect->SetMatrix( "World", &(World) );
        ParallaxMapEffect->CommitChanges();
        TeaMesh->DrawSubset(0);
        ParallaxMapEffect->EndPass();
    }
    ParallaxMapEffect->End();

    D3DDevice->EndScene();
    D3DDevice->Present( NULL, NULL, NULL, NULL );

##### Share on other sites
Oh yeah, I forgot to mention:

    IN.Binormal = normalize(IN.Binormal);
    IN.Tangent = normalize(IN.Tangent);
    IN.Normal = normalize(IN.Normal);

I really need a way of making just the model bigger without touching the world matrix. I tried tampering with the world matrix manually, but as the scale changed, of course the lighting in the shader changed too.

I cracked it!

I had to pass the world matrix in twice: once with its scalar value, and once without it for the tangent matrix. That way I could separate the scale from the light calculations for the TBN matrix. I would not have gone there were it not for the musings on normalization, so thank you Matias Goldberg and Darg for the feedback!

Look at the success!

##### Share on other sites
Don't forget you need to use the inverted transposed world matrix when multiplying normals. Here is the code I use in the shader:

    OUT.TangentToWorld[0] = mul(IN.Tangent,  (float3x3)WorldInv);
    OUT.TangentToWorld[1] = mul(IN.BiNormal, (float3x3)WorldInv);
    OUT.TangentToWorld[2] = mul(IN.Normal,   (float3x3)WorldInv);

And in the application:

    effect.Parameters["World"].SetValue(world); //SET THE WORLD MATRIX
    effect.Parameters["WorldInv"].SetValue(Matrix.Invert(Matrix.Transpose(world))); //SET THE INVERSE WORLD MATRIX

Using just the normal world matrix will give you incorrect normal directions. It's a bit complicated to explain exactly why without diagrams; I'm sure you can google the exact reasons.

##### Share on other sites
Don't use precious vertex-to-pixel-shader memory for that!

    OUT.TangentToWorld[0] = mul(IN.Tangent,  (float3x3)WorldInv);
    OUT.TangentToWorld[1] = mul(IN.BiNormal, (float3x3)WorldInv);
    OUT.TangentToWorld[2] = mul(IN.Normal,   (float3x3)WorldInv);

You can bypass it in this instance by inverting the projection vector in the diffuse calculation rather than computing the inverse in the vertex shader.

The trade-off is a minimal speed loss instead of adding more values to the pass between vertex and pixel shader. I like to cram as many lights and projectors as I can into one pass.

    //Compute the light's attenuation
    Attn = min((ProjFalloff[x] * ProjFalloff[x]) / LenSq, 1.0f);
    //Compute the diffuse lighting amount
    //We need to invert the projection vector because we are drawing against the direction toward the eye.
    Diffuse = Attn * saturate(dot(Normal, (IN.ProjVec[x] * -1)));

But thank you for the fine input; you are right that it does have to happen at some stage! My shadow mechanism is using the world inverse, but fortunately not a TBN matrix, so I lose no memory passing between the two!

I can perform six diffuse lights and three projections in one pass at the moment, although it's going to be slower soon when I implement soft shadows, because I am going to use multiple techniques and off-screen render targets to build the scene :) I look forward to having a depth map for my projections! Just deciding on the most efficient implementation now. I am thinking of using the RGB channels of one off-screen texture to draw in the lightmap textures overlaying the scene, then seeing if I can limit projections to drawing only into those, based on which channel their projector uses.
