Someone explain this to me...

Started by
4 comments, last by Prozak 20 years ago
Under OpenGL, using Cg to compile vertex and pixel shaders, why doesn't this work when I rotate my model? Here's the vertex shader code:
// Vertex Shader

// March 25th 2004

// Copyright © [1999-2004] [Hugo Ferreira] [Positronic Dreams]


struct sOut
{
  float4 pos  : POSITION;
  float4 col  : COLOR;
  float2 tuv  : TEXCOORD0;
};

sOut main(	float4			 pos		: POSITION,
			float2			 tuv		: TEXCOORD0,			
			float3			 normal		: NORMAL,
			
            uniform float4   Kd,
            uniform float4   lp1,				// Light Position

            uniform float4x4 ModelViewProj,
            uniform float4x4 ModelView,
			uniform float4x4 ModelViewIT    )
{
  sOut OUT;
  
  OUT.pos		= mul(ModelViewProj, pos);  
  
  float3 P		= pos.xyz;
  float3 N		= normal;
  
  float4 D		= lp1;
  D[3]=1; 
  
  OUT.pos		= mul(ModelView, D);
  
  float3 L		= normalize( lp1 - P );
  D[0]			= max(dot(N, L), 0);
  
  OUT.tuv		= tuv;  
  OUT.col.x		= D[0];  
  OUT.pos		= mul(ModelViewProj, pos);  
  
  
  return OUT;	
}
Here is the pseudo-code of what I'm doing:

  glPushMatrix();
  glTranslatef( tx, ty, tz );
  glRotatef( rx, 1, 0, 0 );
  glRotatef( ry, 0, 1, 0 );
  glRotatef( rz, 0, 0, 1 );
  cgGLSetStateMatrixParameter(modelViewMatrix, CG_GL_MODELVIEW_PROJECTION_MATRIX, CG_GL_MATRIX_IDENTITY);
  cgGLSetStateMatrixParameter(ModelView, CG_GL_MODELVIEW_MATRIX, CG_GL_MATRIX_IDENTITY);
  cgGLSetStateMatrixParameter(ModelViewIT, CG_GL_MODELVIEW_MATRIX, CG_GL_MATRIX_INVERSE_TRANSPOSE);
  Draw_Model();
  glPopMatrix();

OK, so I rotate/translate the model, then I set the matrices for the Cg code, then I call the Draw_Model function to render my model. The effect I'm getting is that everything is rendered correctly, but when I rotate the model, the rotation isn't taken into effect. Am I passing the light coordinates wrong? They're passed in world space. Help please!

Salsa cooked it, your eyes eat it! [Hugo Ferreira][Positronic Dreams][Colibri 3D Engine][Entropy HL2 MOD][My DevDiary]
[Yann L.][Enginuity] [Penny Arcade] [MSDN][VS RoadMap][Humus][BSPs][UGP][NeHe]
Prozak - The GameDever formally known as pentium3id
You're doing a few things I don't quite understand.
  sOut OUT;

  OUT.pos		= mul(ModelViewProj, pos);

You transform the vertex here into OUT.pos...
  float3 P		= pos.xyz;
  float3 N		= normal;

  float4 D		= lp1;
  D[3]=1;

  OUT.pos		= mul(ModelView, D);

... then transform your light position (stored in the temp variable D) into OUT.pos, overwriting the transformed value of pos. Why do you do this? I'm not too familiar with shaders written in Cg, so you may have a reason, but from what I can see you are merely wasting cycles by performing that first transformation of pos, only to overwrite the result with transformed D, which you never use anyway. Also, you are taking a vector which you have said is in world coords (lp1), then transforming it as if it were in local coords. If you want to bring it into model space, you should use your transpose matrix to reverse transform it into model space, then store that result in some place meaningful, and actually use it in your lighting calculations to follow.
  float3 L		= normalize( lp1 - P );
  D[0]			= max(dot(N, L), 0);

Here, instead of using your transformed light position (stored in OUT.pos), you go ahead and use the un-transformed lp1 and the un-transformed vector pos.xyz. So your lp1 is in world coords, your pos.xyz is in local coords (having not been transformed), and your normal is in local coords. I can't see how this will give any meaningful result.

What you should do is reverse transform your lighting position into local coords, then calculate your L vector. As it stands now, you are mixing coordinate systems and that brings no good result. Or, you could use your transformed pos.xyz and your lp1 in world coords for the L vector, then transform your normal into world coords as well before performing the dot product.
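To make the mixing-spaces point concrete, here is a small numeric sketch (plain Python with made-up values, not Cg): a model rotated 90 degrees about Z, with the light correctly brought into model space before the dot product, versus left in world space as the original shader does.

```python
import math

def mat_vec(m, v):
    # 3x3 matrix times a column vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Model rotated 90 degrees about Z; for a pure rotation the
# inverse is just the transpose.
R     = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
R_inv = [list(row) for row in zip(*R)]

P_local     = [0.0, 0.0, 0.0]   # vertex position, model space
N_local     = [1.0, 0.0, 0.0]   # normal, model space
light_world = [0.0, 2.0, 0.0]   # light position, world space

# Correct: bring the light into model space first, then everything
# in the dot product lives in the same coordinate system.
light_local = mat_vec(R_inv, light_world)
L = normalize([l - p for l, p in zip(light_local, P_local)])
diffuse_ok = max(dot(N_local, L), 0.0)         # -> 1.0 (fully lit)

# The bug: world-space light against a model-space vertex and normal.
L_mixed = normalize([l - p for l, p in zip(light_world, P_local)])
diffuse_bad = max(dot(N_local, L_mixed), 0.0)  # -> 0.0 (wrongly dark)
```

The rotated normal actually faces the light, so the consistent version lights the vertex fully, while the mixed-space version returns zero and the lighting ignores the model's rotation entirely, which matches the symptom described above.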

If you perform the inverse transformation of lp1 before passing it into the shader (bringing it into model coords), that is one transformation per vertex you save yourself, since the light position can be reverse transformed once per model rather than once per vertex.
  OUT.tuv		= tuv;
  OUT.col.x		= D[0];
  OUT.pos		= mul(ModelViewProj, pos);

  return OUT;

Then here, you repeat the transformation of pos into OUT.pos to spit out the final transformed vertex. The first (seemingly redundant) transformation could probably be taken out in favor of this one.

Maybe something like this would work:
  sOut OUT;

  float3 P		= pos.xyz;
  float3 N		= normal;

  float4 D		= mul(ModelViewIT, lp1);  // Reverse transform lp1 into local coords
  D[3]=1;

  float3 L		= normalize( D.xyz - P );
  D[0]			= max(dot(N, L), 0);

  OUT.tuv		= tuv;
  OUT.col.x		= D[0];
  OUT.pos		= mul(ModelViewProj, pos);

  return OUT;


[edited by - VertexNormal on March 29, 2004 8:26:29 PM]
OK, I altered the vertex shader as you said; this was the result I got:
1Mb Animation Video

The shader still doesn't get rendered correctly...

EDIT: Fixed Link...


[edited by - Prozak on March 30, 2004 3:14:33 AM]
I got somewhat better results with this script, but there is still something fishy about it:
// Vertex Shader
// March 30th 2004
// Copyright © [1999-2004] [Hugo Ferreira] [Positronic Dreams]

struct sOut
{
  float4 pos  : POSITION;
  float4 col  : COLOR;
  float2 tuv  : TEXCOORD0;
};

sOut main(	float4			 pos		: POSITION,
			float2			 tuv		: TEXCOORD0,
			float4			 normal		: NORMAL,

            uniform float4   Kd,				// Parameters
            uniform float4   lp1,				// Light Position 1

            uniform float4x4 ModelViewProj,
            uniform float4x4 ModelView,
			uniform float4x4 ModelViewIT    )
{
  sOut OUT;

  OUT.pos		= mul(ModelViewProj, pos);		// Transfer Vertex into WSpace

  float4 N		= normalize(mul(ModelViewProj, normal));	// Transfer Normals into WSpace

  float4 D		= normalize( lp1 - OUT.pos );

  D[0]			= max(dot(N, D), 0);

  OUT.tuv		= tuv;

  OUT.col.rg	= 0;
  OUT.col.b		= D[0];  // Result is Blue

  return OUT;
}
The outcome of this shader seems to depend on the rotation of the model, so the rotation not being processed has been taken care of by this code... but the angles between the light source and the model still don't seem to be calculated correctly...

Here's a new animation of the problem:
Quad-200Kb

It shows a simple plane, two triangles. Its normals face the user, but when the light goes behind the plane, the expected result would be for it to fade to complete black, according to this vertex shader:
struct sOut
{
  float4 pos  : POSITION;
  float4 col  : COLOR;
  float2 tuv  : TEXCOORD0;
};

sOut main(	float4			 pos		: POSITION,
			float2			 tuv		: TEXCOORD0,
			float4			 normal		: NORMAL,

            uniform float4   Kd,				// Parameters
            uniform float4   lp1,				// Light Position 1

            uniform float4x4 ModelViewProj,
            uniform float4x4 ModelView,
			uniform float4x4 ModelViewIT  )
{
  sOut OUT;
  float4 N;		// Normal
  float4 D;		// Difference between Light and Vertex Positions

  OUT.pos		= mul(ModelViewProj, pos);		// Transfer Vertex into WSpace

  N				= mul(ModelViewProj, normalize(normal));	// Transfer Normals into WSpace

  D				= normalize( lp1 - OUT.pos );

  D[0]			= dot(N, D);	// Calculate angle between vectors
  D[0]			= max(D[0], 0);	// Negative angles become zero

  OUT.col.g		= D[0];		// Shades of Green
  OUT.col.rba	= 0;		// Set other colors to zero
  OUT.tuv		= tuv;		// Pass texture coordinates (not used)

  return OUT;
}

Does anyone know what could be going wrong? Maybe I'm not passing the transformation Matrices correctly to the shaders?


[edited by - Prozak on April 2, 2004 1:41:01 PM]
Try transforming your normal by just the modelview matrix, rather than by the modelview+projection. Including the projection transformation will project the transformed normal to screen coords, possibly removing or degrading critical information. Also, when calculating the lighting direction vector, use the position transformed by just the modelview matrix.

What may be happening is that your vectors are being deformed by the projection transformation. You should do your lighting in world space, not projected space. So try this, and see if it helps.
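The deformation is easy to verify outside the shader. Here is a rough sketch in plain Python (hypothetical matrix values, not your actual state): a rigid modelview rotation preserves the angle between two direction vectors, while a perspective projection matrix, whose aspect term scales x and y differently, does not — which is why dotting projected normals against projected light vectors gives rotation-dependent garbage.

```python
import math

def mat_vec4(m, v):
    # 4x4 matrix times a 4-component column vector
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def dot3(a, b):
    return sum(x * y for x, y in zip(a[:3], b[:3]))

def norm3(v):
    n = math.sqrt(dot3(v, v))
    return [c / n for c in v[:3]]

# An OpenGL-style perspective matrix: 90-degree fov, 2:1 aspect,
# near = 1, far = 100 (made-up values for the demonstration)
cot, aspect, near, far = 1.0, 2.0, 1.0, 100.0
proj = [[cot / aspect, 0, 0, 0],
        [0, cot, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0]]

# A rigid rotation (90 degrees about Z), like the modelview part applies
rot = [[0, -1, 0, 0],
       [1,  0, 0, 0],
       [0,  0, 1, 0],
       [0,  0, 0, 1]]

# Two perpendicular direction vectors (w = 0)
a = [ 1.0, 1.0, 0.0, 0.0]
b = [-1.0, 1.0, 0.0, 0.0]

# The rotation keeps them perpendicular ...
cos_mv = dot3(norm3(mat_vec4(rot, a)), norm3(mat_vec4(rot, b)))      # -> 0.0

# ... but the projection skews the angle between them
cos_proj = dot3(norm3(mat_vec4(proj, a)), norm3(mat_vec4(proj, b)))  # -> 0.6
```

So transforming normals and positions by ModelViewProj, as the shaders above do, bakes this distortion into every N·L term; transforming by ModelView alone keeps the angles intact.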

This topic is closed to new replies.
