Problem regarding glNormal and per fragment lighting


Hello people, I was playing around with a tutorial from CodeSampler.com that demonstrates per-fragment lighting and ran into the following problem. First of all, I'll include the shader code.

Vertex Shader:
//The normal for computing our lighting.
varying vec3 normal;

//Position of the vertex
varying vec4 pos;

void main( void )
{
	//Get the normal
	normal = gl_NormalMatrix * gl_Normal;

	//Get the vertex position
	pos = gl_ModelViewMatrix * gl_Vertex;

	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	gl_TexCoord[0] = gl_MultiTexCoord0;
}

Fragment Shader:
// Inputs
varying vec3 normal;
varying vec4 pos;


vec3 lightDir;
vec4 diffuse = vec4(2.0,2.0,2.0,1.0);
vec4 ambient = vec4(0.01,0.01,0.01,1.0);
float distance;  // Distance between light position and fragment
vec4 finalcolor;
float NdotL;     // Dot product of normal and lightDir

float attenuation;

uniform sampler2D testTexture;

void main( void )
{

	normal=normalize(normal);

	// gl_LightSource[0].position is used to access position 
	// of GL_LIGHT0
	lightDir = normalize(vec3(gl_LightSource[0].position-pos));

	distance = length(vec3(gl_LightSource[0].position-pos));

	NdotL = max(dot(normal,lightDir),0.0);

	// Attenuation -- just like the Red Book does it 

	attenuation = 1.00 / (gl_LightSource[0].constantAttenuation +
					gl_LightSource[0].linearAttenuation * distance +
					gl_LightSource[0].quadraticAttenuation * distance * distance);


	finalcolor = texture2D(testTexture, gl_TexCoord[0].xy);  // Fetch the texel color

	finalcolor *= attenuation * (diffuse * NdotL + ambient); // put all the lighting together


	gl_FragColor = finalcolor; 
}
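
By the way, since the fragment shader reads gl_LightSource[0], the light parameters still come from the fixed-function GL_LIGHT0 state, so they have to be set on the application side with glLight*. Roughly like this (the position and attenuation values below are just placeholders, not the tutorial's actual numbers):

	// GL_POSITION is transformed by the current modelview matrix when it is set,
	// so gl_LightSource[0].position reaches the shader in eye space.
	GLfloat lightPos[4] = { 0.0f, 0.0f, 2.0f, 1.0f };   // w = 1.0 -> positional light
	glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

	// The three attenuation factors used by the shader's Red Book formula.
	glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  1.0f);
	glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    0.1f);
	glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.02f);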

So now if I draw a textured quad with the following code
glBegin(GL_QUADS);
   glNormal3f(0.0f, 0.0f, 1.0f);
   glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
   glTexCoord2f(1.0f, 0.0f); glVertex3f(1.0f, -1.0f, 0.0f);
   glTexCoord2f(1.0f, 1.0f); glVertex3f(1.0f, 1.0f, 0.0f);
   glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f);
glEnd();

everything seems to be working OK and I get the following result (the blue sphere shows the light position):

[screenshot: textured quad lit as expected, with the blue sphere marking the light position]

However, if I give the normal like this:

   glNormal3i(0, 0, 1);

I get the following result:

[screenshot: the same quad with the weird lighting]

Finally, if I don't specify a normal at all, I get a black quad, which seems strange to me since the default normal, as far as I know, is (0, 0, 1). If anyone can help me understand what's going on, I'd really appreciate it. Thanks

glNormal3i(0, 0, 1); doesn't equal glNormal3f(0.0f, 0.0f, 1.0f);
The first takes ints, the latter floats (normals are usually specified as floats; using another data type may also cost you some performance).

I believe the default normal is (0, 0, 0), but you can look this up yourself in the state tables at the back of the OpenGL spec.
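
If you'd rather not dig through the spec, you can also just query the current-normal state at runtime; a quick sketch (the helper name is mine, and it needs a current GL context):

#include <stdio.h>
#include <GL/gl.h>

/* Prints the current normal -- i.e. the default value if glNormal
   has never been called on this context. */
static void print_current_normal(void)
{
	GLfloat n[3];
	glGetFloatv(GL_CURRENT_NORMAL, n);
	printf("current normal = (%f, %f, %f)\n", n[0], n[1], n[2]);
}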

@zedzeek: Thanks for your reply. Of course I understand the type difference and the possible speed loss; however, that doesn't explain (to me at least) the weird lighting effect, especially if one reads what MSDN says.

Quoting MSDN:
Quote:
Byte, short, or integer arguments are converted to floating-point format with a linear mapping that maps the most positive representable integer value to 1.0, and the most negative representable integer value to -1.0.

Quote:
The initial value of the current normal is (0,0,1).
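
Reading that first quote literally, the conversion for glNormal3i divides by the largest representable int, so an argument of 1 ends up essentially zero rather than 1.0. A quick check of the arithmetic (just illustrating the quoted rule; the exact GL formula differs slightly, but the conclusion is the same):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	// MSDN's linear mapping: most positive int -> 1.0, most negative -> -1.0.
	printf("glNormal3i z = 1        maps to ~%g\n", 1.0 / INT_MAX);             // ~4.7e-10
	printf("glNormal3i z = INT_MAX  maps to  %g\n", (double)INT_MAX / INT_MAX); // 1.0
	return 0;
}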
