d h k

OpenGL CG: Simple shader performing per-pixel lighting just won't work [ANY IDEAS? PLEASE?]


Yet another Cg-related thread by me, I know... :) I couldn't find a better-suited forum than this one here at gamedev.net; I tried the official NVIDIA forums, but they don't even have a subcategory for Cg. And the help has been great so far! Anyway, to the problem: I have a very simple pixel shader that is supposed to calculate lighting values for each pixel, but it just renders blackness. I use OpenGL and Cg 2.0. Take a look: lighting.cg
float4 main (	uniform sampler2D texture : TEXUNIT0,
				float2 texture_coordinates : TEXCOORD0,
				uniform float3 light_vector,
				uniform float3 normal_vector ) : COLOR
{
	// look the color of our pixel up from the texture
	float3 texture_color = tex2D ( texture, texture_coordinates ).rgb;

	// calculate diffuse-factor
	float diffuse = dot ( normal_vector, light_vector );

	// return calculated rgb and full alpha
	return float4 ( diffuse * texture_color, 1.0f );
}
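For reference, the kind of setup this assumes looks roughly like the following. This is only a simplified sketch (the actual setup code isn't shown in this post): profile selection and error handling will differ per project, and the global names are just placeholders matching the parameter names used below.

#include <Cg/cg.h>
#include <Cg/cgGL.h>

CGcontext context;
CGprogram program;
CGprofile fragment_profile;
CGparameter light_vector_parameter, normal_vector_parameter;

void init_cg ( void )
// creates the Cg context, compiles lighting.cg and looks up the two uniforms
{
	context = cgCreateContext ( );

	// pick the best fragment profile the driver exposes
	fragment_profile = cgGLGetLatestProfile ( CG_GL_FRAGMENT );
	cgGLSetOptimalOptions ( fragment_profile );

	// compile the shader from source and load it
	program = cgCreateProgramFromFile ( context, CG_SOURCE, "lighting.cg", fragment_profile, "main", NULL );
	cgGLLoadProgram ( program );

	// look the two uniform parameters up by name
	light_vector_parameter = cgGetNamedParameter ( program, "light_vector" );
	normal_vector_parameter = cgGetNamedParameter ( program, "normal_vector" );

	// bind the program and enable the profile for rendering
	cgGLBindProgram ( program );
	cgGLEnableProfile ( fragment_profile );
}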







The shader compiles fine. Let's see how I use it when drawing: main.cpp
void draw ( void )
// draws everything - called every frame
{
	// do unrelated stuff here (ie. camera movement)
	
	// activate texture
	glActiveTextureARB ( GL_TEXTURE0_ARB );
	glEnable ( GL_TEXTURE_2D );
	glBindTexture ( GL_TEXTURE_2D, texture[0].id );

	glBegin ( GL_TRIANGLES );

	for ( int i = 0; i < 2; i++ ) 
	// loop through both triangles in the scene
	{
		// calculate and pass normal of triangle to shader
		cvector triangle_normal = triangle[i].return_normal ( );
		normalize ( triangle_normal );
		cgGLSetParameter3f ( normal_vector_parameter, triangle_normal.x, triangle_normal.y, triangle_normal.z );

		// calculate and pass vector from the middle of the triangle to the light to the shader
		cvector light_vector;
		light_vector.set ( triangle[i].return_midpoint ( ), light_position );
		normalize ( light_vector );
		cgGLSetParameter3f ( light_vector_parameter, light_vector.x, light_vector.y, light_vector.z );

		for ( int j = 0; j < 3; j++ )
		// loop through each vertex of the triangle
		{
			// print debug information
			printf ( "t: %d v: %d lv.x: %f lv.y: %f lv.z: %f nv.x: %f nv.y: %f nv.z: %f\n", i, j, light_vector.x, light_vector.y, light_vector.z, triangle_normal.x, triangle_normal.y, triangle_normal.z );
		
			// bind first texture
			glMultiTexCoord2fARB ( GL_TEXTURE0_ARB, triangle[i].vertex[j].u, triangle[i].vertex[j].v );

			// specify the vertex coordinates
			glVertex3f ( triangle[i].vertex[j].x, triangle[i].vertex[j].y, triangle[i].vertex[j].z );
		}
	}

	glEnd ( );
}







Seems logical, doesn't it? I calculate the normal of the triangle and the vector from the middle of the triangle to the light, normalize both and pass them to the shader. The shader then takes the dot product of the two and multiplies it with the color from the texture. That's the theory, which should work if I'm not mistaken. Since I've been out of school my math has gotten somewhat rusty, and I'm working on refreshing it at the moment. As you probably noticed, I print debug information, and from that I can tell that my math works perfectly fine. I have two triangles in the scene, and the output looks like this:
t: 0 v: 0 lv.x: -0.471759 lv.y: 0.319579 lv.z: 0.821774 nv.x: 0.000000 nv.y: 0.000000 nv.z: -1.000000
t: 0 v: 1 lv.x: -0.471759 lv.y: 0.319579 lv.z: 0.821774 nv.x: 0.000000 nv.y: 0.000000 nv.z: -1.000000
t: 0 v: 2 lv.x: -0.471759 lv.y: 0.319579 lv.z: 0.821774 nv.x: 0.000000 nv.y: 0.000000 nv.z: -1.000000
t: 1 v: 0 lv.x: -0.592559 lv.y: 0.471628 lv.z: 0.653024 nv.x: 0.000000 nv.y: 0.000000 nv.z: -1.000000
t: 1 v: 1 lv.x: -0.592559 lv.y: 0.471628 lv.z: 0.653024 nv.x: 0.000000 nv.y: 0.000000 nv.z: -1.000000
t: 1 v: 2 lv.x: -0.592559 lv.y: 0.471628 lv.z: 0.653024 nv.x: 0.000000 nv.y: 0.000000 nv.z: -1.000000
The values for each vertex of a triangle are the same because I do the calculations for light_vector and the normal only once per triangle. That's intentional - for real per-pixel lighting I would have to move those calculations so they are done for each vertex. I know that, but it's not the problem right now; when I do move them, the debug output shows individual values which are still correct.

I also know the shader itself runs, because when I add a "+ 0.4f" to the last line of the shader, where I return the float4 pixel color, everything displays in gray - similar to an ambient color. So I conclude that the problem lies in the calculation of the float diffuse in the shader. It always seems to be zero and cancels out the texture (if I comment the diffuse term out of the last line, the unlit texture renders correctly, so that part also works - it really has to be the diffuse term). What's wrong with it? Thanks a bunch ahead of time!

[Edited by - d h k on January 12, 2008 7:16:07 AM]

try inverting the light_vector... (-light_vector)
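For what it's worth, plugging the values from the first debug line into the shader's formula gives a negative diffuse term:

diffuse = dot ( ( 0, 0, -1 ), ( -0.471759, 0.319579, 0.821774 ) ) = -0.821774

and a negative (or zero) diffuse multiplied into the texture color comes out black. Flipping the sign of either vector makes it positive.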

Usually you calculate the light vector in the shader and only send the light position to it... then you just take (light position - vertex position) and you get the correct light vector for each vertex.
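Roughly along these lines - just a sketch of the idea: the names light_position and position are assumptions here, and the surface position would have to be made available per vertex somehow (for example through a spare texture coordinate set).

float4 main (	uniform sampler2D texture : TEXUNIT0,
				float2 texture_coordinates : TEXCOORD0,
				float3 position : TEXCOORD1,		// assumed: surface position passed per vertex
				uniform float3 light_position,
				uniform float3 normal_vector ) : COLOR
{
	// look the color of our pixel up from the texture
	float3 texture_color = tex2D ( texture, texture_coordinates ).rgb;

	// build the light vector per fragment: from the surface point towards the light
	float3 light_vector = normalize ( light_position - position );

	// calculate diffuse-factor, clamped so light from behind doesn't go negative
	float diffuse = saturate ( dot ( normalize ( normal_vector ), light_vector ) );

	// return calculated rgb and full alpha
	return float4 ( diffuse * texture_color, 1.0f );
}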

I tried inverting the light_vector (and the normal as well). Doesn't help any.

Thanks for your answer though. Any more help?

Yes, if I return 1.0f for all four elements of the float4 COLOR from the shader, I do get white instead of black. I tried to point that out in my original post: when I add 0.4f to the return value I get a dark gray, and when I take the diffuse factor out of the return expression I get the correct, unlit texture. ;)

Quote:
Original post by d h k
Yes, if I return 1.0f for all four elements of the float4 COLOR from the shader, I do get white instead of black. I tried to point that out in my original post: when I add 0.4f to the return value I get a dark gray, and when I take the diffuse factor out of the return expression I get the correct, unlit texture. ;)


Well, either the light vector or the normal is wrong... try writing their values in manually.

Okay, I tried passing these vectors manually:


light_vector = ( 0, 0, 1 )        normal_vector = ( 0, 0, -1 )


But the triangles stay gray. I also tried all other combinations of +/-1 as the z-coordinate in both vectors (i.e. 0/0/1 and 0/0/1, or 0/0/-1 and 0/0/1, etc.) - it doesn't change a thing.

UPDATE:

Okay, due to the lack of response I have played around some more, and I have come to the conclusion that the shader must somehow be getting wrong normal and light-vector values from my application. The values are fine inside the application and the math is correct, but something has to go wrong when I pass them to the shader.

I know that because I tried two things. First, I set the vectors up manually IN THE SHADER. Now, all of a sudden, it works (the vectors are fixed, of course, but diffuse isn't zero anymore - with the vectors as they are below it just shows the texture, and if I change the z-coordinate of the light vector to something like 0.5f, which would correspond to a light at a steeper angle, it turns darker). If I fix the vectors to the same values in the application, it doesn't work. This means I must be passing my parameters incorrectly somehow.

Second experiment: I kept the normal hard-coded to 0/0/1 in the shader, as below, but passed the light vector by binding it to COLOR0 and setting it with glColor3f ( ) in the application. That way the values don't get scrambled (or whatever is happening to them now), and it works absolutely fine.


// look the color of our pixel up from the texture
float3 texture_color = tex2D ( texture, texture_coordinates ).rgb;

normal_vector.x = 0.0f;
normal_vector.y = 0.0f;
normal_vector.z = 1.0f;

light_vector.x = 0.0f;
light_vector.y = 0.0f;
light_vector.z = 1.0f;

// calculate diffuse-factor
float diffuse = dot ( normal_vector, light_vector );

// return calculated rgb and full alpha
return float4 ( diffuse * texture_color, 1.0f );
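The second experiment amounts to roughly this (only a sketch of the idea; note that COLOR0 is normally clamped to [0, 1], so negative light-vector components would be lost this way):

// lighting.cg - light vector arrives through the primary color instead of a uniform
float4 main (	uniform sampler2D texture : TEXUNIT0,
				float2 texture_coordinates : TEXCOORD0,
				float3 light_vector : COLOR0 ) : COLOR
{
	float3 texture_color = tex2D ( texture, texture_coordinates ).rgb;

	// normal hard-coded for the test
	float3 normal_vector = float3 ( 0.0f, 0.0f, 1.0f );

	float diffuse = dot ( normal_vector, normalize ( light_vector ) );

	return float4 ( diffuse * texture_color, 1.0f );
}

// main.cpp - inside the triangle loop, before the vertices are emitted
glColor3f ( light_vector.x, light_vector.y, light_vector.z );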


But I'm really clueless as to why I cannot pass two float3 vectors from the application to the shader. Any help, anyone?

You can't set uniform parameters within glBegin/glEnd. Either move the setting of those before the glBegin or make them varying.
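For the varying route, the parameters would take an input semantic instead of being uniform, and you would feed them per vertex like an extra set of texture coordinates. Roughly like this, as a sketch only - TEXCOORD1/TEXCOORD2 are arbitrary choices, and the profile needs enough interpolators:

// lighting.cg - the two vectors become per-vertex inputs instead of uniforms
float4 main (	uniform sampler2D texture : TEXUNIT0,
				float2 texture_coordinates : TEXCOORD0,
				float3 light_vector : TEXCOORD1,
				float3 normal_vector : TEXCOORD2 ) : COLOR
{
	float3 texture_color = tex2D ( texture, texture_coordinates ).rgb;
	float diffuse = dot ( normalize ( normal_vector ), normalize ( light_vector ) );
	return float4 ( diffuse * texture_color, 1.0f );
}

// main.cpp - inside glBegin/glEnd, per vertex
glMultiTexCoord3fARB ( GL_TEXTURE1_ARB, light_vector.x, light_vector.y, light_vector.z );
glMultiTexCoord3fARB ( GL_TEXTURE2_ARB, triangle_normal.x, triangle_normal.y, triangle_normal.z );
glMultiTexCoord2fARB ( GL_TEXTURE0_ARB, triangle[i].vertex[j].u, triangle[i].vertex[j].v );
glVertex3f ( triangle[i].vertex[j].x, triangle[i].vertex[j].y, triangle[i].vertex[j].z );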

Did that, still the same.

It looks like this now:

main.cpp

	for ( int i = 0; i < 2; i++ )
	{
		cvector triangle_normal = triangle[i].return_normal ( );
		normalize ( triangle_normal );
		cgGLSetParameter3f ( normal_vector_parameter, triangle_normal.x, triangle_normal.y, triangle_normal.z );

		cvector light_vector;
		light_vector.set ( triangle[i].return_midpoint ( ), light_position );
		normalize ( light_vector );
		cgGLSetParameter3f ( light_vector_parameter, light_vector.x, light_vector.y, light_vector.z );

		glBegin ( GL_TRIANGLES );

		for ( int j = 0; j < 3; j++ )
		{
			// print debug information
			printf ( "t: %d v: %d lv.x: %f lv.y: %f lv.z: %f nv.x: %f nv.y: %f nv.z: %f\n", i, j, light_vector.x, light_vector.y, light_vector.z, triangle_normal.x, triangle_normal.y, triangle_normal.z );

			// bind first texture
			glMultiTexCoord2fARB ( GL_TEXTURE0_ARB, triangle[i].vertex[j].u, triangle[i].vertex[j].v );

			// specify the vertex coordinates
			glVertex3f ( triangle[i].vertex[j].x, triangle[i].vertex[j].y, triangle[i].vertex[j].z );
		}

		glEnd ( );
	}



When I put the cgGLSetParameter calls back in between glBegin and glEnd and change the parameters from uniform to varying in my shader, the shader won't compile anymore. Is it possible that varying variables are only supported for shader model >= 2.0 or something? I'm working on a GeForce 4 Ti.
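In case it matters, a quick way to see which fragment profile the Cg runtime actually picks on this card, and why the varying version refuses to compile, would be something like this (just a sketch, placed next to the existing Cg setup code):

// print which fragment profile Cg selects on this GPU
CGprofile fragment_profile = cgGLGetLatestProfile ( CG_GL_FRAGMENT );
printf ( "fragment profile: %s\n", cgGetProfileString ( fragment_profile ) );

// print the last Cg error, if any, after the failed compile
CGerror error = cgGetError ( );
if ( error != CG_NO_ERROR )
	printf ( "Cg error: %s\n", cgGetErrorString ( error ) );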
