Sepultang

OpenGL simple Cg shader doesn't work


I've implemented a very simple per-vertex lighting shader in Cg just to try some things out, but I can't get it to work properly. I have a cube with my shader applied; everything works until I start spinning the cube, at which point the lit side of the cube spins with it, as if the cube were lit first and then rotated instead of the other way around. I spin the cube using glRotatef, and after that I call cgGLSetStateMatrixParameter to fetch the modelview/projection matrices from OpenGL into my shader. Here's my shader code:
//-----vertex shader-----
struct Indata
{
	float4 pos		: POSITION;
	float3 normal	: NORMAL;
};

struct Outdata
{
	float4 pos		: POSITION;
	float4 col		: COLOR0;
};

Outdata main(Indata IN, uniform float4x4 projectionMatrix, 
			uniform float4x4 modelViewMatrix, 
			uniform float4x4 modelViewMatrixIT, 
			uniform float4 lightPos) //lightPos in object space
{
	Outdata OUT;
	OUT.pos = mul(modelViewMatrix, IN.pos);
	
	// Transform normal from model-space to view-space.
	float3 normalVec = normalize(mul(modelViewMatrixIT, float4(IN.normal, 0.0)).xyz);

	// Transform the light position to view space and compute the
	// normalized vector from the vertex to the light.
	float4 lightPosView = mul(modelViewMatrix, lightPos);
	float3 lightVec = normalize((lightPosView - OUT.pos).xyz);

	// Calculate diffuse component.
	float diffuse = dot(normalVec, lightVec);

	// Use the lit function to compute lighting vector for diffuse color
	float4 lighting = lit(diffuse, 0, 32);
	
	OUT.col.rgb = lighting.y * float3(0.0, 0.0, 1.0);
	OUT.col.a = 1.0f;
	OUT.pos = mul(projectionMatrix, OUT.pos);
	return OUT;
}

//-----fragment shader-----
struct Outdata
{
	float4 col		: COLOR0;
};


Outdata main(Outdata IN)
{
	Outdata OUT;
	OUT.col = IN.col;
	return OUT;
}

Your comments say that the input lightPos is in object space (the same as model space). You transform lightPos, the incoming position, and the incoming normal to view space and compute the lighting. Given the effect you are seeing, your application is probably not updating the object-space value of lightPos as the cube rotates.

Your shader seems to be doing a lot of transformations. My lighting shaders pass in all the parameters in model space (vertex position, vertex normal, light position, light direction, camera position). All lighting is done in model space, so the only transformations to apply are [1] transforming the vertex position to clip-space coordinates using the model-view-projection matrix and [2] transforming "modelPosition - lightPosition" by the model matrix (the model-to-world transformation) to compute attenuation coefficients (this is necessary only if the model matrix has non-unit scaling factors). Naturally, if you store the light position/direction and camera position in world coordinates, you need to apply the geometry's inverse model matrix (the world-to-model transformation) to these and pass the results to the cgGLSet* calls that set the "uniform" values.

When I think about it, my uniform float4 lightPos is not really in object space but in world space. (I had the cube at the origin, so I thought that was OK, but then I rotated it, so the light is no longer correct in model space...)
But to transform my light into model space I need the (inverse) model matrix, not the modelview matrix, right? I don't seem to be able to get that from OpenGL, though.

You do not want to pass to your shader the world light position, the inverse model matrix, and then transform the world light position to model space. This would occur for *each* vertex of the mesh. What you want to do in your own renderer code is transform the world light position to model space *once*, and pass this to OpenGL by setting the uniform constant for the model light position. In my own graphics code, each of my geometry objects stores a Transformation object. This object can be given a world position (world vector) and computes the model position (model vector).

You're right, of course, that doing the world->model transformation per vertex is very inefficient. I just wanted to get the shader up and running, so I wasn't thinking that much about it. Anyway, I now store a transformation matrix with each object in the scene, take its inverse, and apply that to my lights before sending them to the shader (similar to your approach?). And my shader finally works :) - thanks for the help.
