GLSL light in world coordinates

Started by
9 comments, last by BloodLust666 13 years, 12 months ago
How do I convert a light's position into world coordinates so that when I move the camera around, the light's direction doesn't move with it? So far I have:

varying vec2 texCoord;
varying vec3 lightPosition;
uniform vec3 lightPos;

void main (void)
{
	texCoord      = gl_MultiTexCoord0.xy;
	// note: w = 0.0 transforms lightPos as a direction; use 1.0 to transform it as a position
	lightPosition = vec3(gl_ModelViewMatrix * vec4(lightPos, 0.0));
	gl_Position   = ftransform();
}



varying vec2 texCoord;
varying vec3 lightPosition;
uniform float lightAmb;

void main (void)
{
	//... (normal and color are computed here, elided)
	vec3 lightDir = normalize(lightPosition);
	float Idiff = max(dot(normal, lightDir), lightAmb);

	gl_FragColor = color * Idiff;
}

-------------------------
Unless specified otherwise, my questions pertain to: Windows platform (with the mindset to keep things as multi-platform as possible), C++, Visual Studio 2008, OpenGL with SFML
On the CPU, when you put data into the lightPos uniform, what space/coordinate-system is it in?
If the data is originally in world-space, then you don't have to do any conversion in your shader.
That didn't work either... I originally had the lightPos uniform in the fragment part. Let me step back a bit: what I have is a deferred shader, and I'm reading from a texture what the position and normals are at each pixel. Is there a certain format I have to put the light into in order for it to interact correctly with these coordinates?
This is part of a vertex shader I use with both options. I just remove or add the comment.

void main()
{
	vec3 LightPosition;
	LightPosition.x = 12.0;
	LightPosition.y = 9.0;
	LightPosition.z = 50.0;

	// to have the light fixed w.r.t. the object:
	// LightPosition = vec3(gl_NormalMatrix * LightPosition);

	...

Quote:Original post by BloodLust666
Let me step back a bit, what I have is a deferred shader and I'm read[ing] from a texture what the position and normals are at each pixel.
What space/coordinate-system are the positions and normals in?
I'm pretty sure they're in world coordinates.

In the vertex shader for the geometry pass

normal = gl_NormalMatrix * gl_Normal;
position = (gl_ModelViewMatrix * gl_Vertex).xyz;

then when that gets interpolated in the frag shader, I just store that in the buffer.
OK, you haven't said which space lightPosition is in when it's set on the CPU side, so I'm going to assume world space.
You're converting the position/normal from model-space to view-space, by the looks of things.

Your original code will transform the light from model-space to view-space as well. Which should work.

On the CPU side, you're probably not setting up the GL_MODELVIEW matrix correctly before drawing the light geometry. This matrix is usually a concatenation of the model-to-world matrix (per object) and the world-to-view matrix (per camera).
If the light is defined in world-space, then its model-to-world matrix is identity, so GL_MODELVIEW should just be set to your camera (world-to-view) matrix only.

[EDIT]
Actually, in your original code you're using ftransform, which is the same as gl_ProjectionMatrix * gl_ModelViewMatrix * incomingVertex...
...which means the GL_MODELVIEW matrix is probably set up to transform your light geometry correctly, and you won't also be able to use it to transform your light position.

It's probably best to just do the light-position transform on the CPU side.
Before you set the lightPos uniform (CPU side), transform this position using your camera (world-to-view) matrix so that lightPos is already in view space.
Well, the thing about my lights is that when I'm rendering with them, I'm rendering a quad to a texture and reading the position and normal data from textures too, so all interpolation happens across the quad, which does no good. I need to calculate the light direction on the fragment side, because that's where I have the true normals and positions for every pixel.
Wait, the position of the light shouldn't change per-pixel, it should be uniform (constant for the whole draw call).

To get the direction from the light to each pixel you should be able to use:
position = get data from GBuffer
lightDir = normalize(position - lightPosition);
Shouldn't I multiply lightPosition by the normal matrix? I just noticed that when I render the normal buffer and move the camera around, all the normals pointing right are red, the ones pointing up are green, and the ones pointing towards the camera are blue. When I rotate the camera, those colors change according to the new orientation of the camera. Doesn't that mean the light position needs to do the same?

Maybe I'm confusing the two: the position buffer stores all the positions in world coordinates, but the normal buffer has all the normals in view-space coordinates... Am I getting that right? If so, how do I fix my light to match?
