Lighting works in world space but not view space

Started by TheChuckster. 20 comments, last by AverageJoeSSU 14 years, 10 months ago
First of all, it really kills me to be asking (begging?) you guys for help, but I've been at this for nearly a week now with no luck and I've tried nearly everything I can. I don't feel as guilty asking other people for help, though, because I haven't taken a course on linear algebra yet, so I don't have a formal mathematical grounding for what I'm doing at all.

I am having issues implementing lighting in my deferred shading code. Right now I have reduced it down to the simplest possible problem: Lambertian diffuse lighting with a light vector <0.0, 0.0, 1.0> in world space. Basically I'm trying to take the dot product with the normal vector. The normal vectors are encoded as RGB-colored pixels in an offscreen FBO. At first, I stored them in world space (e.g. when you move the camera around, the pixels on the walls stay the same color no matter what -- just like how the light vector is <0.0, 0.0, 1.0> no matter what the orientation of the camera is). This works great. You can move the camera around just fine and the lighting on the environment remains unchanged, as it should be.

Then comes the transition to view space. That's where I am having problems. My lighting code needs to be in view space because I need to use the depth buffer to fetch view-space vertex positions in order to do proper lighting calculations (specular, and light vectors that aren't just arbitrary constants like <0,0,1> but actually point from the light to the lit vertex... know what I'm sayin?). So that's why I _need_ to be in view space for my shader to work. Everything has to be in the same geometric space for the dot product to work out the same way as it did before.

Okay. So my normals work fine in view space (I can say this with reasonable confidence). When I draw the color buffer with the encoded normals for my deferred shader, I can move the camera around and the stuff on the left and the stuff on the right are always the same shades of blue, purple, and green that normal maps should be, no matter how I orient the camera.

First, in my vertex shader:

normal = normalize(gl_NormalMatrix * gl_Normal);
// normal = gl_Normal; // old world space
tangent = normalize(gl_NormalMatrix * gl_Color.xyz);
// tangent = gl_Color.xyz; // old world space
binormal = cross(normal, tangent);

Then in my fragment shader:

vec3 N = normal;
vec3 L = normalize(lightDir);
float lambertTerm = max(0.0, dot(N, L));

What worries me is the light vector. Okay, so I have these light vectors defined as world-space position vectors and I need to get them into view space. Is it correct to say that in order to "transform" these vectors into view space, I need to multiply them by the "view matrix"? That way everything is in the same geometric space, so the dot product should work out correctly. See, this is where I am not sure. I haven't taken linear algebra yet, but based on everything I've read so far, this is how things seem like they should be. But they aren't.

I obtain my matrix from the camera using a gluLookAt-type of function. This camera matrix works fine when I pass it to OpenGL using glMultMatrixf when I am rendering my geometry. I can move the camera around with my arrow keys and the scene shifts around accordingly. However, I need to transform my world-space light vectors into view space, so I multiply them by this same camera matrix:

Vector4 light_pos4 = light_pos * test_camera.view_matrix;
//Vector4 light_pos4 = light_pos; // world space
DeferredShadingPhong.SetUniform("lightView", light_pos4.x, light_pos4.y, light_pos4.z);

This is where things blow up. Even though everything _should_ be in the same geometric space, I still get the wrong results. Lighting is view-dependent. In other words, if I move my camera around the scene, the lighting changes radically depending on what angle the camera is at. Surfaces that were lit before I rotated the camera aren't lit, and vice versa.
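Just to be concrete, here is a minimal sketch of what I think "multiply the light position by the view matrix" should look like. It uses GLM purely for illustration (my engine has its own Vector4/Matrix4x4 classes) and made-up numbers, so treat it as a sketch of the idea rather than my actual code:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    // Hypothetical camera; the real one comes from my gluLookAt-style function.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f),   // eye
                                 glm::vec3(0.0f, 0.0f, 0.0f),   // target
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up

    // Point light position in world space; w = 1 so the translation applies.
    glm::vec4 lightWorld(3.0f, 4.0f, 0.0f, 1.0f);
    glm::vec4 lightView = view * lightWorld;   // column-vector convention: M * v

    // A pure direction like <0,0,1>; w = 0 so only the rotation applies.
    glm::vec4 dirWorld(0.0f, 0.0f, 1.0f, 0.0f);
    glm::vec4 dirView = view * dirWorld;

    std::printf("light (view): %f %f %f\n", lightView.x, lightView.y, lightView.z);
    std::printf("dir   (view): %f %f %f\n", dirView.x, dirView.y, dirView.z);
    return 0;
}

The one part I'm fairly sure of is the w component: 1 for a point-light position so the translation applies, 0 for a pure direction so only the rotation does.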
---2x John Carmack
Quote:Original post by TheChuckster
What worries me is the light vector. Okay, so I have these light vectors defined as world-space position vectors and I need to get them into view space. Is it correct to say that in order to "transform" these vectors into view space, I need to multiply them by the "view matrix"? That way everything is in the same geometric space, so the dot product should work out correctly. See, this is where I am not sure. I haven't taken linear algebra yet, but based on everything I've read so far, this is how things seem like they should be. But they aren't.

I obtain my matrix from the camera using a gluLookAt-type of function. This camera matrix works fine when I pass it to OpenGL using glMultMatrixf when I am rendering my geometry. I can move the camera around with my arrow keys and the scene shifts around accordingly. However, I need to transform my world-space light vectors into view space, so I multiply them by this same camera matrix:

Vector4 light_pos4 = light_pos * test_camera.view_matrix;
//Vector4 light_pos4 = light_pos; // world space
DeferredShadingPhong.SetUniform("lightView", light_pos4.x, light_pos4.y, light_pos4.z);

This is where things blow up. Even though everything _should_ be in the same geometric space, I still get the wrong results. Lighting is view-dependent. In other words, if I move my camera around the scene, the lighting changes radically depending on what angle the camera is at. Surfaces that were lit before I rotated the camera aren't lit, and vice versa.


I am having the same problem... although I figured out (literally like an hour ago) that I am multiplying my light position by the modelview instead of just the view... which is causing those results. I am going to try what you are doing above when I get home; hopefully that will fix it... if not, I will be in the same boat as you.

If you look at my profile you can see my previous posts where I posted my code. Notice in my point-light shader what I mean about multiplying the light position by the modelview.
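Roughly what I mean, as a sketch (GLM types only for illustration, and the function name is just a placeholder): the modelview already has the object's model transform baked in, so pushing a world-space light through it lands it in the wrong place.

#include <glm/glm.hpp>

// Sketch: why the full modelview is the wrong matrix for a world-space light.
glm::vec4 lightToViewSpace(const glm::mat4& view, const glm::mat4& model,
                           const glm::vec4& lightWorld)
{
    glm::mat4 modelview = view * model;        // what the GL stack holds while drawing an object

    glm::vec4 wrong = modelview * lightWorld;  // treats the light as if it were in object space
    glm::vec4 right = view * lightWorld;       // a world-space light only needs the view part
    (void)wrong;                               // kept only to show the contrast
    return right;
}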


------------------------------

redwoodpixel.com

Yeah. I am just using the view matrix here. I found a gluLookAt implementation on here that generates a 4x4 transformation matrix instead of just taking care of everything behind the scenes and that's what I'm using to get my view matrix. Let me know if you figure anything out, and likewise, I'll let you know if I get anywhere myself.
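For reference, the matrix I'd expect a gluLookAt-style function to build looks roughly like this (a sketch using GLM types and the standard right-handed lookAt layout; my actual implementation may order things differently, which could well be part of the problem):

#include <glm/glm.hpp>

// Standard right-handed lookAt: the rows of the rotation are the camera basis
// vectors, and the last column is the eye position expressed in that basis.
glm::mat4 buildLookAt(const glm::vec3& eye, const glm::vec3& center, const glm::vec3& up)
{
    glm::vec3 f = glm::normalize(center - eye);      // forward
    glm::vec3 s = glm::normalize(glm::cross(f, up)); // side (right)
    glm::vec3 u = glm::cross(s, f);                  // recomputed up

    glm::mat4 m(1.0f);                 // GLM is column-major: m[column][row]
    m[0][0] =  s.x; m[1][0] =  s.y; m[2][0] =  s.z;
    m[0][1] =  u.x; m[1][1] =  u.y; m[2][1] =  u.z;
    m[0][2] = -f.x; m[1][2] = -f.y; m[2][2] = -f.z;
    m[3][0] = -glm::dot(s, eye);
    m[3][1] = -glm::dot(u, eye);
    m[3][2] =  glm::dot(f, eye);
    return m;
}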
---2x John Carmack
Cool. I imagine the problem lies in the fact that all of these assets are being deferred and tossed around, and they all need to be in the same space, more or less. I'll check back once I implement my changes.

------------------------------

redwoodpixel.com

I did some experimenting and it seems like I can't just multiply my vectors by this matrix.

glPushMatrix();
glLoadIdentity();
Vector4 light_pos = scene_lights[22]->GetPosition();

Matrix4x4 the_view = test_camera.view_matrix;
Vector4 light_pos4 = light_pos * the_view;
Vector4 ray_origin = Vector4(0,0,0,1) * the_view;

//Vector4 light_pos4 = light_pos;
DebugVector(Vector3(light_pos4.x, light_pos4.y, light_pos4.z), Vector3(ray_origin.x, ray_origin.y, ray_origin.z)); // draw the manually transformed position
glPopMatrix();

As you can see here, I am loading the identity matrix into the modelview matrix and attempting to bypass OpenGL's transformation matrix stack by doing the transformation myself and then drawing the vector. For reference, the DebugVector() function just draws a line on the screen from the ray origin pointing in the direction of the vector passed to it. I'm just trying to do some visual debugging here. My aim is to get the same result as this code:

glPushMatrix();
Vector4 light_pos = scene_lights[22]->GetPosition();

Matrix4x4 the_view = test_camera.view_matrix;
Vector4 light_pos4 = light_pos;
Vector4 ray_origin = Vector4(0,0,0,1);

//Vector4 light_pos4 = light_pos;
DebugVector(Vector3(light_pos.x, light_pos.y, light_pos.z), Vector3(ray_origin.x, ray_origin.y, ray_origin.z));
glPopMatrix();

The manually transformed vector isn't in the same spot at all, so I am definitely misunderstanding some linear algebra concept here. From a mathematical perspective, how does the transformation stack transform the geometry passed to it? Why doesn't multiplying the vectors by the transformation matrix myself produce the same result? I uploaded a picture of what should be happening at http://thechuckster.homelinux.com/~chuck/vector_test.png -- this is from the second snippet. The first snippet should ideally produce the same vector, but it's not even close.
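For what it's worth, this is what I understand the fixed-function stack to be doing, and what I'm trying to reproduce by hand (just a sketch; glGetFloatv hands back the current modelview in column-major order, which is the layout glm::make_mat4 expects, and GLM is only used here for illustration):

#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Sketch: reproduce what the fixed-function pipeline does to a vertex.
glm::vec4 transformLikeTheStack(const glm::vec4& objSpacePoint)
{
    float raw[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, raw);      // column-major, 16 floats
    glm::mat4 modelview = glm::make_mat4(raw);
    return modelview * objSpacePoint;           // eye-space = ModelView * object-space
}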
---2x John Carmack
I've done some more experimentation. If I take the transpose of the view matrix, the vector actually shows up on the screen with the first snippet, but the transformations are still backwards. Before, there wasn't even a vector showing up (maybe it was always behind the camera).

Ideas: Am I using the wrong matrix? Am I doing the wrong matrix operation? Am I storing the matrix in the wrong format? Am I misunderstanding the way OpenGL matrix stacks work? Am I actually getting the right vector and not even realizing it?
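One identity I keep coming back to while testing these ideas: multiplying a row vector on the left by M is the same as multiplying a column vector by the transpose of M, so a storage-order mistake and a multiplication-order mistake look exactly alike (a sketch with GLM types, just for illustration):

#include <cassert>
#include <glm/glm.hpp>

// v-as-row-vector times M  ==  transpose(M) times v-as-column-vector.
// A matrix stored transposed (row-major vs column-major) that is also
// multiplied from the "wrong" side therefore comes out looking correct.
void rowVsColumnCheck(const glm::mat4& M, const glm::vec4& v)
{
    glm::vec4 asRow    = v * M;                  // row-vector convention
    glm::vec4 asColumn = glm::transpose(M) * v;  // column-vector convention
    assert(glm::length(asRow - asColumn) < 1e-4f);
}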
---2x John Carmack
Well... I don't know what you are doing in your shaders, but theoretically you would be OK if you passed your matrices as uniforms... you have to if you are bypassing OpenGL's matrix stack. I don't, however; I pass my matrices into the matrix stack. The first test I did was to print all of my matrices at each render call.

The object's matrix, the modelview, and the projection (I have a directional light that uses an ortho full-screen quad function, so I wanted to make sure my projection wasn't getting changed). Then I would move my camera back and forth to see how they were changing. Now... I notice you render some output, so try this: render your light position values (the supposed view-space values) to the screen and then move the camera around... if they really are in view space, those values should be changing like crazy; if they aren't in view space, they shouldn't change at all.
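Something like this is all I mean (just a sketch; printing to the console per frame is enough, and the GLM types are only stand-ins for whatever your engine uses):

#include <cstdio>
#include <glm/glm.hpp>

// Per-frame sanity check: if the transformed position really is in view space
// it should change whenever the camera moves; the world-space value should not.
void debugLight(const glm::mat4& viewMatrix, const glm::vec4& lightWorld)
{
    glm::vec4 lightView = viewMatrix * lightWorld;
    std::printf("world: % .3f % .3f % .3f | view: % .3f % .3f % .3f\n",
                lightWorld.x, lightWorld.y, lightWorld.z,
                lightView.x,  lightView.y,  lightView.z);
}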

------------------------------

redwoodpixel.com

That's the thing. They are changing, which leads me to think the light vector is in view space. However, it's not the same vector. The code in my previous post isn't shader code; it is C++ straight out of my 3D engine. This is really frustrating me, because the fact that this bug is even here in the first place suggests some fundamental lack of understanding of 3D programming on my part.
---2x John Carmack
Have you tried crunching the numbers yourself for one pass through?

Capture them for a frame and see if they make sense... post them here too.

------------------------------

redwoodpixel.com

Here's what's going on from a numerical perspective (screenshots of the console output, along with a screenshot of what's going on visually):

http://thechuckster.homelinux.com/~chuck/numbers.png

I've tried just about every combination of matrix operations I could (transposing, inverting, pre-multiplying, post-multiplying) but none of them yielded the correct results.
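In case it helps, this is roughly how I've been dumping the combinations side by side so I can check them against a hand-computed value for a single frame (a sketch with GLM types for illustration; my engine's Matrix4x4/Vector4 classes are what I actually use):

#include <cstdio>
#include <glm/glm.hpp>

// Sketch: print every candidate transform of the light position so each one
// can be compared against the value worked out by hand for one frame.
void dumpCandidates(const glm::mat4& V, const glm::vec4& p)
{
    const glm::vec4 candidates[] = {
        V * p,                                // column-vector, world -> view
        p * V,                                // row-vector (same as transpose(V) * p)
        glm::inverse(V) * p,                  // view -> world (the opposite direction)
        glm::transpose(glm::inverse(V)) * p,  // normal-matrix style
    };
    const char* names[] = { "V*p", "p*V", "inv(V)*p", "inv(V)^T*p" };
    for (int i = 0; i < 4; ++i)
        std::printf("%-11s = % .3f % .3f % .3f\n", names[i],
                    candidates[i].x, candidates[i].y, candidates[i].z);
}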
---2x John Carmack
