About TheChuckster

  1. I'm looking for a tool (preferably open source) that can take an arbitrary mesh (I use OBJ) containing a 3D model of a game level, build a BSP tree from it, and compute the PVS visibility sets for the leaves of the BSP tree. Any suggestions? I'm also open to approaches other than BSP trees for visibility culling. Would you recommend I look at portals?
  2. How to look forward along the z-axis

     I think your basis vectors need to resemble the identity matrix, or you will translate in the wrong direction. It depends on your view matrix, though.
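To make the basis-vector point concrete: an OpenGL-style view matrix carries the camera's right, up, and back (-forward) vectors in the rows of its rotation part, so a camera aligned with the world axes yields the identity, and "forward" is -z. A minimal sketch (hypothetical helper types, not from the thread):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Build the 3x3 rotation part of a view matrix from camera basis vectors.
// OpenGL looks down -z, so the third row is the *back* vector (-forward).
void viewRotation(Vec3 right, Vec3 up, Vec3 forward, float out[9]) {
    out[0] = right.x;    out[1] = right.y;    out[2] = right.z;
    out[3] = up.x;       out[4] = up.y;       out[5] = up.z;
    out[6] = -forward.x; out[7] = -forward.y; out[8] = -forward.z;
}
```

With right = +x, up = +y, forward = -z this produces the identity, which is why a camera that "resembles the identity matrix" translates the way you expect.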
  3. This guy has a deferred shading demo that uses only diffuse lighting and does its calculations in world space, but he also transforms the light vector into view space in order to determine the light's scissor rectangle in screen space. I'm going to pick apart his code and reverse engineer how he transforms the light vector, and see if I can figure anything out. The funny thing is that my GPU gets better performance from forward shading than from deferred shading in this demo. Then again, he is using pbuffers and deprecated NV shader extensions, along with expensive glCopyTexSubImage2D operations.
  4. I noticed another thing. Right now I am pre-multiplying (no inverses, no transposes), and the vector always points in the same _direction_ relative to the screen regardless of which way my camera is pointing instead of always pointing at the light. If I post-multiply by the transpose, I get the same behavior, _but_ if I post-multiply by the inverse, the vector is completely off-screen. This is weird, since in an orthogonal transformation matrix, shouldn't the inverse equal the transpose? I dotted the basis vectors with each other and throughout testing (barrel rolling, pitching, etc.), the inner products are always zero. Weird. I also noticed that the magnitude of my light vector is changing as a result of the transformation. However, I am always using unit basis vectors in my transformation matrix. Why is there a scaling operation going on? I am only translating and rotating. I just don't get it. Passing the exact same matrix to OpenGL gives me the correct behavior, but when I try to do the exact same matrix operation in my code, things blow up. There's only one thing that could be going wrong: the operation itself. What am I doing differently than OpenGL to my matrix?
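The transpose-equals-inverse puzzle above has a concrete answer: only the 3x3 rotation block of the matrix is orthogonal. Once the fourth column carries a translation, the full 4x4 is no longer orthogonal, so its transpose stops being its inverse; the correct rigid-body inverse is R^T for the rotation and -R^T·t for the translation. A small sketch (hypothetical column-major helpers, not the engine's actual code):

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4: element (row r, col k) lives at index k*4 + r.

// c = a * b
void mul4(const float a[16], const float b[16], float c[16]) {
    for (int k = 0; k < 4; ++k)
        for (int r = 0; r < 4; ++r) {
            float s = 0.0f;
            for (int i = 0; i < 4; ++i) s += a[i*4 + r] * b[k*4 + i];
            c[k*4 + r] = s;
        }
}

// Rigid-body inverse: rotation R becomes R^T, translation t becomes -R^T * t.
void rigidInverse(const float m[16], float inv[16]) {
    for (int r = 0; r < 3; ++r)
        for (int k = 0; k < 3; ++k)
            inv[k*4 + r] = m[r*4 + k];   // transpose the rotation block
    for (int r = 0; r < 3; ++r)          // -R^T * t
        inv[12 + r] = -(inv[0+r]*m[12] + inv[4+r]*m[13] + inv[8+r]*m[14]);
    inv[3] = inv[7] = inv[11] = 0.0f;
    inv[15] = 1.0f;
}
```

For a matrix that rotates and translates, rigidInverse(m) * m gives the identity while transpose(m) * m does not, which matches the behavior described in the post: the basis vectors can be perfectly orthogonal and the full 4x4 transpose will still fail once there is any translation.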
  5. Maybe it's an issue with how we implement the barrel roll. Maybe our basis vectors aren't completely orthogonal.
  6. I am at the exact same stage as you are. Barrel rolls break the camera for me, too. That was the closest I could get to having correct lighting.
  7. Yeah. The same incorrect light vector rotation phenomenon occurs when I grab the matrix generated by gluLookAt using this code:

     Vector3 Direction = Position + ViewDir; // global to keep track of camera direction
     gluLookAt(Position.x, Position.y, Position.z,
               Direction.x, Direction.y, Direction.z,
               UpVector.x, UpVector.y, UpVector.z);
     glMatrixMode(GL_MODELVIEW);
     glPushMatrix();
     glLoadIdentity();
     gluLookAt(Position.x, Position.y, Position.z,
               Direction.x, Direction.y, Direction.z,
               UpVector.x, UpVector.y, UpVector.z);
     float matrix_data[16];
     glGetFloatv(GL_MODELVIEW_MATRIX, &matrix_data[0]);
     view_matrix = Matrix4x4(&matrix_data[0]);
     view_matrix = view_matrix.Transpose();
     view_matrix.Dump();
     glPopMatrix();
  8. That's because I am using a different right-handed coordinate system, so the "basis vectors" of my camera transformation matrix (up, right, and view) are different: I have the z-axis pointing up. I changed them back to what they should be (giving me the identity matrix), and it didn't change the fact that the rotations screw up the light vector.
  9. Here's what's going on from a numerical perspective (screenshots of the console output, along with a screenshot of what's happening visually). I've tried just about every combination of matrix operations I could (transposing, inverting, pre-multiplying, post-multiplying), but none of them yielded the correct results.
  10. That's the thing. They are changing, which leads me to think the light vector is in view space. However, it's not the same vector. The code I put in my previous post isn't shader code; it's C++ straight out of my 3D engine. This is really frustrating me, because the fact that this bug exists in the first place suggests some fundamental lack of understanding of 3D programming on my part.
  11. I've done some more experimentation. If I take the transpose of the view matrix, the actual vector shows up on the screen if I use the first fragment, but the transformations are still backwards. Before, there wasn't even a vector showing up (maybe it was always behind the camera). Ideas: Am I using the wrong matrix? Am I doing the wrong matrix operation? Am I storing the matrix in the wrong format? Am I misunderstanding the way OpenGL matrix stacks work? Am I actually getting the right vector and not even realizing it?
  12. I did some experimenting, and it seems like I can't just multiply my vectors by this matrix.

     glPushMatrix();
     glLoadIdentity();
     Vector4 light_pos = scene_lights[22]->GetPosition();
     Matrix4x4 the_view = test_camera.view_matrix;
     Vector4 light_pos4 = light_pos * the_view;
     Vector4 ray_origin = Vector4(0, 0, 0, 1) * the_view;
     //Vector4 light_pos4 = light_pos;
     DebugVector(Vector3(light_pos.x, light_pos.y, light_pos.z),
                 Vector3(ray_origin.x, ray_origin.y, ray_origin.z));
     glPopMatrix();

     As you can see, I am loading the identity matrix into the modelview matrix and attempting to bypass OpenGL's transformation stack by doing the transformation myself and drawing the vector. For reference, the DebugVector() function just draws a line on the screen from the ray origin pointing in the direction of the vector passed to it; I'm doing some visual debugging here. My aim is to get the same result as this code:

     glPushMatrix();
     Vector4 light_pos = scene_lights[22]->GetPosition();
     Matrix4x4 the_view = test_camera.view_matrix;
     Vector4 light_pos4 = light_pos;
     Vector4 ray_origin = Vector4(0, 0, 0, 1);
     //Vector4 light_pos4 = light_pos;
     DebugVector(Vector3(light_pos.x, light_pos.y, light_pos.z),
                 Vector3(ray_origin.x, ray_origin.y, ray_origin.z));
     glPopMatrix();

     The manually transformed vector isn't in the same spot at all, so I am definitely misunderstanding some linear algebra concept here. From a mathematical perspective, how does the transformation stack transform geometry passed to it? Why doesn't multiplying the vectors by the transformation matrix myself produce the same result? I uploaded a picture of what should be happening; it's from the second snippet. The first snippet should ideally produce the same vector, but it's not even close.
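For what it's worth, a mismatch between a hand-rolled multiply and OpenGL's stack is usually a convention issue rather than a linear algebra mystery. OpenGL stores matrices column-major and multiplies column vectors on the right (v' = M * v); computing "v * M" over the same storage silently applies M^T instead, which is one classic way rotations end up looking backwards. A sketch under that assumption (hypothetical helpers, not the engine's Matrix4x4 class):

```cpp
#include <cassert>
#include <cmath>

// Column-major storage: element (row r, col c) lives at m[c*4 + r].

// out = M * v  -- the convention OpenGL's fixed-function stack uses.
void matVec(const float m[16], const float v[4], float out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[0+r]*v[0] + m[4+r]*v[1] + m[8+r]*v[2] + m[12+r]*v[3];
}

// out = v * M over the same storage -- equivalent to M^T * v.
void vecMat(const float v[4], const float m[16], float out[4]) {
    for (int c = 0; c < 4; ++c)
        out[c] = v[0]*m[c*4+0] + v[1]*m[c*4+1] + v[2]*m[c*4+2] + v[3]*m[c*4+3];
}
```

Running both on the same rotation-plus-translation matrix gives visibly different vectors, so it's worth checking which side the engine's Vector4 * Matrix4x4 operator multiplies on and how the Matrix4x4 constructor interprets the 16 floats from glGetFloatv.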
  13. Yeah. I am just using the view matrix here. I found a gluLookAt implementation on here that generates a 4x4 transformation matrix instead of just taking care of everything behind the scenes and that's what I'm using to get my view matrix. Let me know if you figure anything out, and likewise, I'll let you know if I get anywhere myself.
  14. First of all, it really kills me to be asking (begging?) you guys for help, but I've been at this for nearly a week now with no luck, and I've tried nearly everything I can. I don't feel too guilty asking for help, though, because I haven't taken a course on linear algebra yet, so I don't have a formal mathematical grounding in what I'm doing.

     I am having issues implementing lighting in my deferred shading code. Right now I have reduced it to the simplest possible problem: Lambertian diffuse lighting with a light vector <0.0, 0.0, 1.0> in world space. Basically, I'm taking the dot product of that vector with the normal vector. The normal vectors are encoded as RGB pixels in an offscreen FBO. At first, I stored them in world space (when you move the camera around, the pixels on the walls stay the same color no matter what, just like the light vector is <0.0, 0.0, 1.0> regardless of the camera's orientation). This works great: you can move the camera freely and the lighting on the environment remains unchanged, as it should.

     The transition to view space is where I'm having problems. My lighting code needs to be in view space because I need to use the depth buffer to fetch view-space vertex positions for proper lighting calculations (specular terms, and light vectors that actually point from the light to the lit vertex rather than being arbitrary constants like <0,0,1>... know what I'm sayin'?). So I _need_ to be in view space for my shader to work: everything has to be in the same geometric space for the dot product to work out the same way it did before.

     Okay. So my normals work fine in view space (I can say this with reasonable confidence). When I draw the color buffer with the encoded normals for my deferred shader, I can move the camera around, and the surfaces on the left and right are always the same shades of blue, purple, and green that normal maps should be, no matter how I orient the camera. First, in my vertex shader:

     normal = normalize(gl_NormalMatrix * gl_Normal);
     // normal = gl_Normal; // old world space
     tangent = normalize(gl_NormalMatrix * tangent_attrib); // tangent attribute name lost in the original post
     // tangent = tangent_attrib; // old world space
     binormal = cross(normal, tangent);

     Then in my fragment shader:

     vec3 N = normal;
     vec3 L = normalize(lightDir);
     float lambertTerm = max(0.0, dot(N, L));

     What worries me is the light vector. I have these light vectors defined as world-space position vectors, and I need to get them into view space. Is it correct to say that in order to transform these vectors into view space, I need to multiply them by the view matrix? That way everything is in the same geometric space, so the dot product should work out correctly. This is where I am not sure; I haven't had linear algebra yet, but based on everything I've read so far, this is how things seem like they should work. But they don't.

     I obtain my matrix from the camera using a gluLookAt-type function. This camera matrix works fine when I pass it to OpenGL with glMultMatrixf while rendering my geometry: I can move the camera around with my arrow keys and the scene shifts accordingly. However, I need to transform my world-space light vectors into view space, so I multiply them by this same camera matrix:

     Vector4 light_pos4 = light_pos * test_camera.view_matrix;
     //Vector4 light_pos4 = light_pos; // world space
     DeferredShadingPhong.SetUniform("lightView", light_pos4.x, light_pos4.y, light_pos4.z);

     This is where things blow up. Even though everything _should_ be in the same geometric space, I still get the wrong results: lighting is view-dependent. In other words, if I move the camera around my scene, the lighting changes radically depending on the camera angle. Surfaces that were lit before I rotated the camera aren't lit anymore, and vice versa.
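A hedged sketch of the transform being described (my own helper, assuming column-major storage and column vectors, not the engine's actual code): the world-space light has to go through the same modelview matrix that gl_NormalMatrix was derived from, with w = 1 for a positional light (so it picks up the camera translation) and w = 0 for a pure direction (so it only rotates).

```cpp
#include <cassert>

// out = view * world, column-major: element (row r, col c) is at view[c*4 + r].
// Use w = 1.0 for a light *position*, w = 0.0 for a light *direction*.
void toViewSpace(const float view[16], const float world[4], float out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = view[0+r]*world[0] + view[4+r]*world[1]
               + view[8+r]*world[2] + view[12+r]*world[3];
}
```

With a camera sitting at (0,0,5) looking down -z, the view matrix is a translation by (0,0,-5): a light at the world origin lands at view-space (0,0,-5) as expected, while a direction vector is unaffected by the translation. If the lighting changes as the camera rotates, the usual suspects are the wrong w, the wrong multiplication side, or a row/column-major mix-up.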
  15. I am using a GLSL shader for per-pixel lighting and bump mapping, and I am also rendering stencil shadows. However, I am stuck on figuring out the best way to handle multiple lights. The only approach I can think of is one light per pass with additive blending, because the stencil shadows must mask out only the light source that casts them (if that makes any sense), while all of the other light sources must be able to pass through the shadow for it to look realistic. Otherwise I'd render eight lights per pass...

     Also, is there a way to reuse geometry/depth calculations? Right now, the only way I can eliminate Z-fighting is with the polygon offset feature, and it's really inefficient to redo these calculations for each light in the scene.

     Which brings me to my next point: optimization. What are some good ways to optimize this? I'd like to squeeze as many light sources per pass as possible and reuse as many GPU computations as I can. I've heard of using a scissor test for each light source, so only the pixels lit by that particular light are rendered each pass. Frustum culling is also a must.
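The per-light scissor idea can be sketched roughly as follows. This is my own simplified construction (hypothetical function, conservative axis-aligned box projection), not a known-correct tight bound; it assumes the light's sphere of influence is entirely in front of the camera.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Rect { int x, y, w, h; };

// Conservative screen-space scissor rectangle for a point light at
// view-space position (lx, ly, lz) with the given radius of influence.
// 'focal' is cot(fovy/2); view space looks down -z, so lz < 0.
Rect lightScissor(float lx, float ly, float lz, float radius,
                  float focal, float aspect, int vpW, int vpH) {
    auto toPixelX = [&](float x, float z) {
        float ndc = (focal / aspect) * x / -z;     // perspective divide
        return (ndc * 0.5f + 0.5f) * vpW;          // NDC [-1,1] -> pixels
    };
    auto toPixelY = [&](float y, float z) {
        float ndc = focal * y / -z;
        return (ndc * 0.5f + 0.5f) * vpH;
    };
    // Project an axis-aligned box around the light (conservative).
    float x0 = toPixelX(lx - radius, lz), x1 = toPixelX(lx + radius, lz);
    float y0 = toPixelY(ly - radius, lz), y1 = toPixelY(ly + radius, lz);
    Rect r;
    r.x = std::max(0, (int)std::floor(x0));
    r.y = std::max(0, (int)std::floor(y0));
    r.w = std::min(vpW, (int)std::ceil(x1)) - r.x;
    r.h = std::min(vpH, (int)std::ceil(y1)) - r.y;
    return r;   // pass to glScissor(r.x, r.y, r.w, r.h) with GL_SCISSOR_TEST on
}
```

In the one-light-per-pass scheme, each additive pass would enable the scissor test with this rectangle so fragments the light cannot reach are never shaded; depth work can be reused across passes by laying down depth once and drawing the lighting passes with an equal depth test instead of polygon offset.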