
Member Since 13 Aug 2011
Offline Last Active Nov 13 2014 12:13 AM

Posts I've Made

In Topic: Multiple render passes in GLSL: separate shaders or one shader?

05 March 2013 - 02:14 AM

You can e.g. compile the same vertex shader 3 times and link it to 3 different fragment shaders to make 3 different program objects, and all of this can be done once-only at startup, meaning that calls to glLinkProgram at runtime are unnecessary (just make the appropriate glUseProgram call instead).  Also look at GL_ARB_separate_shader_objects (standard in GL4.1+) for another method of doing this.

That's a good idea, but is it generally understood that there's a performance benefit to this? Say I want to draw to two separate framebuffer objects: when switching programs between them, I also have to send all the uniforms again. If I just use one shader with if(), I only have to update one uniform (whichever pass I'm on).


The if/else way seems much more convenient, but if switching programs can offer a performance benefit, that's worth considering.
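For illustration, the single-program approach being discussed might look like the sketch below (the uniform name u_pass and the branch bodies are hypothetical, not from the original post):

```glsl
// One fragment shader covering both passes; the branch is selected by a
// single uniform instead of a glUseProgram call between passes.
uniform int u_pass;

void main()
{
    if (u_pass == 0) {
        // ... first-pass shading ...
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    } else {
        // ... second-pass shading ...
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
    }
}
```

Whether this beats two pre-linked programs depends on the driver: branching on a uniform is usually cheap on GL3+ hardware, but it keeps both code paths resident in one shader, while separate programs let the compiler optimize each pass independently.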

In Topic: Specular highlights "penetrating" and appearing on back of object!

12 July 2012 - 08:38 AM

The specular term should also be multiplied by the clamped N.L value.

Thanks. I tried the following for the specular calculation and it seemed to fix the problem:

vec3 H = normalize(L + E);
vec4 Ispec = m_spec * pow(max(dot(N,H),0.0), m_shine);

I found some helpful code here:


Interesting that one of the Phong lighting tutorials on the Khronos site leaves that line out:


In Topic: Problem with matrix math for camera

06 July 2012 - 04:45 AM

direction_move should be calculated by using the sin/cos of the camera angle. Looks like you have a cross product calculating it?

I'm calculating the strafe vector using the cross product of the forward (Z) direction and the up (Y) direction. So, the strafe vector is basically 90 degrees from the forward vector. I then add this to the translation matrix when the user presses one of the WSAD move keys, and then multiply the modelview matrix with it before rendering the scene.

I'm using a rotation matrix which contains the rotation angle when the user rotates the camera view left/right on the Y/up axis.

All of this works perfectly for either movement or rotation only, but I can't get the two working together!

In Topic: Problem with matrix math for camera

05 July 2012 - 04:21 AM

Okay, I can grab both the camera's current position vector and direction vector (the direction it's pointing) from the modelview matrix. Assuming I have this data on each loop iteration, how do I get the camera to rotate around its current position as opposed to 0,0,0?

I have been reading a lot of tutorials that say you have to rotate the direction vector by the rotation matrix used to rotate the scene, so that when you apply the translation it goes in the correct direction. I think I understand that part well: I'm able to derive the forward/back direction vector from the modelview matrix, normalize it, and add it to the translation matrix along with the speed value. I do the same for the strafe vector, which is the cross product of the fwd/back vector and the Y direction (currently -1.0).


//calculate strafe vector using cross product of Z (forward) and Y (up),
//written out component-wise
vec4 up = {0.0, -1.0, 0.0, 0.0}; //Y/up direction (currently -1.0)
vec4 direction_strafe;
direction_strafe[x] = direction_move[y] * up[z] - direction_move[z] * up[y];
direction_strafe[y] = direction_move[z] * up[x] - direction_move[x] * up[z];
direction_strafe[z] = direction_move[x] * up[y] - direction_move[y] * up[x];

camera_position[z] += (direction_move[z] * speed); //WS keys (fwd/back)
camera_position[x] += (direction_strafe[x] * speed); //AD keys (strafe)

//construct translation matrix
mat4x4 translate;
mat4x4_translate(translate, camera_position[x], 0.0, camera_position[z]); //Y is 0.0 since we never go up/down

I then multiply the translation matrix by the rotation matrix and drop the result into the modelview matrix. Problem is, when I rotate the scene, it is always rotating around 0,0,0 so when I move around, the camera always rotates around the world's origin and not its own.

In Topic: Problem with matrix math for camera

02 July 2012 - 09:20 PM

"Forward" should modify a direction vector in local coordinates, which can then be transformed by the rotation matrix to generate the appropriate translation vector.

Thanks. I tried that, but now the scene only rotates around the origin (0,0,0). Actually, this gives the same result as pre-multiplying the rotation/translation matrices.

Let me just confirm: the "eye point" is the X,Y (assuming Z-up) coordinates of the camera, looking down, correct? And the "look at" point, or the camera's orientation, is the eye point vector multiplied by the rotation matrix, yes?

I'm sorry I can't be more helpful. I am really having a hard time grasping this.