About synthetix
1. I have a simple scene rendered in OpenGL that contains a grid plane, the same as you'd find in any 3D modeling app. I set the initial view using a lookat matrix pointed at the center of the scene (0,0). What I want to do is rotate the entire scene, ground plane included, around 0,0 without affecting the Z orientation of the "camera." As it is now, I detect the mouse position and use it to transform the scene via a rotation matrix. However, depending on the angle, the plane cants fore/aft and won't stay perfectly level. I'm sure I'm missing some math to correct the angles based on the eye position. I did some searching and found references to atan2(), which appears to be used to correct the angles, but I'm not sure how to implement it. Here's what I have:

    //set up lookat matrix once when app initializes to give some nice perspective
    glMatrixMode(GL_MODELVIEW);
    glMultMatrixf(lookat);

    //rotate view around the scene
    //x,y,z struct members are from the mouse position
    glRotatef(userinput.rot.angle, userinput.rot.x, userinput.rot.y, userinput.rot.z);
2. I'd like to take a grayscale image that contains white blotchy areas and identify the centers of those areas. For example, if there's an area that contains pixel values above 240, I want to be able to get the x,y position of the middle of that area. These images explain pretty much what I'm trying to accomplish:

Image containing white areas to be identified: [attachment=19055:track01.png]

And what I'd like to be able to do: [attachment=19056:track02.png]

As you can see, I'd like to figure out the center of these areas so I can mark them. Keep in mind that the areas may be irregularly shaped. Also, there would need to be some way of separating the white blobs so they can be considered separate objects. Maybe they are considered separate only if there's a certain amount of black pixels between them, or something.
3. I need a way to saturate (not desaturate) colors in a GLSL shader. There's code all over the place for desaturating an image. Example:

    vec3 desaturate(vec3 color, float amount)
    {
        vec3 gray = vec3(dot(vec3(0.2126, 0.7152, 0.0722), color));
        return mix(color, gray, amount);
    }

Many suggest converting RGB to HSV space before increasing saturation. However, I don't need to change hue, only saturation. If I pass negative values to the above function, it does indeed appear to saturate the image. Is there anything technically wrong with doing it this way? Am I taking a dangerous shortcut here?
4. That's a good idea, but is it generally understood that there's a performance benefit to this? Let's say I want to draw to two separate framebuffer objects. When switching programs between them, I also have to send all the uniforms again. If I just use one shader with if(), I only have to update one uniform (whichever pass I'm on).

The if/else way seems much more convenient, but if it's possible there's a performance benefit to switching programs, that's worth considering.
5. I've been playing around with doing multiple render passes in a fragment shader. I have FBOs with attached textures that I bind and then render to. On each pass, the previously rendered texture is available for reading in the fragment shader. I am doing three passes, all with the same shader. I simply update a uniform variable named "pass" between passes, and that variable is linked to if statements that contain what should be done for each pass.

It all works, but I'm wondering if there's a better way to do this. I read that others will use separate shaders altogether and swap them between passes (with a call to glUseProgram, I assume). That seems like it would have more overhead unless they're already compiled and linked. Is this a good approach, or am I overlooking something?
6. I have a simple shader that does some image processing on 2D images (textures) and then renders them at video resolutions like 1920x1080. The problem is that the viewport in the UI through which the user views the render is smaller, say a phone-sized screen. So although my render is 1920x1080, the viewport is much smaller. The result is lots of aliasing in the viewport due to the downscaling of the larger render to the smaller viewing area. What can I do to reduce the aliasing? Is there a standard technique for this case?
7. Thanks. I tried the following for the specular calculation and it seemed to fix the problem:

    vec3 H = normalize(L + E);
    vec4 Ispec = m_spec * pow(max(dot(N, H), 0.0), m_shine);

I found some helpful code here: http://http.develope..._chapter05.html Interesting that one of the Phong lighting tutorials on the Khronos site leaves that one line out: http://www.opengl.or...rs/lighting.php
8. I have a fully GLSL pipeline (no fixed-function lighting), and am having some trouble with my Phong shader. I have a scene set up with one light. The issue is that the specular component of the light shows on the front of the model correctly based on where the light is positioned, but also on the back of the model! Diffuse works properly (it does not show on the back of the model), so I'm stumped as to why only the specular component shows the error. Here are a couple frames that show the problem:

[attachment=9979:phong01.jpg] [attachment=9980:phong02.jpg]

As you can see, diffuse light is not visible on the back of the model, but specular is! What could be causing this? Here are my shaders:

    /* vertex shader */
    attribute vec3 v_position;
    attribute vec3 v_normal;

    uniform mat4 mat_p;  //projection
    uniform mat4 mat_mv; //modelview
    uniform mat4 mat_n;  //normal matrix
    uniform vec3 light_pos[2]; //lights

    varying vec3 normal;
    varying vec3 light_dir[2];
    varying vec3 eye_vec;

    void main(){
        normal = (mat_n * vec4(v_normal, 0.0)).xyz;
        vec4 newVertex = mat_mv * vec4(v_position, 1.0);
        eye_vec = -newVertex.xyz;
        //send lights to fragment shader
        light_dir[0] = light_pos[0] - newVertex.xyz;
        light_dir[1] = light_pos[1] - newVertex.xyz;
        gl_Position = mat_p * newVertex;
    }

    /* fragment shader */
    uniform vec3 c; //color
    varying vec3 normal;
    varying vec3 eye_vec;
    varying vec3 light_dir[2];

    void main(){
        vec3 N = normalize(normal);
        vec3 E = normalize(eye_vec);

        //specify material
        vec4 m_amb = vec4(0.07, 0.02, 0.07, 1.0);
        vec4 m_diff = vec4(c.r, c.g, c.b, 1.0);
        vec4 m_spec = clamp(vec4(c.r, c.g, c.b, 1.0), 0.0, 1.0);
        float m_shine = 20.0;

        vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
        vec3 L = normalize(light_dir[0]); //light direction

        //ambient
        vec4 Iamb = m_amb * 0.8;

        //diffuse
        vec4 Idiff = m_diff * max(dot(N, L), 0.0);
        Idiff = clamp(Idiff, 0.0, 1.0);

        //specular
        vec3 R = normalize(reflect(-L, N));
        vec4 Ispec = m_spec * pow(max(dot(R, E), 0.0), m_shine);
        Ispec = clamp(Ispec, 0.0, 1.0);

        finalColor += Iamb + Idiff + Ispec;
        gl_FragColor = finalColor;
    }
9. Problem with matrix math for camera

I'm calculating the strafe vector using the cross product of the forward (Z) direction and the up (Y) direction, so the strafe vector is basically 90 degrees from the forward vector. I then add this to the translation matrix when the user presses one of the WSAD move keys, and multiply the modelview matrix by it before rendering the scene. I'm using a rotation matrix that contains the rotation angle when the user rotates the camera view left/right on the Y/up axis. All of this works perfectly for either movement or rotation alone, but I can't get the two working together!
10. Problem with matrix math for camera

Okay, I can grab both the camera's current position vector and direction vector (the direction it's pointing) from the modelview matrix. Assuming I have this data on each loop iteration, how do I get the camera to rotate around its current position as opposed to 0,0,0? I have been reading a lot of tutorials that say you have to rotate the direction vector by the rotation matrix used to rotate the scene so that when you apply the translation, it goes in the correct direction. I think I understand that part well, as I'm able to derive the forward/back direction vector from the modelview matrix, normalize it, and add it to the translation matrix along with the speed value. I do the same for the strafe vector, which is the cross product of the fwd/back vector and the Y direction (currently -1.0). Example:

    //calculate strafe vector using cross product of Z and Y
    vec4 direction_strafe;
    vec4_cross(direction_strafe, direction_move, (vec4){0.0, -1.0, 0.0, 0.0});

    camera_position[z] += (direction_move[z] * speed);   //WS keys (fwd/back)
    camera_position[x] += (direction_strafe[x] * speed); //AD keys (strafe)

    //construct translation matrix
    mat4x4 translate;
    mat4x4_translate(translate, camera_position[x], 0.0, camera_position[z]); //Y is 0.0 since we never go up/down

I then multiply the translation matrix by the rotation matrix and drop the result into the modelview matrix. The problem is, when I rotate the scene, it always rotates around 0,0,0, so when I move around, the camera rotates around the world's origin and not its own position.
11. Problem with matrix math for camera
    Thanks. I tried that, but now the scene only rotates around the origin (0,0,0). Actually, this gives the same result as pre-multiplying the rotation/translation matrices. Let me just confirm: the "eye point" is the X,Y (assuming Z-up) coordinates of the camera, looking down, correct? And the "look at" point, or the camera's orientation, is the eye point vector multiplied by the rotation matrix, yes? I'm sorry I can't be more helpful. I am really having a hard time grasping this.
12. I've been struggling with the matrix math for translating and rotating a camera through 3D space. I've got an OpenGL program that places the camera in the middle of a box at 0,0,0. I want to be able to both translate and rotate (first-person-shooter style) through the box using the keyboard. I've got it working except that I can't translate/rotate and keep the coordinates consistent for both (only one or the other). For example, I can move through the box just fine (using the WSAD keys), but if I rotate my view to the left by 90 degrees, "forward" now goes to the right. The problem is, I multiply the modelview matrix by the translation matrix first, and then by the rotation matrix. This works except that rotating the scene 90 degrees throws the translation coordinates off by 90 degrees! In other words, translation occurs under the assumption that the scene has not been rotated. I only need to rotate the scene on the Y axis, Wolfenstein/Doom style. Here is my code so far:

    mat4x4 mat_model, mat_tran, mat_rot, mat_temp;
    mat4x4_identity(mat_model);
    mat4x4_identity(mat_tran);
    mat4x4_identity(mat_rot);
    mat4x4_identity(mat_temp);

    //create translation matrix
    mat4x4_translate(mat_tran, strafe, 0.0, dolly);

    //create rotation matrix
    mat4x4_rotate_Y(mat_rot, mat_temp, -rot_y);

    //apply the matrices to the modelview matrix
    mat4x4_mul(mat_temp, mat_tran, mat_rot);
    mat4x4_dup(mat_model, mat_temp);

What am I missing?
13. Yep, I just verified this. This is with a device running a PowerVR SGX 530 GPU. I can't say for sure whether other things in the code have any influence over this. I just know that if I write anything after gl_Position, the data for the vertices and normals gets swapped (i.e. OpenGL thinks vertex data is normal data and vice versa). That's without changing anything else, just literally moving two lines of code up/down and recompiling. I indeed did not have to put gl_Position last when the vertex shader was running on an Nvidia chip; it didn't seem to care. I also changed my code from using separate buffers to a single OpenGL buffer holding both vertices and normals, but that didn't seem to affect this issue.
14. I just thought I'd post an update to this. The order of the attribute variables wasn't actually the problem. The problem was with the vertex shader. I didn't realize this until running it on a device with a PowerVR SGX 530 GPU (the previous GPU was an Nvidia one). The problem was that gl_Position was being written too early in the shader. It should be the last thing written in the shader. Although some drivers may work when gl_Position is written in the middle of the vertex shader, others may fail. This behavior seems to be driver-specific. Here is an updated version that works with every device I've tried:

    //vertex shader
    attribute vec3 v_position;
    attribute vec3 v_normal;

    varying float lightIntensity;

    uniform mat4 model;
    //uniform mat4 view;
    uniform mat4 proj;

    void main()
    {
        //specify direction of light
        vec3 light_dir = vec3(0.9, 0.8, -3.0);
        vec4 newNormal = proj * model * vec4(v_normal, 0.0);
        lightIntensity = max(0.0, dot(newNormal.xyz, light_dir));

        //gl_Position must come LAST!
        vec4 newPosition = proj * model * vec4(v_position, 1.0);
        gl_Position = newPosition;
    }
15. I figured it out! The problem was that the normals needed to precede the vertices when specifying the attribute arrays. So, order matters. Here are the updated portions of the code:

    //grab locations of attribute vars
    //array of normals must come before vertices!
    glEnableVertexAttribArray(0); //normals
    glBindAttribLocation(prog, 0, "v_normals");
    glEnableVertexAttribArray(1); //vertices
    glBindAttribLocation(prog, 1, "v_position");

And also here:

    //normals
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    //vertices
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);