synthetix

Members
  • Content count

    30
  • Joined

  • Last visited

Community Reputation

192 Neutral

About synthetix

  • Rank
    Member
  1. I have a simple scene rendered in OpenGL that contains a grid plane, the same as you'd find in any 3D modeling app. I set the initial view using a lookat matrix pointed at the center of the scene (0,0). What I want to do is rotate the entire scene, ground plane included, around 0,0 without affecting the Z orientation of the "camera." Right now I detect the mouse position and use it to transform the scene via a rotation matrix. However, depending on the angle, the plane cants fore/aft and won't stay perfectly level. I'm sure I'm missing some math to correct the angles based on the eye position. I did some searching and found references to atan2(), which appears to be used to correct the angles, but I'm not sure how to implement it. Here's what I have:
[CODE]
//set up lookat matrix once when app initializes to give some nice perspective
glMatrixMode(GL_MODELVIEW);
glMultMatrixf(lookat);

//rotate view around the scene
//x,y,z struct members are from the mouse position
glRotatef(userinput.rot.angle, userinput.rot.x, userinput.rot.y, userinput.rot.z);
[/CODE]
  2. I'd like to take a grayscale image that contains white blotchy areas and identify the center of these areas. For example, if there's an area that contains pixel values above 240, I want to be able to get the x,y position of the middle of that area. These images explain pretty much what I'm trying to accomplish:   Image containing white areas to be identified: [attachment=19055:track01.png]   And what I'd like to be able to do: [attachment=19056:track02.png]   As you can see, I'd like to figure out the center of these areas so I can mark them. Keep in mind that the areas may be irregularly shaped. Also, there would need to be some way of separating the white blobs so they can be considered separate objects. Maybe they are considered separate only if there's a certain number of black pixels between them, or something like that.
  3. I need a way to saturate (not desaturate) colors in a GLSL shader. There's code all over the place for desaturating an image. Example:
[CODE]
vec3 desaturate(vec3 color, float amount)
{
    vec3 gray = vec3(dot(vec3(0.2126,0.7152,0.0722), color));
    return mix(color, gray, amount);
}
[/CODE]
Many suggest converting RGB to HSV space before increasing saturation. However, I don't need to change hue, only saturation. If I pass negative values to the above function, it indeed appears to saturate the image. Is there anything technically wrong about doing it this way? Am I trying to take a dangerous shortcut here?
  4. That's a good idea, but is it generally understood that there's a performance benefit to this? Let's say I want to draw to two separate framebuffer objects. When switching programs between them, I also have to send all the uniforms again. If I just use one shader with if(), I only have to update one uniform (whatever pass I'm on).   The if/else way seems much more convenient, but if it's possible there's a performance benefit to switching programs, that's worth consideration.
  5. I've been playing around with doing multiple render passes in a fragment shader. I have FBOs with attached textures that I bind and then render to. On each pass, the previously rendered texture is available for reading in the fragment shader. I am doing three passes, all with the same shader. I simply update a uniform variable named "pass" between passes, and that variable is linked to if statements that contain what should be done for each pass.   It all works, but I'm wondering if there's a better way to do this. I read that others will use separate shaders altogether, and swap them between passes (by making a call to glUseProgram, I assume). That seems like it would have more overhead unless they're already compiled and linked. Is this a good approach, or am I overlooking something?
  6. I have a simple shader that does some image processing on 2D images (textures) and then renders them at video resolutions like 1920x1080. The problem is that the viewport in the UI through which the user views the render is smaller, say, a phone-sized screen. So although my render is 1920x1080, the viewport is actually much smaller. The result is lots of aliasing in the viewport due to the downscaling of the larger render to the smaller viewing area. What can I do to reduce the aliasing? Is there a standard technique used in this case?
  7. [quote name='Hodgman' timestamp='1342102180' post='4958403'] The specular term should also be multiplied by the clamped [font=courier new,courier,monospace]N.L[/font] value. [/quote] Thanks. I tried the following for the specular calculation and it seemed to fix the problem: [CODE] vec3 H = normalize(L + E); vec4 Ispec = m_spec * pow(max(dot(N,H),0.0), m_shine); [/CODE] I found some helpful code here: [url="http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter05.html"]http://http.develope..._chapter05.html[/url] Interesting that one of the Phong lighting tutorials on the Khronos site leaves that one line out: [url="http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/lighting.php"]http://www.opengl.or...rs/lighting.php[/url]
  8. I have a fully GLSL pipeline (no fixed-function lighting), and am having some trouble with my Phong shader. I have a scene set up with one light. The issue is that the specular component of the light shows on the front of the model correctly based on where the light is positioned, but also on the back of the model! Diffuse works properly (it does not show on the back of the model), so I'm stumped as to why only the specular component shows the error. Here are a couple of frames that show the problem: [attachment=9979:phong01.jpg] [attachment=9980:phong02.jpg] As you can see, diffuse light is not visible on the back of the model, but specular is! What could be causing this? Here are my shaders:
[CODE]
/* vertex shader */
attribute vec3 v_position;
attribute vec3 v_normal;

uniform mat4 mat_p;  //projection
uniform mat4 mat_mv; //modelview
uniform mat4 mat_n;  //normal matrix
uniform vec3 light_pos[2]; //lights

varying vec3 normal;
varying vec3 light_dir[2];
varying vec3 eye_vec;

void main(){
    normal = (mat_n * vec4(v_normal,0.0)).xyz;
    vec4 newVertex = mat_mv * vec4(v_position,1.0);
    eye_vec = -newVertex.xyz;

    //send lights to fragment shader
    light_dir[0] = light_pos[0] - newVertex.xyz;
    light_dir[1] = light_pos[1] - newVertex.xyz;

    gl_Position = mat_p * newVertex;
}
[/CODE]
[CODE]
/* fragment shader */
uniform vec3 c; //color

varying vec3 normal;
varying vec3 eye_vec;
varying vec3 light_dir[2];

void main(){
    vec3 N = normalize(normal);
    vec3 E = normalize(eye_vec);

    //specify material
    vec4 m_amb = vec4(0.07,0.02,0.07,1.0);
    vec4 m_diff = vec4(c.r,c.g,c.b,1.0);
    vec4 m_spec = clamp(vec4(c.r,c.g,c.b,1.0), 0.0, 1.0);
    float m_shine = 20.0;

    vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
    vec3 L = normalize(light_dir[0]); //light direction

    //ambient
    vec4 Iamb = (m_amb*0.8);

    //diffuse
    vec4 Idiff = m_diff * max(dot(N,L), 0.0);
    Idiff = clamp(Idiff, 0.0, 1.0);

    //specular
    vec3 R = normalize(reflect(-L,N));
    vec4 Ispec = m_spec * pow(max(dot(R,E),0.0), m_shine);
    Ispec = clamp(Ispec, 0.0, 1.0);

    finalColor += Iamb + Idiff + Ispec;
    gl_FragColor = finalColor;
}
[/CODE]
  9. [quote name='dpadam450' timestamp='1341507089' post='4956025'] direction_move should be calculated by using the sin/cos of the camera angle. Looks like you have a cross product calculating it? [/quote] I'm calculating the strafe vector using the cross product of the forward (Z) direction and the up (Y) direction. So the strafe vector is basically 90 degrees from the forward vector. I then add this to the translation matrix when the user presses one of the WSAD move keys, and multiply the modelview matrix with it before rendering the scene. I'm using a rotation matrix which contains the rotation angle when the user rotates the camera view left/right on the Y/up axis. All of this works perfectly for either movement or rotation alone, but I can't get the two working together! [img]http://public.gamedev.net//public/style_emoticons/default/sad.png[/img]
  10. Okay, I can grab both the camera's current position vector and direction (the direction it's pointing) vector from the modelview matrix. Assuming I have this data on each loop iteration, how do I get the camera to rotate around its current position as opposed to 0,0,0? I have been reading a lot of tutorials that say you have to rotate the direction vector by the rotation matrix used to rotate the scene so when you apply the translation, it goes in the correct direction. I think I understand that part perfectly well, as I'm able to derive the forward/back direction vector from the modelview matrix, normalize it, and add it to the translation matrix along with the speed value. I do the same for the strafe vector, which is the cross product of the fwd/back vector and the Y direction (currently -1.0). Example:
[CODE]
//calculate strafe vector using cross product of Z and Y
vec4 direction_strafe;
vec4_cross(direction_strafe,direction_move,(vec4){0.0,-1.0,0.0,0.0});

camera_position[z] += (direction_move[z] * speed);   //WS keys (fwd/back)
camera_position[x] += (direction_strafe[x] * speed); //AD keys (strafe)

//construct translation matrix
mat4x4 translate;
//Y is 0.0 since we never go up/down
mat4x4_translate(translate, camera_position[x], 0.0, camera_position[z]);
[/CODE]
I then multiply the translation matrix by the rotation matrix and drop the result into the modelview matrix. Problem is, when I rotate the scene, it is always rotating around 0,0,0, so when I move around, the camera always rotates around the world's origin and not its own.
  11. [quote name='Goran Milovanovic' timestamp='1341268329' post='4955084'] "Forward" should modify a direction vector in local coordinates, which can then be transformed by the rotation matrix to generate the approprite translation vector. [/quote] Thanks. I tried that, but now the scene only rotates around the origin (0,0,0). Actually, this gives the same result as pre-multiplying the rotation/translation matrices. Let me just confirm: the "eye point" is the X,Y (assuming Z-up) coordinates of the camera, looking down, correct? And the "look at" point, or the camera's orientation, is the eye point vector multiplied by the rotation matrix, yes? I'm sorry I can't be more helpful. I am really having a hard time grasping this.
  12. I've been struggling with getting the matrix math down for translating and rotating a camera through 3D space. I've got an OpenGL program that places the camera in the middle of a box at 0,0,0. I want to be able to both translate and rotate (first-person-shooter style) through the box using the keyboard. I've got it working except I can't translate/rotate and keep the coordinates the same for both (only one or the other). For example, I can move through the box just fine (using the WSAD keys), but if I rotate my view to the left by 90 degrees, "forward" now goes to the right. The problem is, I multiply the modelview matrix by the translation matrix first, and then by the rotation matrix. This works except that by rotating the scene 90 degrees, it throws the translation coordinates off by 90 degrees! In other words, translation occurs under the assumption that the scene has [i]not been rotated.[/i] I only need to rotate the scene on the Y axis, Wolfenstein/Doom style. Here is my code so far:
[CODE]
mat4x4 mat_model,mat_tran,mat_rot,mat_temp;
mat4x4_identity(mat_model);
mat4x4_identity(mat_tran);
mat4x4_identity(mat_rot);
mat4x4_identity(mat_temp);

//create translation matrix
mat4x4_translate(mat_tran, strafe, 0.0, dolly);

//create rotation matrix
mat4x4_rotate_Y(mat_rot,mat_temp,-rot_y);

//apply the matrices to the modelview matrix
mat4x4_mul(mat_temp,mat_tran,mat_rot);
mat4x4_dup(mat_model,mat_temp);
[/CODE]
What am I missing?
  13. [quote name='larspensjo' timestamp='1340883327' post='4953628'] Are you sure about that? I have many shaders where gl_Position is not the last thing. Why does it have to be last? [/quote] Yep, I just verified this. This is with a device running a PowerVR SGX 530 GPU. I can't say for sure whether other things in the code have any influence over this. I just know that if I write anything after gl_Position, the data for the vertices and normals gets swapped (i.e. OpenGL thinks vertex data is normal data and vice versa). That's without changing anything else, just literally moving two lines of code up/down and recompiling. I indeed [i]did not[/i] have to put gl_Position last when the vertex shader was running on an Nvidia chip. It didn't seem to care. I also changed my code from using separate buffers to a single OpenGL buffer to hold both vertices and normals, but that didn't seem to affect this issue.
  14. [quote name='Synthetix' timestamp='1339741288' post='4949454'] I figured it out! The problem was the normals needed to precede the vertices when specifying the attribute arrays. So, order matters. [/quote] I just thought I'd post an update to this. The order of the attribute variables wasn't actually the problem. The problem was with the vertex shader. I didn't realize this until running it on a device with a PowerVR SGX 530 GPU (the previous GPU was an Nvidia one). The problem was that gl_Position was being written too early in the shader. It should be the [i]last thing written in the shader.[/i] Although some drivers may work when gl_Position is written in the middle of the vertex shader, others may fail. This behavior seems to be driver-specific. Here is an updated version that works with every device I've tried:
[CODE]
//vertex shader
attribute vec3 v_position;
attribute vec3 v_normal;

varying float lightIntensity;

uniform mat4 model;
//uniform mat4 view;
uniform mat4 proj;

void main()
{
    //specify direction of light
    vec3 light_dir = vec3(0.9,0.8,-3.0);

    vec4 newNormal = proj * model * vec4(v_normal,0.0);
    lightIntensity = max(0.0, dot(newNormal.xyz, light_dir));

    //gl_Position must come LAST!
    vec4 newPosition = proj * model * vec4(v_position,1.0);
    gl_Position = newPosition;
}
[/CODE]
  15. I figured it out! The problem was the normals needed to precede the vertices when specifying the attribute arrays. So, order matters. Here are the updated portions of the code:
[CODE]
//grab locations of attribute vars
//array of normals must come before vertices!
glEnableVertexAttribArray(0); //normals
glBindAttribLocation(prog, 0, "v_normals");
glEnableVertexAttribArray(1); //vertices
glBindAttribLocation(prog, 1, "v_position");
[/CODE]
And also here:
[CODE]
//normals
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

//vertices
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
[/CODE]