moldyviolinist
[C++] First person mouse look camera controller for variable gravity, orientation
moldyviolinist replied to moldyviolinist's topic in Math and Physics
Solved it. See http://gamedev.stackexchange.com/questions/73588/how-do-i-fix-my-planet-facing-camera for a pretty good explanation. I'll post the code here.

glm::mat4 trans;
float factor = 1.0f;
float real_vertical = vertical;
m_horizontal += horizontal;
m_vertical += vertical;
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}
if (m_vertical > MAX_VERTICAL) {
    m_vertical = MAX_VERTICAL;
} else if (m_vertical < -MAX_VERTICAL) {
    m_vertical = -MAX_VERTICAL;
}
glm::quat world_axes_rotation = glm::angleAxis(m_horizontal * ONEEIGHTY_PI, glm::vec3(0.0f, 1.0f, 0.0f));
world_axes_rotation = glm::normalize(world_axes_rotation);
world_axes_rotation = glm::rotate(world_axes_rotation, m_vertical * ONEEIGHTY_PI, glm::vec3(1.0f, 0.0f, 0.0f));
m_pole = glm::normalize(m_pole - glm::dot(m_orientation, m_pole) * m_orientation);
glm::mat4 local_transform;
local_transform[0] = glm::vec4(m_pole.x, m_pole.y, m_pole.z, 0.0f);
local_transform[1] = glm::vec4(m_orientation.x, m_orientation.y, m_orientation.z, 0.0f);
glm::vec3 tmp = glm::cross(m_pole, m_orientation);
local_transform[2] = glm::vec4(tmp.x, tmp.y, tmp.z, 0.0f);
local_transform[3] = glm::vec4(m_position.x, m_position.y, m_position.z, 1.0f);
world_axes_rotation = glm::normalize(world_axes_rotation);
m_view = local_transform * glm::mat4_cast(world_axes_rotation);
m_direction = -1.0f * glm::vec3(m_view[2]);
m_up = glm::vec3(m_view[1]);
m_right = glm::vec3(m_view[0]);
m_view = glm::inverse(m_view);
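For anyone puzzling over the m_pole line: it is a Gram-Schmidt projection that keeps the reference pole vector tangent to the planet surface. A minimal standalone sketch of just that step, assuming the up vector is unit length (the Vec3 struct and helpers are stand-ins for glm, not the original classes):

```cpp
#include <cassert>
#include <cmath>

// Stand-ins for glm::vec3 and its helpers, just to isolate the key step.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Gram-Schmidt step: remove the component of `pole` along the unit `up`
// vector, so the result lies in the tangent plane of the surface.
Vec3 reproject_pole(Vec3 pole, Vec3 up) {
    float d = dot(up, pole);
    Vec3 p = { pole.x - d * up.x, pole.y - d * up.y, pole.z - d * up.z };
    return normalize(p);
}
```

Repeating this every frame keeps the pole from drifting out of the tangent plane as the orientation changes.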
Collision detection and walking around environment
moldyviolinist replied to newtechnology's topic in Math and Physics
I think raycasting is probably the best way to do collision detection with terrain or ground meshes. Bullet has a built-in ray testing function (a member of btCollisionWorld) that should be quite optimized.
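For flat or locally flat ground, the core of such a raycast reduces to a ray-vs-plane test. A hedged sketch in plain C++ (the function name `ray_plane` is illustrative, not Bullet API; with Bullet you would call btCollisionWorld::rayTest with a result callback instead):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect the ray origin + t * dir (t >= 0) with the plane of unit
// normal n and offset d (the points p satisfying dot(n, p) == d).
// Returns true and writes the hit distance into *t on success.
bool ray_plane(Vec3 origin, Vec3 dir, Vec3 n, float d, float* t) {
    float denom = dot(n, dir);
    if (std::fabs(denom) < 1e-6f) return false;  // ray parallel to plane
    float hit = (d - dot(n, origin)) / denom;
    if (hit < 0.0f) return false;                // plane is behind the ray
    *t = hit;
    return true;
}
```

Casting straight down from the character and comparing the hit distance against the standing height is one common way to snap a walker to the ground.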
[C++] First person mouse look camera controller for variable gravity, orientation
moldyviolinist replied to moldyviolinist's topic in Math and Physics
I'm going to bump this with my most recent attempts. I still haven't solved this issue. I'll post all the code I've tried. There are comments above each block that indicate the issue with that particular implementation. Surely someone has implemented this type of camera at some point? I would be happy to basically copy someone else's implementation rather than attempt to fix my own work, if necessary.

void Camera::set_angles_advanced(float horizontal, float vertical) {
    glm::mat4 trans;
    float factor = 1.0f;
    float real_vertical = vertical;
    m_horizontal += horizontal;
    m_vertical += vertical;
    while (m_horizontal > TWO_PI) {
        m_horizontal -= TWO_PI;
    }
    while (m_horizontal < -TWO_PI) {
        m_horizontal += TWO_PI;
    }
    if (m_vertical > MAX_VERTICAL) {
        vertical -= m_vertical - MAX_VERTICAL;
        if (vertical < 0) {
            vertical = 0;
        }
        m_vertical = MAX_VERTICAL;
    } else if (m_vertical < -MAX_VERTICAL) {
        vertical -= m_vertical + MAX_VERTICAL;
        if (vertical > 0) {
            vertical = 0;
        }
        m_vertical = -MAX_VERTICAL;
    }

    // - south pole rotation
    /*glm::quat rotation;
    if (m_orientation != glm::vec3(0.0f, 1.0f, 0.0f)) {
        glm::vec3 axis = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), m_orientation));
        rotation = glm::rotate(rotation, acosf(m_orientation.y) * ONEEIGHTY_PI, axis);
    }
    rotation = glm::rotate(rotation, m_horizontal * ONEEIGHTY_PI, glm::vec3(0.0f, 1.0f, 0.0f));
    rotation = glm::rotate(rotation, m_vertical * ONEEIGHTY_PI, glm::vec3(1.0f, 0.0f, 0.0f));
    m_direction = glm::vec3(rotation * glm::vec4(0.0f, 0.0f, 1.0f, 0.0f));*/

    // - south pole rotation
    /*glm::vec3 tmp = m_orientation;
    float look_factor = 1.0f;
    float addition = 0.0f;
    if (tmp.y < 0.0f) {
        tmp.y *= -1.0f;
        look_factor = -1.0f;
        addition = 180.0f;
    }
    glm::mat4 yaw = glm::rotate(glm::mat4(), m_horizontal * ONEEIGHTY_PI, m_orientation);
    glm::mat4 pitch = glm::rotate(glm::mat4(), m_vertical * ONEEIGHTY_PI, glm::vec3(1.0f, 0.0f, 0.0f));
    if (tmp != glm::vec3(0.0f, 1.0f, 0.0f)) {
        glm::vec3 axis = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), tmp));
        pitch = glm::rotate(glm::mat4(), acosf(tmp.y) * ONEEIGHTY_PI * look_factor + addition, axis) * pitch;
    }
    glm::mat4 cam = yaw * pitch;
    m_direction = glm::vec3(cam[2]);*/

    // - oscillation when looking close to vertical, vertical range capped
    /*glm::mat4 yaw_matrix = glm::rotate(glm::mat4(), m_horizontal * ONEEIGHTY_PI, m_orientation);
    m_right = glm::cross(m_direction, m_orientation);
    glm::mat4 pitch_matrix = glm::rotate(glm::mat4(), m_vertical * ONEEIGHTY_PI, glm::normalize(m_right));
    glm::mat4 camera_matrix = pitch_matrix * yaw_matrix;
    m_direction = glm::vec3(camera_matrix[2]);*/

    // - oscillation when looking close to vertical, vertical range always capped to -90, 90
    /*glm::mat4 yaw = glm::rotate(glm::mat4(), m_horizontal * ONEEIGHTY_PI, m_orientation);
    glm::mat4 pitch = glm::rotate(glm::mat4(), m_vertical * ONEEIGHTY_PI, m_right);
    glm::mat4 cam = pitch * yaw;
    m_right = glm::vec3(cam[0]);
    m_up = glm::vec3(cam[1]);
    m_direction = glm::vec3(cam[2]);*/

    // - south pole rotation
    /*glm::dvec3 dir = glm::dvec3(cos(m_vertical) * sin(m_horizontal), sin(m_vertical), cos(m_vertical) * cos(m_horizontal));
    glm::vec3 tmp = m_orientation;
    tmp.y = fabs(tmp.y);
    glm::dmat4 dtrans;
    float angle;
    if (glm_sq_distance(tmp, glm::vec3(0.0f, 1.0f, 0.0f)) > 0.001f) {
        glm::vec3 axis = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), m_orientation));
        angle = acos(m_orientation.y) * ONEEIGHTY_PI;
        dtrans = glm::rotate(glm::mat4(), angle, axis);
    } else if (m_orientation.y < 0.0f) {
        factor = -1.0f;
    }
    dir = glm::dvec3(dtrans * glm::dvec4(dir.x, dir.y, dir.z, 0.0f));
    m_direction = glm::vec3(dir);*/

    m_dir_horizontal_norm = glm::normalize(m_direction - glm_project(m_direction, m_orientation));
    m_view = glm::lookAt(m_position, m_position + m_direction, m_orientation);
    m_vp = m_perspective * m_view;
}
[C++] First person mouse look camera controller for variable gravity, orientation
moldyviolinist replied to moldyviolinist's topic in Math and Physics
Well, I tried this out. And it did fix the original problem I had, so it was clearly the weird transformation I was using. However, there are two other problems. The first is that the look direction is not automatically adjusted as the orientation changes. So the direction stays fixed in place as you move over the planet's surface, which is definitely awkward. The direction needs to stay the same relative to the ground as the orientation changes. The second problem is that attempting to cap the vertical look angle seems to cap to the same section regardless of the orientation. Well, it's weird: if you move along and don't move the mouse much, the look direction is capped between -90 and 90 relative to the original orientation (0, 1, 0), which is obviously a problem. However, putting the vertical angle at its max and moving horizontally a lot seems to fix the issue, and then the cap is reset to the current orientation. Very peculiar. I can't quite imagine why that would be happening... Here's the code I used; maybe someone can spot the issues causing the two problems I mentioned.
void Camera::set_angles_advanced(float horizontal, float vertical) {
    glm::mat4 trans;
    float factor = 1.0f;
    m_horizontal += horizontal;
    m_vertical += vertical;
    while (m_horizontal > TWO_PI) {
        m_horizontal -= TWO_PI;
    }
    while (m_horizontal < -TWO_PI) {
        m_horizontal += TWO_PI;
    }
    if (m_vertical > MAX_VERTICAL) {
        vertical -= m_vertical - MAX_VERTICAL;
        if (vertical < 0) {
            vertical = 0;
        }
        m_vertical = MAX_VERTICAL;
    } else if (m_vertical < -MAX_VERTICAL) {
        vertical -= m_vertical + MAX_VERTICAL;
        if (vertical > 0) {
            vertical = 0;
        }
        m_vertical = -MAX_VERTICAL;
    }
    glm::mat4 pitch = glm::rotate(glm::mat4(), vertical * ONEEIGHTY_PI, glm::normalize(m_right));
    glm::mat4 yaw = glm::rotate(glm::mat4(), horizontal * ONEEIGHTY_PI, m_orientation);
    m_camera = pitch * yaw * m_camera;
    m_direction = glm::vec3(m_camera[2]);
    m_view = glm::lookAt(m_position, m_position + m_direction, m_orientation);
    m_vp = m_perspective * m_view;
    m_dir_norm = glm::normalize(m_direction);
    m_dir_horizontal_norm = glm::normalize(m_direction - glm_project(m_direction, m_orientation));
    m_right = glm::cross(m_direction, m_orientation);
    m_right_horizontal_norm = glm::normalize(m_right);
}

Thanks in advance, everyone!
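The yaw-wrap and pitch-clamp bookkeeping at the top of that function can be isolated and tested on its own. A sketch under the assumption that the caps behave as described; the constants and function names here are illustrative, not the exact posted code:

```cpp
#include <cassert>
#include <cmath>

const float TWO_PI = 6.2831853f;
const float MAX_VERTICAL = 1.55f;  // illustrative cap, just under pi/2

// Wrap an accumulated yaw angle back into [-TWO_PI, TWO_PI].
float wrap_yaw(float a) {
    while (a > TWO_PI)  a -= TWO_PI;
    while (a < -TWO_PI) a += TWO_PI;
    return a;
}

// Clamp accumulated pitch to [-MAX_VERTICAL, MAX_VERTICAL], shrinking the
// incoming delta so an incremental rotation never overshoots the cap.
float clamp_pitch(float current, float* delta) {
    float next = current + *delta;
    if (next > MAX_VERTICAL) {
        *delta -= next - MAX_VERTICAL;
        next = MAX_VERTICAL;
    } else if (next < -MAX_VERTICAL) {
        *delta -= next + MAX_VERTICAL;
        next = -MAX_VERTICAL;
    }
    return next;
}
```

Shrinking the delta matters for this incremental-rotation style: the pitch matrix is built from the usable delta, so applying the raw delta at the cap would rotate the camera past the clamped stored angle.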
[C++] First person mouse look camera controller for variable gravity, orientation
moldyviolinist replied to moldyviolinist's topic in Math and Physics
Thanks for the suggestions. I did attempt a matrix-only implementation at one point, but was unsuccessful. It's entirely possible that I had some mistakes in there, and your solution is pretty clean, so I will give it a try. I'll report back after work. One question I have: how OK is it to use the previous frame's right vector? I'm sure it works, but I would prefer an up-to-date right vector; is there a good way to calculate that? I ended up completely avoiding a right vector in my code. And in your pitch-and-yaw-angle method, you're using a right vector of vec3(1, 0, 0), which surely isn't correct for different orientations.
[C++] First person mouse look camera controller for variable gravity, orientation
moldyviolinist posted a topic in Math and Physics
Hi, I'm attempting to implement a first-person, mouse-controlled camera for variable-orientation situations. This means basically I need a regular mouse look camera to behave normally with any "up" vector. This will be used for moving around the entire surface of a spherical planet. So as you walk along, the view direction stays the same relative to the ground, and the view doesn't stay fixed on one point as you change gravity vectors. Pretty basic, right? Well, I've got the necessary code down, and it works perfectly in most situations. However, there is a big problem with it. Near the south pole (basically near gravity = (0, -1, 0)), the direction vector seems to be transformed toward the south pole as the gravity/orientation changes. So if you're standing still a few units away from the south pole, the direction is fine. Try and move in any direction, and the view direction is shifted toward the south pole. It's extremely odd. It seems to me that the transformation matrix for transforming the world-axis-based mouse movements is somehow affecting the direction vector, shifting it to point toward the south pole. Basically, I think the transformation I'm using (a rotation between the original up vector (0, 1, 0) and the current orientation) is forcing the direction to be "in line" with that, hence pointing it toward the south pole. But the transformation seems to work fine for most other orientations. I would upload a video, but the recording really doesn't capture the problem, because it looks like the mouse is just being moved toward the south pole. Also, only the horizontal component gets messed up; the vertical component stays level. Anyway, here's the code. I would really appreciate some guidance from someone who's got a properly working system. Here's the implementation that just uses sines and cosines to calculate the direction vector from the mouse angles.
glm::mat4 trans;
float factor = 1.0f;
m_horizontal += horizontal;
m_vertical += vertical;
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}
if (m_vertical > MAX_VERTICAL) {
    vertical = MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = MAX_VERTICAL;
} else if (m_vertical < -MAX_VERTICAL) {
    vertical = -MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = -MAX_VERTICAL;
}
glm::vec3 tmp = m_orientation;
tmp.y = fabs(tmp.y);
/* this check is to prevent a glm abort on the cross product of parallel vectors */
/* factor = -1.0f only extremely close to the south pole; the problem occurs well outside that region */
if (glm_sq_distance(tmp, glm::vec3(0.0f, 1.0f, 0.0f)) > 0.001f) {
    glm::vec3 rot = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), m_orientation));
    float angle = (acosf(m_orientation.y) * 180.0f) * PI_RECIPROCAL;
    glm::quat t = glm::angleAxis(angle, rot);
    trans = glm::mat4_cast(t);
} else if (m_orientation.y < 0.0f) {
    factor = -1.0f;
}
tmp = glm::vec3(cos(m_vertical) * sin(m_horizontal), sin(m_vertical), cos(m_vertical) * cos(m_horizontal)) * factor;
m_up = m_orientation;
m_direction = glm::vec3(trans * glm::vec4(tmp.x, tmp.y, tmp.z, 0.0f));
m_view = glm::lookAt(m_position, m_position + m_direction, m_up);
m_vp = m_perspective * m_view;

I also have a quaternion implementation, but it's a little more prone to glm aborts (anyone have an elegant solution for those, by the way?). Both the quaternion version and the regular angle version behave identically.
glm::mat4 trans;
float factor = 1.0f;
m_horizontal += horizontal;
m_vertical += vertical;
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}
if (m_vertical > MAX_VERTICAL) {
    vertical = MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = MAX_VERTICAL;
} else if (m_vertical < -MAX_VERTICAL) {
    vertical = -MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = -MAX_VERTICAL;
}
glm::quat t, quat;
glm::vec3 tmp = m_orientation;
tmp.y = fabs(tmp.y);
if (glm_sq_distance(tmp, glm::vec3(0.0f, 1.0f, 0.0f)) > 0.001f) {
    glm::vec3 axis = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), m_orientation));
    float angle = (acosf(m_orientation.y) * 180.0f) * PI_RECIPROCAL;
    t = glm::angleAxis(angle, axis);
} else if (m_orientation.y < 0.0f) {
    factor = -1.0f;
}
glm::quat rot = glm::angleAxis(m_horizontal * ONEEIGHTY_PI, glm::vec3(0.0f, 1.0f, 0.0f));
quat = rot * quat;
rot = glm::angleAxis(m_vertical * ONEEIGHTY_PI, glm::vec3(1.0f, 0.0f, 0.0f));
quat = quat * rot;
t = t * quat;
trans = glm::mat4_cast(t);
m_direction = glm::vec3(trans[2]);

Thanks in advance for the help.
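The yaw/pitch-to-direction convention used in both snippets (yaw 0, pitch 0 looking down +Z, positive pitch tilting toward +Y) can be checked in isolation. A small sketch, with a plain struct standing in for glm::vec3:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Spherical-coordinate direction from yaw (about Y) and pitch, matching
// the cos/sin expression in the snippets above.
Vec3 direction_from_angles(float yaw, float pitch) {
    return { std::cos(pitch) * std::sin(yaw),
             std::sin(pitch),
             std::cos(pitch) * std::cos(yaw) };
}
```

The result is always unit length, since cos²(pitch)·(sin²(yaw) + cos²(yaw)) + sin²(pitch) = 1.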
Help with Stencil Test and Deferred Shading [SOLVED]
moldyviolinist posted a topic in Graphics and GPU Programming
I'm having real trouble getting my stencil test to work correctly with a basic deferred shading implementation. I'm aware there are other options, but I really need to get the stencil test working because I plan to try to use stencil volume shadows. I've done pretty much this exact implementation before, and it worked fine, so I'm really at a loss for what the problem is. In any case, the real problem is that the stencil test doesn't seem to properly take account of the first pass's depth buffer, even though the depth test is on during the stencil pass. In addition, the stencil test seems corrupted when it's behind the geometry. Three screenshots below, and then stencil and depth buffer grabs. [attachment=19744:stencil test ex 1.png] In this first image, the light volume that I'm rendering to the stencil buffer and then again in the light pass is behind terrain geometry. So why is it visible? [attachment=19745:stencil test ex 2.png] This second image looks alright from the other side, although you can see underneath the geometry a bit again. [attachment=19746:stencil test ex 3.png] The third image shows the output of the stencil pass, so clearly the geometry is OK. [attachment=19747:FrameStencil_0021_0099_Pre.png] [attachment=19748:FrameDepth_0021_0099_Pre.png] Sorry they're so small. It's too slow to capture with any larger framebuffer size.
Here's the rendering code:

// geometry pass
m_gbuffer->bindWrite(); // binds fbo
m_gbuffer->setDrawBuffers(0, m_num_drawbuffers); // glDrawBuffers(num_buffers, buffers)
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glEnable(GL_CULL_FACE);
updateCameraInformation(m_camera_information);
world->geometryRender(m_camera_information);
glDepthMask(GL_FALSE);

// light pass
glDrawBuffer(GL_COLOR_ATTACHMENT0 + m_num_drawbuffers); // same fbo, so same depth buffer, but different texture for light pass
m_gbuffer->bindToTextures(); // bind fbo textures for shaders
glEnable(GL_STENCIL_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// stencil pass
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glStencilFunc(GL_ALWAYS, 0, 0);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR, GL_KEEP);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR, GL_KEEP);
m_point_light.render();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// light pass
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
m_point_light.render();
glCullFace(GL_BACK);
glDisable(GL_STENCIL_TEST);
glDisable(GL_CULL_FACE);
m_ambient_light.render();
glDisable(GL_BLEND);

Thanks for your time. I appreciate any suggestions.

EDIT: I swear, posting on these forums just makes me think differently and I solve it easily. The glStencilOpSeparate calls need to use GL_INCR_WRAP and GL_DECR_WRAP respectively. The WRAP variants mean that decrementing at 0 wraps the buffer value to 255, and incrementing at 255 wraps to 0. Very useful, yet my previous code that worked did not use WRAP. It must be somewhat platform dependent. Hope this can help someone.
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
Framebuffer depth test with depth and stencil buffer
moldyviolinist replied to moldyviolinist's topic in Graphics and GPU Programming
OK, well it looks like I solved it. It seems that the order of glClear() and glDepthMask(GL_TRUE) actually really matters. The depth mask needs to be true BEFORE you clear. It makes sense, because the mask is what allows writing to the depth buffer; if it's false when you clear, you won't clear the depth buffer. I was trying to have the depth mask true for my geometry pass but false for the light and stencil passes, and without realizing the implications I just set the depth mask to true after the clear. Hopefully this will help anyone with a similar issue. Mods can close the thread, I guess?
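In call-sequence form, the fix described above looks roughly like this (a sketch only; `fbo` and the pass structure are assumed from the earlier posts, and the fragment is not runnable on its own):

```cpp
// glClear respects glDepthMask, so depth writes must be enabled
// BEFORE the clear or the depth buffer keeps its old contents.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);  // assumed FBO handle
glDepthMask(GL_TRUE);                         // enable depth writes first
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// ... geometry pass renders with depth writes on ...
glDepthMask(GL_FALSE);                        // now safe to disable for light/stencil passes
```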
Framebuffer depth test with depth and stencil buffer
moldyviolinist replied to moldyviolinist's topic in Graphics and GPU Programming
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, m_width, m_height, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

This actually fixes the depth test problems, but somehow introduces buffer clearing issues. I'm not sure how that's possible, since everything was clearing fine before.
Framebuffer depth test with depth and stencil buffer
moldyviolinist replied to moldyviolinist's topic in Graphics and GPU Programming
Thanks for your suggestions. I tried switching to GL_DEPTH24_STENCIL8, but that didn't change anything. I also tried binding the texture to both attachments, but that didn't make any difference either. I'll have to keep trying things. Is there a way to create and bind the depth and stencil buffers completely separately? That way I could use the stencil-less depth buffer to depth test correctly, and still use the stencil buffer for deferred shading.
Framebuffer depth test with depth and stencil buffer
moldyviolinist posted a topic in Graphics and GPU Programming
I'm attempting to implement deferred shading. I've got a number of problems with it, but first and foremost, depth testing is not working right. In my deferred shading I create a depth buffer, naturally, and depth testing works fine with it. But if I create a combined depth and stencil buffer, depth testing just doesn't work. It's peculiar.

Buffer creation, depth component only; depth testing works:

glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_map, 0);

Buffer creation, depth and stencil components; depth testing doesn't work:

glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_map, 0);

This is literally the only change to my code, and depth testing works for the first one and fails for the second one. The stencil test is disabled and the depth test enabled for both. Any suggestions to fix this problem? I found a thread several years old, but only somewhat related, suggesting this was an AMD driver problem, and indeed I have an AMD card. Hopefully I'm just doing something wrong. Thanks.
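One thing worth double-checking in the failing call: for the GL_DEPTH32F_STENCIL8 internal format, the OpenGL spec pairs the GL_DEPTH_STENCIL format with the packed type GL_FLOAT_32_UNSIGNED_INT_24_8_REV rather than plain GL_FLOAT. A sketch of the allocation with that type, reusing the variable names from the post above:

```cpp
// GL_DEPTH32F_STENCIL8 stores a 32-bit float depth plus an 8-bit stencil,
// so the matching transfer type is the packed GL_FLOAT_32_UNSIGNED_INT_24_8_REV.
glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D, depth_map, 0);
```

Since no pixel data is uploaded (the data pointer is NULL), some drivers tolerate a mismatched type while others reject the call, which could explain behavior that differs by vendor.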
OpenGL Strange lighting issues; difference between SFML and GLFW
moldyviolinist replied to moldyviolinist's topic in Graphics and GPU Programming
Do glVertexAttribPointer() and glBindBuffer() need to be called every frame, then? Because I was binding it correctly once, but the normals only showed up once I called both of those every frame.
OpenGL Strange lighting issues; difference between SFML and GLFW
moldyviolinist posted a topic in Graphics and GPU Programming
I recently got into OpenGL, trying to make a simple survival/exploration game just for fun. I'm using C++ with GLEW. I started off using SFML 2.0 for context creation, since it was pretty well documented and had the necessary features. However, I ran into some problems using SFML, and I switched to GLFW. At the same time I decided to organize my code and make it object-oriented. I now have some issues getting some simple Phong lighting to work correctly. Using the exact same GLSL shader code, I receive completely different lighting results with SFML and GLFW. Since the project using GLFW has been reorganized, I thought I might have messed something up somewhere, but I just can't find anything. I've tried a number of different lighting algorithms and variations. Here are screenshots comparing the same scene. GLFW: [attachment=14390:glfw.png], SFML: [attachment=14391:sfml.png] Apart from the GLFW scene being significantly darker, the light just doesn't spread as much. There's also a weird pure-white spotlight directly underneath the light. Finally, the SFML scene has hard edges, which is actually what I want for this game. With non-smoothed normals, the edges should always appear, right? I attached all the source files for the two projects, including the shaders used. I included the Code::Blocks project files in case you'd like to test the lighting yourself. Really appreciate any help. Thanks.

Edit 5:04: OK, well I had the idea to use color information to display the normal values. Needless to say, for the GLFW version they are horribly wrong. It's clear that my program is passing the shader bad normal values. I'll need to look more closely at that aspect of my program.

Edit 5:23: The normal buffer was somehow not bound when I called glDrawArrays. The change that made the difference was calling glBindBuffer on the normal buffer just before glDrawArrays. Can someone clarify the order of operations on binding buffers?
I've seen all sorts of different ways to do it in tutorials. I wasn't binding the UV and vertex buffers every frame, but they were obviously being passed to the shaders fine. So why was the normal buffer NOT bound?
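The usual answer: glVertexAttribPointer captures whatever buffer is bound to GL_ARRAY_BUFFER at the moment it is called, and (when a VAO is in use) that association is saved in the VAO, so neither call needs repeating per frame once setup is done against the right buffers. A sketch of the setup-once pattern, with hypothetical buffer names, not runnable outside a GL context:

```cpp
// One-time setup: the VAO records, per attribute index, which VBO
// glVertexAttribPointer was called against. Binding the VAO later
// restores all of those attribute sources at once.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vertex_vbo);   // hypothetical buffer names
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, normal_vbo);   // attribute 1 now reads from normal_vbo
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);

// Per frame: just bind the VAO and draw.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);
```

If no VAO is used (legacy GL 2.x style), the attribute-pointer state is global, but the same rule applies: the glVertexAttribPointer call must be reissued whenever a different buffer should feed that attribute, which would explain a normal buffer that appeared "unbound" at draw time.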