

moldyviolinist

Member Since 21 Mar 2013
Offline Last Active Aug 15 2014 08:52 AM

Topics I've Started

[C++] First person mouse look camera controller for variable gravity, orientation

13 April 2014 - 09:00 PM

Hi, I'm attempting to implement a first-person, mouse-controlled camera for variable-orientation situations. Basically, I need a regular mouse-look camera to behave normally with any "up" vector. This will be used for moving around the entire surface of a spherical planet: as you walk along, the view direction should stay the same relative to the ground rather than staying fixed on one point as the gravity vector changes. Pretty basic, right?

 

Well, I've got the necessary code down, and it works perfectly in most situations. However, there is a big problem. Near the south pole (roughly near gravity = (0, -1, 0)), the direction vector seems to get dragged toward the south pole as the gravity/orientation changes. If you stand still a few units away from the south pole, the direction is fine; try to move in any direction, and the view direction is shifted toward the south pole. It's extremely odd.

 

It seems to me that the transformation matrix I use to transform the world-axis-based mouse movements is somehow pulling the direction vector toward the south pole. Basically, I think the transformation (a rotation from the original up vector (0, 1, 0) to the current orientation) is forcing the direction to be "in line" with it, hence pointing it toward the south pole. Yet the transformation seems to work fine for most other orientations.

 

I would upload a video, but a recording really doesn't capture the problem, because it just looks like the mouse is being moved toward the south pole. Also, only the horizontal component gets messed up; the vertical component stays level.

 

Anyway, here's the code. I would really appreciate some guidance from someone who's got a properly working system. 

 

Here's the implementation that just uses sines and cosines to calculate the direction vector from the mouse angles:

glm::mat4 trans;
float factor = 1.0f;
m_horizontal += horizontal;
m_vertical += vertical;

// wrap yaw
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}

// clamp pitch
if (m_vertical > MAX_VERTICAL) {
    vertical = MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = MAX_VERTICAL;
}
else if (m_vertical < -MAX_VERTICAL) {
    vertical = -MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = -MAX_VERTICAL;
}

glm::vec3 tmp = m_orientation;
tmp.y = fabs(tmp.y);

/* this check prevents a glm abort on the cross product of parallel vectors */
/* factor = -1.0f only extremely close to the south pole; the problem occurs well outside that region */
if (glm_sq_distance(tmp, glm::vec3(0.0f, 1.0f, 0.0f)) > 0.001f) {
    glm::vec3 rot = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), m_orientation));
    float angle = (acosf(m_orientation.y) * 180.0f) * PI_RECIPROCAL;  // degrees
    glm::quat t = glm::angleAxis(angle, rot);
    trans = glm::mat4_cast(t);
}
else if (m_orientation.y < 0.0f) {
    factor = -1.0f;
}

// spherical coordinates -> direction in the reference (y-up) frame
tmp = glm::vec3(cos(m_vertical) * sin(m_horizontal),
                sin(m_vertical),
                cos(m_vertical) * cos(m_horizontal)) * factor;

m_up = m_orientation;

// rotate the reference-frame direction into the current surface frame
m_direction = glm::vec3(trans * glm::vec4(tmp.x, tmp.y, tmp.z, 0.0f));

m_view = glm::lookAt(m_position, m_position + m_direction, m_up);
m_vp = m_perspective * m_view;

I also have a quaternion implementation, but it's a little more prone to glm aborts (anyone have an elegant solution for those, by the way?). The quaternion and angle-based versions behave identically.

glm::mat4 trans;
float factor = 1.0f;
m_horizontal += horizontal;
m_vertical += vertical;

while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}

if (m_vertical > MAX_VERTICAL) {
    vertical = MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = MAX_VERTICAL;
}
else if (m_vertical < -MAX_VERTICAL) {
    vertical = -MAX_VERTICAL - (m_vertical - vertical);
    m_vertical = -MAX_VERTICAL;
}

glm::quat t, quat;

glm::vec3 tmp = m_orientation;
tmp.y = fabs(tmp.y);

if (glm_sq_distance(tmp, glm::vec3(0.0f, 1.0f, 0.0f)) > 0.001f) {
    glm::vec3 axis = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), m_orientation));
    float angle = (acosf(m_orientation.y) * 180.0f) * PI_RECIPROCAL;  // degrees
    t = glm::angleAxis(angle, axis);
}
else if (m_orientation.y < 0.0f) {
    factor = -1.0f;
}

// yaw about world up, then pitch about local x
glm::quat rot = glm::angleAxis(m_horizontal * ONEEIGHTY_PI, glm::vec3(0.0f, 1.0f, 0.0f));
quat = rot * quat;

rot = glm::angleAxis(m_vertical * -ONEEIGHTY_PI, glm::vec3(1.0f, 0.0f, 0.0f));
quat = quat * rot;

t = t * quat;

trans = glm::mat4_cast(t);
m_direction = glm::vec3(trans[2]);

Thanks in advance for the help.


Help with Stencil Test and Deferred Shading [SOLVED]

02 February 2014 - 02:50 PM

I'm having real trouble getting my stencil test to work correctly with a basic deferred shading implementation. I'm aware there are other options, but I really need to get the stencil test working because I plan to use stencil shadow volumes. I've done pretty much this exact implementation before, and it worked fine, so I'm really at a loss for what the problem is.
 
In any case, the real problem is that the stencil test doesn't seem to properly take account of the first-pass depth buffer, even though the depth test is on during the stencil pass. In addition, the stencil result seems corrupted when the volume is behind the geometry. Three screenshots below, followed by stencil and depth buffer grabs.
 
Attached file: stencil test ex 1.png
In this first image, the light volume that I'm rendering to the stencil buffer and then again in the light pass is behind terrain geometry. So why is it visible? 
 
Attached file: stencil test ex 2.png
This second image looks alright from the other side, although you can see underneath the geometry a bit again.

 

Attached file: stencil test ex 3.png

The third image shows the output of the stencil pass, so clearly the geometry is OK.

 

Attached files: FrameStencil_0021_0099_Pre.png, FrameDepth_0021_0099_Pre.png
Sorry they're so small. It's too slow to capture with any larger framebuffer size.

 

Here's the rendering code:

// geometry pass
m_gbuffer->bindWrite();    // binds fbo
m_gbuffer->setDrawBuffers(0, m_num_drawbuffers);    // glDrawBuffers(num_buffers, buffers)

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glEnable(GL_CULL_FACE);

updateCameraInformation(m_camera_information);

world->geometryRender(m_camera_information);

glDepthMask(GL_FALSE);


// light pass
glDrawBuffer(GL_COLOR_ATTACHMENT0 + m_num_drawbuffers);    // same fbo, so same depth buffer, but a different texture for the light pass
m_gbuffer->bindToTextures();    // bind fbo textures for shaders

glEnable(GL_STENCIL_TEST);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// stencil pass
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

glEnable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);

glStencilFunc(GL_ALWAYS, 0, 0);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR, GL_KEEP);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR, GL_KEEP);

m_point_light.render();

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// light pass
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);

glDisable(GL_DEPTH_TEST);

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);

m_point_light.render();

glCullFace(GL_BACK);

glDisable(GL_STENCIL_TEST);
glDisable(GL_CULL_FACE);

m_ambient_light.render();

glDisable(GL_BLEND);

Thanks for your time. I appreciate any suggestions.

 

 

EDIT:

 

I swear, posting on these forums just makes me think differently and then I solve it easily. The glStencilOpSeparate ops need to be GL_INCR_WRAP and GL_DECR_WRAP respectively. The WRAP variants mean that decrementing at 0 wraps to 255, and incrementing at 255 wraps to 0; without WRAP, GL_DECR clamps at 0, so a decrement that lands before its matching increment is simply lost. Very useful, yet my previous code that worked did not use WRAP, so it must be somewhat platform dependent. Hope this can help someone.

glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);

Framebuffer depth test with depth and stencil buffer

29 July 2013 - 11:45 AM

I'm attempting to implement deferred shading. I've got a number of problems with it, but first and foremost, depth testing is not working right.

 

In my deferred shading I naturally create a depth buffer, and depth testing works fine with it. But if I create a combined depth and stencil buffer, depth testing just doesn't work. It's peculiar.

 

Buffer creation, depth component only, depth testing works:

 

glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_map, 0);
 
Buffer creation, depth and stencil components, depth testing doesn't work:
 
glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_map, 0);
 
This is literally the only change to my code: depth testing works with the first version and fails with the second. The stencil test is disabled and the depth test enabled in both cases. Any suggestions to fix this problem? I found a thread several years old, only somewhat related, suggesting this was an AMD driver problem, and indeed I have an AMD card. Hopefully I'm just doing something wrong.
 
Thanks.

Strange lighting issues; difference between SFML and GLFW

21 March 2013 - 02:47 PM

I recently got into OpenGL, trying to make a simple survival/exploration game just for fun. I'm using C++ with GLEW. I started off using SFML 2.0 for context creation, since it was pretty well documented and had the necessary features. However, I ran into some problems using SFML, and I switched to GLFW. At the same time I decided to organize my code and make it object oriented. I now have some issues getting some simple Phong lighting to work correctly. Using the exact same GLSL shader code, I receive completely different lighting results using SFML and GLFW. Since the project using GLFW has been reorganized, I thought I may have messed something up somewhere, but I just can't find anything. I've tried using a number of different lighting algorithms and variations. 

 

Here are screenshots comparing the same scene.

GLFW: Attached file: glfw.png

SFML: Attached file: sfml.png

 

Apart from the GLFW scene being significantly darker, the light just doesn't spread as much. There's also a weird pure white spotlight directly underneath the light. Finally, the SFML scene has hard edges, which is actually what I want for this game. With non-smoothed normals, the edges should always appear, right?

 

I attached all the source files for the two projects, including the shaders used. I also included the Code::Blocks project files in case you'd like to test the lighting yourself. I'd really appreciate any help. Thanks.

 

Edit 5:04: OK, I had the idea to use color information to display the normal values. Needless to say, for the GLFW version they are horribly wrong. It's clear that my program is passing the shader bad normal values. I'll need to look more closely at that aspect of my program.

 

Edit 5:23: The normal buffer was somehow not bound when I called glDrawArrays. The change that made the difference was calling glBindBuffer on the normal buffer just before glDrawArrays. Can someone clarify the order of operations for binding buffers? I've seen all sorts of different ways to do it in tutorials. I wasn't re-binding the uv and vertex buffers every frame, but they were obviously being passed to the shaders fine. So why was the normal buffer NOT bound?

