
rocklobster

Posted 03 December 2012 - 08:46 AM

Hi guys,

I've got a problem where the shading of my objects changes when my camera moves. The obvious explanation would be that I'm not transforming my light into eye space before sending it to the shader, but I am doing that. Another thing to note: when I leave my light AND my normals in world space, it works fine.


glm::mat4 modelView = m_camera->GetView() * glm::mat4(1.0);

m_graphics->SetUniform(id, "u_ModelViewMatrix", 1, false, modelView);
m_graphics->SetUniform(id, "u_NormalMatrix", 1, false, glm::mat3(glm::vec3(modelView[0]), glm::vec3(modelView[1]), glm::vec3(modelView[2])));
m_graphics->SetUniform(id, "u_MVP", 1, false, m_graphics->GetProjMatrix() * modelView);

m_graphics->SetUniform(id, "Light.Position", modelView * glm::vec4(-100.0f, 50.0f, 30.0f, 1.0f));

This is how I'm calculating my view matrix:

void ViewFrustum::Update()
{
  m_direction = glm::vec3(cos(m_yaw) * sin(m_pitch),
			  sin(m_yaw),
			  cos(m_yaw) * cos(m_pitch));
  m_view = glm::lookAt(m_position, m_position + m_direction, glm::vec3(0, 1, 0));
  m_right = glm::cross(m_up, m_direction);
}

The camera works fine and I move around the world with no troubles.

Here's my shader code. It's actually just storing the values in a GBuffer for deferred rendering.

Vertex
#version 400

layout (location = 0) in vec3 in_Position;
layout (location = 1) in vec3 in_Normal;
layout (location = 2) in vec3 in_TexCoord;

out vec3 Position;
out vec3 Normal;
out vec3 TexCoord;

uniform mat4 u_ModelViewMatrix;
uniform mat3 u_NormalMatrix;
uniform mat4 u_MVP;

void main()
{
	Position = in_Position;
	Normal =  normalize(u_NormalMatrix * in_Normal);
	TexCoord = in_TexCoord;

	gl_Position = u_MVP * vec4(Position, 1.0);
}

In the second pass I use the Light.Position variable:

vec3 shadePixel(vec3 pos, vec3 norm, vec3 diff)
{
	if (diff == vec3(1.0, 1.0, 1.0)) return vec3(1.0, 1.0, 1.0);

	vec3 s = normalize(vec3(Light.Position) - pos);
	float sDotN = max(dot(s, norm), 0.0);
	vec3 diffuse = Light.Intensity * diff * sDotN;

	return diffuse;
}
// get the values from GBuffer
vec4 diffusePass()
{
    vec3 pos = vec3(texture2D(positionTexture, UV));
    vec3 norm = vec3(texture2D(normalTexture, UV));
    vec3 diff = vec3(texture2D(diffuseTexture, UV));

    return vec4(shadePixel(pos, norm, diff), 1.0);
}

If I don't multiply the normal by u_NormalMatrix and I don't multiply the light position by the view matrix, it works fine. I can't spot why it won't work in view space.

Thanks for the help in advance.
