

GLSL Lighting

6 replies to this topic

#1 rafciok_6   Members   -  Reputation: 118

Posted 26 February 2014 - 10:14 PM

I have a problem with my per-vertex diffuse lighting implementation. I cannot find the problem; I've tried everything. Maybe someone can help.

Here is the code. First, here is how I calculate the normals for every point and store them in a VBO:
// Normals
glGenBuffers(1, &m_vbo_normals);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo_normals);
for (int i = 0; i < 24; i += 3)
{
    glm::vec3 normal = glm::normalize(glm::cross(
        glm::vec3(corners[indices[i + 1]]) - glm::vec3(corners[indices[i]]),
        glm::vec3(corners[indices[i + 2]]) - glm::vec3(corners[indices[i]])));

    normals[i]     = normal.x;
    normals[i + 1] = normal.y;
    normals[i + 2] = normal.z;
}
glBufferData(GL_ARRAY_BUFFER, 24 * sizeof(float), normals, GL_STATIC_DRAW);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(2);

 

Vertex.vert


 

#version 150 core

uniform mat4 modelview, projection;
uniform mat3 normalMatrix;
uniform vec3 lightPos;
uniform vec3 lambient;
uniform vec3 ldiffuse;

in vec4 position;
in vec3 normal;
in vec3 inColour;

out vec3 outColour;

void main()
{
    gl_Position = projection * modelview * position;
    vec3 ambient = inColour * lambient;

    // current position of the vertex
    vec3 pos = vec3(modelview * position);
    vec3 n = normalize(normalMatrix * normal);

    // vector pointing to the light position
    vec3 I = normalize(lightPos - pos);

    float dcont = max(0.0, dot(n, I));
    vec3 diffuse = dcont * inColour * ldiffuse;

    outColour = diffuse;
}
 

 

fragment.frag


 

#version 150 core

in vec3 outColour;

out vec3 outFrag;

void main()
{
    outFrag = outColour;
}
 

And here is how I calculate the normal matrix and send the uniform variables:


 

glTranslatef(0.0, 0.0, -2.0);
glRotatef(m_euler[0], 1.0, 0.0, 0.0);
glRotatef(m_euler[1], 0.0, 1.0, 0.0);
glRotatef(m_euler[2], 0.0, 0.0, 1.0);

glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
glGetFloatv(GL_PROJECTION_MATRIX, projection);

glUniformMatrix4fv(m_uniform_modelview, 1, GL_FALSE, modelview);
glUniformMatrix4fv(m_uniform_projection, 1, GL_FALSE, projection);
glm::mat3* MV = new glm::mat3(modelview[16]);
setUniform(m_shader, "normalMatrix", -glm::transpose(glm::inverse(*MV)));

glUniform3f(glGetUniformLocation(m_shader->GetProgramHandle(), "lambient"), 1.0, 1.0, 1.0);
glUniform3f(glGetUniformLocation(m_shader->GetProgramHandle(), "ldiffuse"), 1.0, 1.0, 1.0);
glUniform3f(glGetUniformLocation(m_shader->GetProgramHandle(), "lightPos"), 1.0, 1.0, 1.0);
glUniform3f(glGetUniformLocation(m_shader->GetProgramHandle(), "direction"), 1.0, 1.0, 0.0);

 

Thank you for your help.



#2 Sponji   Members   -  Reputation: 1007

Posted 27 February 2014 - 12:20 AM


glm::mat3* MV = new glm::mat3 (modelview[16]);
setUniform(m_shader, "normalMatrix", -glm::transpose(glm::inverse(*MV)));

 

Is there a reason you're using new there? You're also passing just one float to the mat3's constructor, and modelview[16] reads past the end of a 16-element array.

I would suggest doing something like glm::transpose(glm::inverse(glm::mat3(modelview))). Also, I think you should normalize your normals after the loop where you calculate them.

 

You can use &matrix[0][0] or glm::value_ptr(matrix) to access a glm matrix, you don't have to use your own float arrays. But if you really want to, you can convert a float array to mat3 or mat4 with make_mat3 and make_mat4. Just include glm/gtc/type_ptr.hpp to get access to those functions.

 

Edit. I meant that modelview is glm::mat4. Also, maybe a better example code:

glm::mat4 modelview_matrix; 
glGetFloatv(GL_MODELVIEW_MATRIX, &modelview_matrix[0][0]);
glm::mat3 normal_matrix = glm::transpose(glm::inverse(glm::mat3(modelview_matrix)));
const float *pointer = glm::value_ptr(normal_matrix);

Edited by Sponji, 27 February 2014 - 12:26 AM.


#3 Ashaman73   Crossbones+   -  Reputation: 5846

Posted 27 February 2014 - 01:09 AM


I have a problem with my per vertex diffuse lighting implementation.

Can you tell us what the problem is? A blank screen? A rendered cube, but with wrong lighting? A screenshot would help.



#4 rafciok_6   Members   -  Reputation: 118

Posted 27 February 2014 - 05:48 AM

 


I have a problem with my per vertex diffuse lighting implementation.

Can you tell us what the problem is? A blank screen? A rendered cube, but with wrong lighting? A screenshot would help.

 

I have a black screen with nothing displayed on it. It looks like I did something wrong with the matrices.

 

I think one of the problems was with the normal matrix I calculate. Sponji, thanks for the answer, it really helped.

I changed my normal matrix code according to the code example above, but something is still wrong.



#5 Irlan   Members   -  Reputation: 931

Posted 27 February 2014 - 06:48 AM

I can give you some advice. Things to check:

  • Uniform locations;
  • Lighting computations done in just one space;
  • Normals (you can test with glEnable(GL_CULL_FACE));
  • Creation order of the shaders.

It's much better to check opengl.org than to try a lot of random changes; it has the latest documentation too.


#6 Soxory   Members   -  Reputation: 110

Posted 27 February 2014 - 09:15 AM

Not sure if this is the problem, but I don't think you're calculating the normals on a per-vertex basis. Rather, it looks like you're calculating them on a per-triangle basis.

for (int i = 0; i < 24; i += 3) // using i += 3
{
    // Calculating the normal for the triangle starting at indices[i].
    // When do you calculate the normal for the vertex at indices[i+1]?
    // Maybe you wanted to store this normal for all the vertices in this triangle?
    // At i == 3, you're calculating the normal for another triangle (I think).
    glm::vec3 normal = glm::normalize(glm::cross(
        glm::vec3(corners[indices[i + 1]]) - glm::vec3(corners[indices[i]]),
        glm::vec3(corners[indices[i + 2]]) - glm::vec3(corners[indices[i]])));

    // At i == 3, you calculated a normal for the first vertex of the second triangle (I think),
    // but you're storing it as the normal of the second vertex of the first triangle (I think).
    // Is normals[i] the normal for corners[i]? You're calculating the normal for corners[indices[i]].
    // Since these are per-vertex normals, you have 8 corners to go with your 8 normals?
    // And the 24 indices make 8 triangles out of those 8 corners?
    // Just want to make sure the numbers are right. You're reusing i to iterate indices and normals.
    normals[i]     = normal.x;
    normals[i + 1] = normal.y;
    normals[i + 2] = normal.z;
}

You appear to be calculating the normal of a triangle.  You should store that normal for all 3 vertices of the triangle.

 

Be careful: some vertices may be used by more than one triangle. Zero the normals out at the start, add each triangle's normal to what's already there inside the loop, then re-normalize the normals in a second loop afterwards.


Edited by Soxory, 27 February 2014 - 10:39 AM.


#7 rafciok_6   Members   -  Reputation: 118

Posted 27 February 2014 - 03:16 PM

You're right. Thank you, the problem was with the normals. Now everything is fine.






