
Issue when passing normals to shaders

12 replies to this topic

#1 JackShannon   Members   -  Reputation: 492


Posted 20 November 2012 - 03:26 PM

[source lang="plain"]
#version 120
varying vec3 position;
varying vec3 normal;

void main()
{
    position = vec3(gl_ModelViewMatrix * gl_Vertex); // get the position of the vertex after translation, rotation, scaling
    normal = gl_NormalMatrix * gl_Normal;            // get the normal direction, after translation, rotation, scaling
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
[/source]

[source lang="plain"]
#version 120
varying vec3 position;
varying vec3 normal;
uniform vec3 lightColor;
uniform vec3 surfaceColor;
uniform float ambientScale;

void main()
{
    vec3 lightAmbient = lightColor * ambientScale;
    vec3 surfaceAmbient = surfaceColor;
    vec3 ambient = lightAmbient * surfaceAmbient;

    //vec3 lightDirection = normalize(position); // ***********
    //float n_dot_1 = max(0.0, dot(normalize(normal), position));

    vec3 lightDiffuse = lightColor;
    vec3 surfaceDiffuse = surfaceColor;
    //vec3 diffuse = (surfaceDiffuse * lightDiffuse) * n_dot_1;

    //gl_FragColor = vec4(ambient + diffuse, 1.0);
    gl_FragColor = vec4(ambient, 1.0);
}
[/source]

*********** When this line is commented out, everything runs fine and the fragment colour is correct:
[screenshot: object shaded with the correct fragment colour]

However, if that line is uncommented, this is what happens:
[screenshot: object rendered solid white]

Why is this? Is it because my normals VBO passing is wrong, or because of the way I've written the shader itself? It's almost as if the varying variable is causing the fragment shader to crash and each fragment is being defaulted to (1.0, 1.0, 1.0, 1.0). The following code is my implementation of the VBOs.

[source lang="cpp"]
#ifndef MESH_H
#define MESH_H

#include <string>
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include <cml/cml.h>
#include <GL/glfw.h>

typedef cml::vector3f vec3f;

class Mesh
{
    std::vector<vec3f> vertexList;
    std::vector<vec3f> normalList;
    std::vector<GLuint> indexList;
    std::vector<GLuint> normalIndexList;
    GLuint vbo[3];

public:
    Mesh(const std::string& fileName);
    void init();
    void display();
};

#endif
[/source]

[source lang="cpp"]
#include "Mesh.h"

Mesh::Mesh(const std::string& fileName)
{
    std::string s;
    // std::ifstream file(fileName);
    std::ifstream file("untitled.obj");
    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream iss(line);
        std::string result;
        if (std::getline(iss, result, ' '))
        {
            if (result == "v")
            {
                float f;
                vertexList.push_back(vec3f(0, 0, 0));
                for (int i = 0; i < 3; i++)
                {
                    iss >> f;
                    vertexList.back()[i] = f;
                }
            }
            else if (result == "vn")
            {
                float f;
                normalList.push_back(vec3f(0, 0, 0));
                for (int i = 0; i < 3; i++)
                {
                    iss >> f;
                    normalList.back()[i] = f;
                }
            }
            else if (result == "f")
            {
                while (std::getline(iss, s, ' '))
                {
                    std::istringstream indexBlock(s);
                    for (int i = 0; i < 3; i++)
                    {
                        std::string intString;
                        if (std::getline(indexBlock, intString, '/'))
                        {
                            std::istringstream sstream(intString);
                            int index = -1;
                            sstream >> index;
                            if (!(index == -1))
                            {
                                if (i == 0)
                                {
                                    indexList.push_back(index - 1);
                                }
                                else if (i == 1)
                                {
                                }
                                else if (i == 2)
                                {
                                    normalIndexList.push_back(index - 1);
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    std::cout << "Loaded " << fileName << std::endl;
}

void Mesh::init()
{
    GLfloat tmp_normals[normalList.size()][3];
    for (int c = 0; c < indexList.size(); c++)
    {
        tmp_normals[indexList.at(c)][0] = normalList.at(normalIndexList.at(c))[0];
        tmp_normals[indexList.at(c)][1] = normalList.at(normalIndexList.at(c))[1];
        tmp_normals[indexList.at(c)][2] = normalList.at(normalIndexList.at(c))[2];
        std::cout << normalList.at(normalIndexList.at(c))[0] << " ";
    }

    glGenBuffers(3, vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // vertices
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * vertexList.size(), (const GLvoid*)&vertexList.front(), GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); // normals
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * normalList.size(), (const GLvoid*)&tmp_normals[0], GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[2]); // indices
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexList.size() * sizeof(GLuint), (const GLvoid*)&indexList.front(), GL_STATIC_DRAW);
}

void Mesh::display()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // vertices
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); // normals
    glNormalPointer(GL_FLOAT, 0, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[2]); // indices
    glDrawElements(GL_TRIANGLES, indexList.size(), GL_UNSIGNED_INT, 0);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
}
[/source]

Thank you.
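[Editor's note: the face-parsing loop in the Mesh constructor above can be exercised in isolation. Below is a minimal sketch of the same '/'-splitting logic; the FaceIndices struct and parseFaceToken name are illustrative, not part of the original code.]

[source lang="cpp"]
#include <cassert>
#include <sstream>
#include <string>

// Split one OBJ face token ("v", "v/vt", "v//vn", or "v/vt/vn") into its
// 1-based indices; absent slots stay -1. Mirrors the '/' splitting in the
// Mesh constructor above.
struct FaceIndices { int v = -1, vt = -1, vn = -1; };

FaceIndices parseFaceToken(const std::string& token)
{
    FaceIndices out;
    std::istringstream indexBlock(token);
    std::string part;
    int* slots[3] = { &out.v, &out.vt, &out.vn };
    for (int i = 0; i < 3 && std::getline(indexBlock, part, '/'); i++)
    {
        std::istringstream sstream(part);
        int index;
        if (sstream >> index)   // an empty part ("//") leaves the slot at -1
            *slots[i] = index;
    }
    return out;
}

int main()
{
    FaceIndices a = parseFaceToken("3//7"); // vertex 3, normal 7, no texcoord
    assert(a.v == 3 && a.vt == -1 && a.vn == 7);
    FaceIndices b = parseFaceToken("1/2/3");
    assert(b.v == 1 && b.vt == 2 && b.vn == 3);
    return 0;
}
[/source]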


#2 NumberXaero   Prime Members   -  Reputation: 1936


Posted 20 November 2012 - 04:46 PM

Not sure about this line "GLfloat tmp_normals[normalList.size()][3];". Does that even compile? normalList.size() is not a compile-time constant and it's not a dynamically allocated array, so...
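[Editor's note: a sketch of the alternative being hinted at here, with illustrative names and the GL upload omitted: size a std::vector at runtime and hand its .data() pointer to glBufferData instead of using a variable-length array, which is a compiler extension rather than standard C++. A plain std::array stands in for cml::vector3f.]

[source lang="cpp"]
#include <array>
#include <cassert>
#include <vector>

// Stand-in for cml::vector3f; the real type isn't needed for this sketch.
using vec3f = std::array<float, 3>;

// Re-index normals so each vertex index gets its matching normal, using a
// runtime-sized std::vector. out.data() is what would then be passed to
// glBufferData (GL calls omitted here).
std::vector<vec3f> buildVertexNormals(std::size_t vertexCount,
                                      const std::vector<vec3f>& normalList,
                                      const std::vector<unsigned>& indexList,
                                      const std::vector<unsigned>& normalIndexList)
{
    std::vector<vec3f> out(vertexCount, vec3f{0.0f, 0.0f, 0.0f});
    for (std::size_t c = 0; c < indexList.size(); c++)
        out.at(indexList.at(c)) = normalList.at(normalIndexList.at(c));
    return out;
}

int main()
{
    // Normals stored in a different order than the vertices reference them.
    std::vector<vec3f> normals = { {0, 0, 1}, {0, 1, 0} };
    std::vector<unsigned> idx       = { 0, 1, 2 };
    std::vector<unsigned> normalIdx = { 1, 1, 0 };
    std::vector<vec3f> staged = buildVertexNormals(3, normals, idx, normalIdx);
    assert((staged[0] == vec3f{0, 1, 0}));
    assert((staged[2] == vec3f{0, 0, 1}));
    return 0;
}
[/source]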

Try displaying your normal to debug whether it looks correct: gl_FragColor = vec4(normal, 1.0);

Edited by NumberXaero, 20 November 2012 - 04:47 PM.

#3 JackShannon   Members   -  Reputation: 492


Posted 20 November 2012 - 05:02 PM

Thanks for the quick reply NumberXaero.

Yes, it compiles. Thinking about it now I'm not sure why it does, but the couts from line 64 are correct.

Just tried as you suggested and I got the white object again. It seems it's just the varying variables not being passed properly. Very strange.

#4 NumberXaero   Prime Members   -  Reputation: 1936


Posted 20 November 2012 - 07:21 PM

Are you running an AMD card by any chance?

#5 JackShannon   Members   -  Reputation: 492


Posted 20 November 2012 - 07:26 PM

Nope, I'm running a 'NVIDIA GeForce GT 330M' on a 2010 Macbook Pro.

Edited by JackShannon, 20 November 2012 - 07:27 PM.

#6 NumberXaero   Prime Members   -  Reputation: 1936


Posted 20 November 2012 - 07:30 PM

I ask because under AMD I've had to do this with GLSL 120 to get the correct results; the built-in gl_NormalMatrix was just wrong for whatever reason. I don't think they ever did fix it; one of my vertex shaders still uses this:

[source lang="plain"]
vec3 col0 = gl_ModelViewMatrixInverseTranspose[0].xyz;
vec3 col1 = gl_ModelViewMatrixInverseTranspose[1].xyz;
vec3 col2 = gl_ModelViewMatrixInverseTranspose[2].xyz;
mat3 nm = mat3(col0, col1, col2); // build the normal matrix and bypass gl_NormalMatrix
[/source]
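[Editor's note: for context, gl_NormalMatrix is defined as the inverse-transpose of the upper-left 3x3 of the model-view matrix. Below is a standalone C++ sketch of that computation, with no GL and made-up matrix values, checked against a non-uniform scale where the normal matrix should come out as diag(1/sx, 1/sy, 1/sz).]

[source lang="cpp"]
#include <array>
#include <cassert>
#include <cmath>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>; // row-major 3x3

// Inverse-transpose via the adjugate: inv(M)^T = cofactor(M) / det(M).
Mat3 inverseTranspose(const Mat3& m)
{
    double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
               - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
               + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    Mat3 r{};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
        {
            // Cyclic indices give the signed cofactor C(i,j) directly.
            int r0 = (i + 1) % 3, r1 = (i + 2) % 3;
            int c0 = (j + 1) % 3, c1 = (j + 2) % 3;
            r[i][j] = (m[r0][c0] * m[r1][c1] - m[r0][c1] * m[r1][c0]) / det;
        }
    return r;
}

int main()
{
    // Model-view upper-left 3x3 for a non-uniform scale of (2, 4, 1).
    Mat3 mv = {{{2, 0, 0}, {0, 4, 0}, {0, 0, 1}}};
    Mat3 nm = inverseTranspose(mv);
    // For a diagonal scale the normal matrix is diag(1/sx, 1/sy, 1/sz).
    assert(std::fabs(nm[0][0] - 0.5) < 1e-9);
    assert(std::fabs(nm[1][1] - 0.25) < 1e-9);
    assert(std::fabs(nm[2][2] - 1.0) < 1e-9);
    std::printf("nm diag: %g %g %g\n", nm[0][0], nm[1][1], nm[2][2]);
    return 0;
}
[/source]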

How are you calculating your lighting? Was it intended to be done in world space or view space, or are you just trying to make sure everything is set up correctly for now?

Edited by NumberXaero, 20 November 2012 - 07:38 PM.

#7 JackShannon   Members   -  Reputation: 492


Posted 20 November 2012 - 07:36 PM

Hmm, I just tried substituting that and I'm still getting the same issue.

After playing with it some more, I've narrowed down an exact description of my issue:

Any 'varying' variable read by the fragment shader causes all the fragments to be white, even if it isn't used to modify gl_FragColor at all!

#8 NumberXaero   Prime Members   -  Reputation: 1936


Posted 20 November 2012 - 07:47 PM

So the normal data being passed through the buffer was read in from the file correctly?
Let's say you did something like
"varying vec3 color; color = vec3(0.0, 1.0, 0.0);" in the vertex shader and "gl_FragColor = vec4(color, 1.0);" in the fragment shader; you wouldn't get green?

#9 JackShannon   Members   -  Reputation: 492


Posted 20 November 2012 - 07:59 PM

No, I get white.

It's so strange, as soon as a varying is used it is always white.

I can do "gl_FragColor=vec4(0.0, 1.0, 0.0, 1.0);" and it's green no problem.

Should I change to a later glsl version?

E: and yes as far as I'm aware the normal data is fine.

E: and to answer your earlier question about lighting I'm just trying to get it set up correctly before worrying about specifics, I haven't used per fragment lighting before.

Edited by JackShannon, 20 November 2012 - 08:01 PM.

#10 NumberXaero   Prime Members   -  Reputation: 1936


Posted 20 November 2012 - 09:16 PM

Did you make any changes to the tmp_normals code section? Because &tmp_normals[0] is being passed to glBufferData; even if normalList is correct, it might not be after being copied to that funky tmp_normals array.

#11 JackShannon   Members   -  Reputation: 492


Posted 21 November 2012 - 01:18 PM

It seems it doesn't matter whether my normals are being passed at the moment :( I am unable to pass any 'varying' variable from the vertex shader to the fragment shader. I have simplified the problem as much as possible, down to just a triangle with no normals:

[source lang="cpp"]
#ifndef MAIN_H
#define MAIN_H

#include <iostream>
#include <GL/glew.h>
#include <GL/glfw.h>
#include <stdlib.h>
#include <cml/cml.h>
#include "Shader.h"

typedef cml::vector3f vec3f;

#endif
[/source]

[source lang="cpp"]
#include "main.h"

Shader* myShader;

void initGL()
{
    glClearColor(0, 0, 0, 1);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(50, 640.0 / 480.0, 1, 1000);
    glMatrixMode(GL_MODELVIEW);
    glEnable(GL_DEPTH_TEST);
    myShader = new Shader("shader.vert", "shader.frag");
}

void display()
{
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(myShader->program);
    glBegin(GL_TRIANGLES);
    glVertex3f(0, 1, -4);
    glVertex3f(-1, -1, -4);
    glVertex3f(1, -1, -4);
    glEnd();
}

int main()
{
    int running = GL_TRUE;
    if (!glfwInit())
    {
        exit(EXIT_FAILURE);
    }
    if (!glfwOpenWindow(640, 480, 0, 0, 0, 0, 0, 0, GLFW_WINDOW))
    {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        std::cout << "Error: " << glewGetErrorString(err) << std::endl;
    }
    initGL();
    while (running)
    {
        display();
        glfwSwapBuffers();
        running = !glfwGetKey(GLFW_KEY_ESC) && glfwGetWindowParam(GLFW_OPENED);
    }
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
[/source]
[source lang="cpp"]
#ifndef SHADER_H
#define SHADER_H

#include "main.h"
#include <vector>
#include <string>
#include <fstream>

class Shader
{
    void loadFile(const char* fileName, std::string& str);
    GLuint load(std::string& shaderSource, GLuint shaderType);

public:
    GLuint vs;
    GLuint fs;
    GLuint program;
    Shader(const char* vs, const char* fs);
    ~Shader();
};

#endif
[/source]
[source lang="cpp"]
#include "Shader.h"

void Shader::loadFile(const char* fileName, std::string& str)
{
    std::ifstream in(fileName);
    if (!in.is_open())
    {
        std::cout << "File " << fileName << " cannot be opened\n";
        return;
    }
    char line[300];
    while (!in.eof())
    {
        in.getline(line, 300);
        str += line;
        str += '\n';
    }
}

GLuint Shader::load(std::string& shaderSource, GLuint shaderType)
{
    GLuint id = glCreateShader(shaderType);
    const char* csource = shaderSource.c_str();
    glShaderSource(id, 1, &csource, NULL);
    glCompileShader(id);
    char error[1000];
    glGetShaderInfoLog(id, 1000, NULL, error);
    std::cout << "Compile status: \n" << error << std::endl;
    return id;
}

Shader::Shader(const char* vn, const char* fn)
{
    std::string source;
    loadFile(vn, source);
    vs = load(source, GL_VERTEX_SHADER);
    source = "";
    loadFile(fn, source);
    vs = load(source, GL_FRAGMENT_SHADER);
    program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    glUseProgram(program);
}

Shader::~Shader()
{
    glDetachShader(program, vs);
    glDetachShader(program, fs);
    glDeleteShader(vs);
    glDeleteShader(fs);
    glDeleteProgram(program);
}
[/source]
[source lang="cpp"]
#version 120
varying vec4 color;

void main()
{
    color = vec4(0.0, 1.0, 0.0, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
[/source]
[source lang="cpp"]
#version 120
varying vec4 color;

void main()
{
    //gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // Triangle is red
    gl_FragColor = vec4(color); // Triangle is white. ('color' not being passed from vertex shader???)
}
[/source]
The commented code in shader.frag explains what is happening. Why isn't the varying vector being passed???
Thank you and sorry for asking the wrong question before.
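[Editor's note: one aside on the Shader code above. The loadFile loop keys on eof(), which can append a spurious trailing blank line after the last successful read. A common alternative, shown as a sketch rather than the original code, reads the whole file into the string via stream iterators.]

[source lang="cpp"]
#include <cassert>
#include <fstream>
#include <iterator>
#include <string>

// Read an entire file into a string in one shot instead of looping on
// eof(). Returns an empty string if the file cannot be opened.
std::string loadFile(const char* fileName)
{
    std::ifstream in(fileName);
    if (!in.is_open())
        return std::string();
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}

int main()
{
    // Write a small shader-like file, then read it back verbatim.
    {
        std::ofstream out("tmp_shader.vert");
        out << "#version 120\nvoid main() {}\n";
    }
    std::string src = loadFile("tmp_shader.vert");
    assert(src == "#version 120\nvoid main() {}\n");
    assert(loadFile("no_such_file.vert").empty());
    return 0;
}
[/source]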

#12 NumberXaero   Prime Members   -  Reputation: 1936


Posted 21 November 2012 - 03:12 PM

You're using the GLFW library; what GL context does it create, a compatibility context or a forward-compatible one? If it's a forward context, GLSL 120 shaders might not work, although if that were the case I think you'd be getting more errors than simply the varying problem; I may be wrong. From the GLFW site FAQ:

2.16 - Can any of the parameters to glfwOpenWindow be zero?

Yes. In fact, all parameters except the window mode can be zero, i.e. this is perfectly legal:

glfwOpenWindow(0, 0, 0, 0, 0, 0, 0, 0, GLFW_WINDOW);

Any parameter that is zero gets its desired value chosen by GLFW. Then, all parameters except the window mode are matched as closely as possible to what is available on the system. However, only the following parameters and hints are required to match exactly:

  • The window mode (i.e. the last parameter to glfwOpenWindow)
  • The GLFW_OPENGL_PROFILE hint, if set to a non-zero value

To find out the actual properties of the window and OpenGL context, use the glfwGetWindowParam function after the window has been opened. To see what you get on your machine using only default values, you can use the defaults test in the GLFW source distribution.

#13 JackShannon   Members   -  Reputation: 492


Posted 21 November 2012 - 07:41 PM

NumberXaero, thank you so much for looking into all this for me.

I've now taken the time to completely rewrite and relearn how to implement everything, and am using attributes instead. Most importantly, I've learnt how to properly debug. I'm up and running; apologies for wasting your time!
