Prot

OpenGL Triangle rendered white only.


Hi folks,

I am following in2gpu's tutorials on modern OpenGL. The goal is to render a triangle. Everything has worked fine so far, except for the color: the author's triangle has three colors defined, one per vertex, which makes the fragment shader render the triangle with a color gradient. Although I strictly followed the tutorial, my triangle is rendered in flat white.

 

The project now contains several classes which are responsible for freeglut and GLEW initialization. I also have a vertex and a fragment shader, which look like this:

 

Vertex_Shader.glsl:

#version 330 core
layout(location = 0) in vec3 in_position;
layout(location = 1) in vec4 in_color;

out vec4 color;

void main(){

	color = in_color;
	gl_Position = vec4(in_position, 1);
}

Fragment_Shader.glsl:

#version 330 core

layout(location = 0) out vec4 out_color;

in vec4 color;

void main(){
 
 	out_color = color;
}

The first thing to mention here is that I am using version 330 while the author uses version 450, and I am not sure whether that is crucial here. There might also be another source of the problem: I am using Visual Studio 2015, which does not seem to know about .glsl files. I created the shaders by adding a new item, where I chose a Pixel Shader File (.hlsl) and renamed it to .glsl. This raised the following error:

 

 

The "ConfigurationCustomBuildTool" rule is missing the "ShaderType" property.

 

I am able to build and run the project without errors, though. Here is the Triangle class itself:

 

Triangle.cpp:

#include "Triangle.h"

Triangle::Triangle()
{
}

Triangle::~Triangle()
{
	//is going to be deleted in Models.cpp (inheritance)
}

void Triangle::Create()
{
	GLuint vao;
	GLuint vbo;

	glGenVertexArrays(1,&vao);
	glBindVertexArray(vao);

	std::vector<VertexFormat> vertices;
	vertices.push_back(VertexFormat(glm::vec3(0.25,-0.25,0.0),
		glm::vec4(1,0,0,1)));
	vertices.push_back(VertexFormat(glm::vec3(-0.25,-0.25,0.0),
		glm::vec4(0,1,0,1)));
	vertices.push_back(VertexFormat(glm::vec3(0.25,0.25,0.0),
		glm::vec4(0,0,1,1)));

	glGenBuffers(1,&vbo);
	glBindBuffer(GL_ARRAY_BUFFER,vbo);
	glBufferData(GL_ARRAY_BUFFER,sizeof(VertexFormat) * 3, &vertices[0],GL_STATIC_DRAW);
	glEnableVertexAttribArray(0);
	glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0);
	glEnableVertexAttribArray(1);
	//you can use offsetof to get the offset of an attribute
	glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)(offsetof(VertexFormat,VertexFormat::color)));
	glBindVertexArray(0);
	//here we assign the values
	this->vao = vao;
	this->vbos.push_back(vbo);
}

void Triangle::Update()
{
	//Triangle does not have to be updated
}

void Triangle::Draw()
{
	glUseProgram(program);
	glBindVertexArray(vao);
	glDrawArrays(GL_TRIANGLES,0,3);

}

Although it seems that I followed the tutorial all the way, my triangle is still rendered white only. There are of course a lot more classes, but I guess I should not post the entire project here; I can always post additional information if it is needed. In the end it seems to me that something is wrong with the fragment shader. I also described my problem to the author. He could not have a look into my code/project, but he suspected that there is something wrong with my attributes.
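
One idea I had but have not tried yet (this is just my own sketch, not something from the tutorial) is to ask the linked program for its attribute locations, to check whether in_position and in_color are actually active at locations 0 and 1:

// e.g. at the top of Triangle::Draw(); needs <iostream>
// glGetAttribLocation returns -1 for attributes that are inactive or whose name does not match
GLint position_location = glGetAttribLocation(program, "in_position");
GLint color_location = glGetAttribLocation(program, "in_color");
std::cout << "in_position: " << position_location << ", in_color: " << color_location << std::endl;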

 

I am very new to both C++ and OpenGL, so it is very difficult for me to debug this (if that is even possible with shaders).

 

Glad for any help and thanks in advance!

 

 


I am using Visual Studio 2015, which does not seem to know about .glsl files. I created the shaders by adding a new item, where I chose a Pixel Shader File (.hlsl) and renamed it to .glsl. This raised the following error:

 

 

The "ConfigurationCustomBuildTool" rule is missing the "ShaderType" property.

 

For a start: Direct3D HLSL and OpenGL GLSL are of course two different things, so you cannot just treat one as the other.


But it does seem to compile the shaders without printing any failure information to the console. FYI, here is the Shader_Manager class:

#include "Shader_Manager.h"
#include <iostream>
#include <fstream>
#include <vector>

std::map<std::string, GLuint> Shader_Manager::programs;



Shader_Manager::Shader_Manager(void) {}

Shader_Manager::~Shader_Manager(void) 
{
	std::map<std::string, GLuint>::iterator i;
	for (i = programs.begin(); i != programs.end(); ++i)
	{
		GLuint pr = i->second;
		glDeleteProgram(pr);
	}
	programs.clear();
}

const GLuint Shader_Manager::GetShader(const std::string & shaderName)
{
	return programs.at(shaderName);
}

//reads and returns the contents of a file
std::string Shader_Manager::ReadShader(const std::string& filename)
{
	std::string shaderCode;
	std::ifstream file(filename,std::ios::in);

	if (!file.good())
	{
		std::cout << "Can't read file" << filename.c_str() << std::endl;
		std::terminate();
	}

	file.seekg(0,std::ios::end);
	shaderCode.resize((unsigned int)file.tellg());
	file.seekg(0,std::ios::beg);
	file.read(&shaderCode[0],shaderCode.size());
	file.close();
	return shaderCode;
}

//creates and compiles a shader (vertex or fragment)
GLuint Shader_Manager::CreateShader(GLenum shaderType,const std::string& source,const std::string& shaderName)
{
	int compile_result = 0;

	GLuint shader = glCreateShader(shaderType);
	const char *shader_code_ptr = source.c_str();
	const int shader_code_size = source.size();

	glShaderSource(shader,1,&shader_code_ptr,&shader_code_size);
	glCompileShader(shader);
	glGetShaderiv(shader,GL_COMPILE_STATUS,&compile_result);

	//check for errors
	if (compile_result == GL_FALSE)
	{
		int info_log_length = 0;
		glGetShaderiv(shader,GL_INFO_LOG_LENGTH,&info_log_length);
		std::vector<char> shader_log(info_log_length);
		glGetShaderInfoLog(shader,info_log_length,NULL,&shader_log[0]);
		std::cout << "ERROR compiling shader: " << shaderName.c_str() << std::endl << &shader_log[0] << std::endl;
		return 0;
	}

	return shader;
}

//uses ReadShader to load both shader sources, creates and links the two shaders into a program, and stores the program in the map so it can be used in the rendering loop
void Shader_Manager::CreateProgramm(const std::string& shaderName,
									  const std::string& vertexShaderFilename,
									  const std::string& fragmentShaderFilename)
{
	//read the shader files and save the code
	std::string vertex_shader_code = ReadShader(vertexShaderFilename);
	std::string fragment_shader_code = ReadShader(fragmentShaderFilename);

	GLuint vertex_shader = CreateShader(GL_VERTEX_SHADER,vertex_shader_code,"vertex shader");
	GLuint fragment_shader = CreateShader(GL_FRAGMENT_SHADER,fragment_shader_code,"fragment shader");

	int link_result = 0;
	//create the program handle, attach the shaders and link it
	GLuint program = glCreateProgram();
	glAttachShader(program,vertex_shader);
	glAttachShader(program,fragment_shader);

	glLinkProgram(program);
	glGetProgramiv(program,GL_LINK_STATUS,&link_result);
	//check for link errors
	if (link_result == GL_FALSE)
	{
		int info_log_length = 0;
		glGetProgramiv(program,GL_INFO_LOG_LENGTH,&info_log_length);
		std::vector<char> program_log(info_log_length);
		glGetProgramInfoLog(program,info_log_length,NULL,&program_log[0]);
		std::cout << "Shader Loader: LINK ERROR" << std::endl << &program_log[0] << std::endl;
		return;
	}

	programs[shaderName] = program;
}

I was assuming that if the shaders failed to compile I would see it in the console.

The glm::vec3 and glm::vec4 constructors will just turn those integers into floats, so that is not the issue.

I can't see anything wrong so far. Try changing the fragment shader to output a constant color, e.g. out_color = vec4(1, 0, 0, 1). If it correctly outputs red, you know everything is OK except the second vertex attribute.


Nope, unfortunately this didn't change the behaviour.

 

I don't even know how to approach debugging shader output; any hints? Also, what should the Item Type of a .glsl shader be in Visual Studio 2013? Currently my shader files are set to Custom Build Tool.

Edited by Prot


Nope, unfortunately this didn't change the behaviour.
 
I don't even know how to approach debugging shader output; any hints? Also, what should the Item Type of a .glsl shader be in Visual Studio 2013? Currently my shader files are set to Custom Build Tool.


Check whether glGetError() is returning zero.

For debugging OpenGL you can use CodeXL (http://developer.amd.com/tools-and-sdks/opencl-zone/codexl/), which gives you an in-depth look at the resources your program has created on the GPU, so you can try walking through what's wrong. (Note: this is an AMD product and I'm not sure whether it works on NVidia, but it's apparently the successor to gDEBugger.)


Regarding layout(location = 0) out vec4 out_color; in the fragment shader:

 

I don't understand why you are applying any layout qualifiers in the fragment shader; can't you just define out vec4 out_color;?

 

 

The second thing might be this:

 

 

glEnableVertexAttribArray(0);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0);
glEnableVertexAttribArray(1);
//you can use offsetof to get the offset of an attribute
glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)(offsetof(VertexFormat,VertexFormat::color)));

 

 

There are a few things here and in other parts of the code:

 

First of all, you should define the vertex attrib pointers first, and then enable them.

 

 

In glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0); the (void*)0 looks like a fail; use this instead:

(void*)(offsetof(VertexFormat,VertexFormat::position)));

 

 

 

Next:



GLuint vao;
GLuint vbo;

glGenVertexArrays(1,&vao);
glBindVertexArray(vao);

this->vao = vao;
this->vbos.push_back(vbo);

This is another potential problem: sharing OpenGL handles between structures. You shouldn't do that unless you are sure that this->vao = vao; really ends up referring to the same object. In my opinion you should make GLuint vao; GLuint vbo; plain members instead of locals, and somehow I have a feeling that the objects are destroyed after the function exits.

 

 

 

EDIT: Yes, SlicerChubu is right, you should additionally call glGetError after every OpenGL call and log it somehow:

//ALOG is just my logging function
void ShowGLERROR()
{
    GLenum res = glGetError();
    if (res == GL_INVALID_ENUM) ALOG("GL_INVALID_ENUM");
    if (res == GL_INVALID_VALUE) ALOG("GL_INVALID_VALUE");
    if (res == GL_INVALID_OPERATION) ALOG("GL_INVALID_OPERATION");
    if (res == GL_OUT_OF_MEMORY) ALOG("GL_OUT_OF_MEMORY");
}

For example:

ALOG("Creating vertex array object");
glGenVertexArrays(1,&vao);
ShowGLERROR();

ALOG("binding vao");
glBindVertexArray(vao);
ShowGLERROR();

Edited by WiredCat


Regarding layout(location = 0) out vec4 out_color; in the fragment shader:

 

I don't understand why you are applying any layout qualifiers in the fragment shader; can't you just define out vec4 out_color;?

His code is fine. But you don't have to use the layout qualifier if you only render to one render target, so your version would be fine, too.

 

 

The second thing might be this:

 

 

glEnableVertexAttribArray(0);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0);
glEnableVertexAttribArray(1);
//you can use offsetof to get the offset of an attribute
glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)(offsetof(VertexFormat,VertexFormat::color)));

 

 

There are a few things here and in other parts of the code:

 

First of all, you should define the vertex attrib pointers first, and then enable them.

 

 

In glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0); the (void*)0 looks like a fail; use this instead:

(void*)(offsetof(VertexFormat,VertexFormat::position)));

glEnableVertexAttribArray / glDisableVertexAttribArray only set states that get used when you actually use one of the draw commands. So it does not matter in what order he enables them here.

 

And 0 will be 0. You don't need a fancy offsetof to "find out" that the first variable in a struct is at offset 0.
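
If you want to state that explicitly anyway, a compile-time check is enough, e.g. (my sketch, assuming the members are called position and color as in the snippets above):

#include <cstddef> //for offsetof
static_assert(offsetof(VertexFormat, position) == 0, "position is expected to be the first member of VertexFormat");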

 

 

EDIT: Yes, SlicerChubu is right, you should additionally call glGetError after every OpenGL call and log it somehow:
//ALOG is just my logging function
void ShowGLERROR()
{
    GLenum res = glGetError();
    if (res == GL_INVALID_ENUM) ALOG("GL_INVALID_ENUM");
    if (res == GL_INVALID_VALUE) ALOG("GL_INVALID_VALUE");
    if (res == GL_INVALID_OPERATION) ALOG("GL_INVALID_OPERATION");
    if (res == GL_OUT_OF_MEMORY) ALOG("GL_OUT_OF_MEMORY");
}

For example:

ALOG("Creating vertex array object");
glGenVertexArrays(1,&vao);
ShowGLERROR();

ALOG("binding vao");
glBindVertexArray(vao);
ShowGLERROR();

 

glGetError() is actually really painful to use. You also have to make sure there are no errors left over from previous calls before you use it.
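
A minimal sketch of that idea (my code, not from the thread): read errors in a loop until none are left, both before and after the call you are interested in.

//assumes a current GL context and <iostream>
void drainGLErrors()
{
    while (glGetError() != GL_NO_ERROR) {} //throw away stale errors from earlier calls
}

void checkGLError(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
    {
        std::cout << "GL error 0x" << std::hex << err << std::dec << " after " << where << std::endl;
    }
}

//usage: drainGLErrors(); glDrawArrays(GL_TRIANGLES, 0, 3); checkGLError("glDrawArrays");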

A better option is to use one of the debug extensions (like GL_KHR_debug) that give you a callback function. In combination with a human-readable stack trace this works really well for me.

 

Something like this should be a start*: (*Stacktrace and batteries not included)

//assumes an OpenGL loader such as GLEW is already included, plus a small toString() helper that turns a GLenum into a std::string
#include <iostream>
#include <string>
using std::cout; using std::endl; using std::string;

std::string sourceToString(GLenum source)
{
    switch (source) {
        case GL_DEBUG_SOURCE_API:               return "API";             //opengl calls
        case GL_DEBUG_SOURCE_WINDOW_SYSTEM:     return "window system";   //glx/wgl
        case GL_DEBUG_SOURCE_SHADER_COMPILER:   return "shader compiler";
        case GL_DEBUG_SOURCE_THIRD_PARTY:       return "third party";
        case GL_DEBUG_SOURCE_APPLICATION:       return "application";     //self injected
        case GL_DEBUG_SOURCE_OTHER:             return "other";
        default: return "unknown source(" + toString(source) + ")";
    }
}

std::string typeToString(GLenum type)
{
    switch (type) {
        case GL_DEBUG_TYPE_ERROR:               return "error";
        case GL_DEBUG_TYPE_DEPRECATED_BEHAVIOR: return "deprecated behavior";
        case GL_DEBUG_TYPE_UNDEFINED_BEHAVIOR:  return "undefined behavior";
        case GL_DEBUG_TYPE_PORTABILITY:         return "portability";
        case GL_DEBUG_TYPE_PERFORMANCE:         return "performance";
        case GL_DEBUG_TYPE_MARKER:              return "marker"; //Command stream annotation?
        case GL_DEBUG_TYPE_PUSH_GROUP:          return "push group";
        case GL_DEBUG_TYPE_POP_GROUP:           return "pop group";
        case GL_DEBUG_TYPE_OTHER:               return "other";
        default: return "unknown type(" + toString(type) + ")";
    }
}

std::string severityToString(GLenum severity)
{
    switch (severity) {
        case GL_DEBUG_SEVERITY_HIGH:            return "high";         //errors and dangerous undefined behavior
        case GL_DEBUG_SEVERITY_MEDIUM:          return "medium";       //major performance warnings, use of deprecated behavior
        case GL_DEBUG_SEVERITY_LOW:             return "low";          //minor performance warnings, redundant state changes
        case GL_DEBUG_SEVERITY_NOTIFICATION:    return "notification"; //anything that is not an error or performance concern
        default: return "unknown severity(" + toString(severity) + ")";
    }
}

#if defined(_WIN32)
    #define IF_MSVC_THEN_STDCALL_HERE __stdcall
#else
    #define IF_MSVC_THEN_STDCALL_HERE
#endif
void IF_MSVC_THEN_STDCALL_HERE coutKhrDebugMessage(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
    string contextName = ""; //your gl context identification here
    string backtrace   = ""; //your  backtrace to string function here
    cout << "glDebugMessageCallback in context(" + contextName + ") souce(" + sourceToString(source) + ") type(" + typeToString(type) + ") severity(" + severityToString(severity) + ")" << endl
         << message << endl
         << "Backtrace:" << endl << backtrace;
}

//NOTE: if the GL context is not a debug context you maybe get no messages at all, even with glEnable(GL_DEBUG_OUTPUT)!
void enableDebugOutput()
{
    //There also is ATI_debug_output and ARB_debug_output, but we may never use them because GL_KHR_debug got implemented by all current drivers and is part of core.

    if (GL_KHR_debug) {
        cout << "GL_KHR_debug found, registering debug callback function" << endl;
        glDebugMessageCallback(&coutKhrDebugMessage, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); //supposedly ruins performance, but gives callback in same thread as context after API call. So we are able to get a nice backtrace from where the call came from.
        
        //MESA also needs this to be enabled in debug context to generate output
        //In a non-debug context the driver is free to choose whether it enables output at all, or whether it even exposes the GL_KHR_debug string in the first place.
        glEnable(GL_DEBUG_OUTPUT);
    } else {
        cout << "GL_KHR_debug not available" << endl;
    }
}
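
And a usage note from me (assuming a GLEW/freeglut setup like in the tutorial, so treat this as a sketch): register the callback once, right after the window and GLEW are initialized.

//during initialization, after glutCreateWindow() and glewInit() have succeeded;
//requesting a debug context beforehand (e.g. glutInitContextFlags(GLUT_DEBUG)
//before creating the window) is what guarantees the driver will emit messages
enableDebugOutput();
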
Edited by Osbios
