  • Similar Content

    • By Achivai
      Hey, I am semi-new to 3D programming and I've hit a snag. I have one object, let's call it Object A. This object has an array of 3D xyz-positions stored in its VBO as an instanced attribute, and I am using these numbers to instance Object A a couple of thousand times. So far so good.
      Now I've hit a point where I want to remove one of these instances of Object A while the game is running, but I'm not quite sure how to go about it. My first thought was to update the instanced attribute of Object A and change the position to some dummy value that I could catch in the vertex shader, deciding there whether to draw that instance of Object A or not. But I think that would be expensive to do while the game is running, considering that it might have to be done several times every frame in some cases.
      I'm not sure how to proceed; does anyone have any tips?
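      One common approach (a sketch of my own, not from the post: the struct and function names below are hypothetical) is to keep a CPU-side mirror of the instance buffer, remove an instance in O(1) by swapping it with the last one, re-upload only the touched slot with glBufferSubData, and draw with the reduced instance count:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical CPU-side mirror of the instanced attribute buffer.
struct InstanceData {
    float x, y, z;
};

// Removes instance i in O(1) by swapping it with the last instance.
// Returns the index whose contents changed (so only that slot needs
// re-uploading), or -1 if no upload is needed (i was the last one).
inline int removeInstance(std::vector<InstanceData>& instances, std::size_t i)
{
    if (i != instances.size() - 1) {
        instances[i] = instances.back();
        instances.pop_back();
        // Re-upload just the overwritten slot, e.g.:
        // glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
        // glBufferSubData(GL_ARRAY_BUFFER, i * sizeof(InstanceData),
        //                 sizeof(InstanceData), &instances[i]);
        return static_cast<int>(i);
    }
    instances.pop_back();
    return -1;
}

// Then draw with the new count each frame:
// glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount,
//                       (GLsizei)instances.size());
```

      This avoids both the dummy-value branch in the vertex shader and re-uploading the whole buffer; the order of instances changes, but for independent instances that usually does not matter.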
    • By fleissi
      Hey guys!

      I'm new here and I recently started developing my own rendering engine. It's open source, based on OpenGL/DirectX and C++.
      The full source code is hosted on github:
      https://github.com/fleissna/flyEngine

      I would appreciate it if people with experience in game development / engine design could take a look at my source code. I'm looking for honest, constructive criticism on how to improve the engine.
      I'm currently writing my master's thesis in computer science, and over the past year I've gone through all the basics of graphics programming, learned DirectX and OpenGL, read some articles from Nvidia GPU Gems, read books, and integrated some of this stuff step by step into the engine.

      I know the basics, but I feel like there is still some missing link I need to merge all those little pieces together.

      Features I have so far:
      - Dynamic shader generation based on material properties
      - Dynamic sorting of meshes to be rendered based on shader and material
      - Rendering large amounts of static meshes
      - Hierarchical culling (detail + view frustum)
      - Limited support for dynamic (i.e. moving) meshes
      - Normal, Parallax and Relief Mapping implementations
      - Wind animations based on vertex displacement
      - A very basic integration of the Bullet physics engine
      - Procedural Grass generation
      - Some post processing effects (Depth of Field, Light Volumes, Screen Space Reflections, God Rays)
      - Caching mechanisms for textures, shaders, materials and meshes
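      The caching mechanism mentioned above can be as simple as a name-to-resource map that loads on first request. A minimal sketch (my own illustration, not code from flyEngine; the loader callback is hypothetical):

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// Generic cache: loads a resource on first request, returns the shared
// instance on every later request with the same key.
template <typename Resource>
class ResourceCache {
public:
    using Loader = std::function<std::shared_ptr<Resource>(const std::string&)>;

    explicit ResourceCache(Loader loader) : mLoader(std::move(loader)) {}

    std::shared_ptr<Resource> get(const std::string& name)
    {
        auto it = mCache.find(name);
        if (it != mCache.end())
            return it->second;        // cache hit: reuse the loaded resource
        auto res = mLoader(name);     // cache miss: load exactly once
        mCache[name] = res;
        return res;
    }

    std::size_t size() const { return mCache.size(); }

private:
    Loader mLoader;
    std::unordered_map<std::string, std::shared_ptr<Resource>> mCache;
};
```

      The same pattern covers textures, shaders, materials, and meshes by swapping the Resource type and the loader.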

      Features I would like to have:
      - Global illumination methods
      - Scalable physics
      - Occlusion culling
      - A nice procedural terrain generator
      - Scripting
      - Level Editing
      - Sound system
      - Optimization techniques

      Books I have so far:
      - Real-Time Rendering Third Edition
      - 3D Game Programming with DirectX 11
      - Vulkan Cookbook (not started yet)

      I hope you guys can take a look at my source code and, if you're really motivated, feel free to contribute :-)
      There are some videos on YouTube that demonstrate some of the features:
      Procedural grass on the GPU
      Procedural Terrain Engine
      Quadtree detail and view frustum culling

      The long term goal is to turn this into a commercial game engine. I'm aware that this is a very ambitious goal, but I'm sure it's possible if you work hard for it.

      Bye,

      Phil
    • By tj8146
      I have attached my project in a .zip file if you wish to run it for yourself.
      I am making a simple 2D top-down game, and I am trying to run my code to see if my window creation is working and if my timer is working with it. Every time I run it, though, I get errors. And when I fix those errors, more come, then the same errors keep appearing; I end up just going round in circles. Is there anyone who could help with this?
       
      Errors when I build my code:
      1>Renderer.cpp
      1>c:\users\documents\opengl\game\game\renderer.h(15): error C2039: 'string': is not a member of 'std'
      1>c:\program files (x86)\windows kits\10\include\10.0.16299.0\ucrt\stddef.h(18): note: see declaration of 'std'
      1>c:\users\documents\opengl\game\game\renderer.h(15): error C2061: syntax error: identifier 'string'
      1>c:\users\documents\opengl\game\game\renderer.cpp(28): error C2511: 'bool Game::Rendering::initialize(int,int,bool,std::string)': overloaded member function not found in 'Game::Rendering'
      1>c:\users\documents\opengl\game\game\renderer.h(9): note: see declaration of 'Game::Rendering'
      1>c:\users\documents\opengl\game\game\renderer.cpp(35): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>c:\users\documents\opengl\game\game\renderer.cpp(36): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>c:\users\documents\opengl\game\game\renderer.cpp(43): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>Done building project "Game.vcxproj" -- FAILED.
      ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
       
      Renderer.cpp
      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include "Renderer.h"
      #include "Timer.h"
      #include <iostream>

      namespace Game
      {
          GLFWwindow* window;

          /* Initialize the library */
          Rendering::Rendering()
          {
              mClock = new Clock;
          }

          Rendering::~Rendering()
          {
              shutdown();
          }

          bool Rendering::initialize(uint width, uint height, bool fullscreen, std::string window_title)
          {
              if (!glfwInit())
              {
                  return -1;
              }

              /* Create a windowed mode window and its OpenGL context */
              window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
              if (!window)
              {
                  glfwTerminate();
                  return -1;
              }

              /* Make the window's context current */
              glfwMakeContextCurrent(window);
              glViewport(0, 0, (GLsizei)width, (GLsizei)height);
              glOrtho(0, (GLsizei)width, (GLsizei)height, 0, 1, -1);
              glMatrixMode(GL_PROJECTION);
              glLoadIdentity();
              glfwSwapInterval(1);
              glEnable(GL_SMOOTH);
              glEnable(GL_DEPTH_TEST);
              glEnable(GL_BLEND);
              glDepthFunc(GL_LEQUAL);
              glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
              glEnable(GL_TEXTURE_2D);
              glLoadIdentity();
              return true;
          }

          bool Rendering::render()
          {
              /* Loop until the user closes the window */
              if (!glfwWindowShouldClose(window))
                  return false;

              /* Render here */
              mClock->reset();
              glfwPollEvents();
              if (mClock->step())
              {
                  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                  glfwSwapBuffers(window);
                  mClock->update();
              }
              return true;
          }

          void Rendering::shutdown()
          {
              glfwDestroyWindow(window);
              glfwTerminate();
          }

          GLFWwindow* Rendering::getCurrentWindow()
          {
              return window;
          }
      }
      Renderer.h
      #pragma once

      namespace Game
      {
          class Clock;

          class Rendering
          {
          public:
              Rendering();
              ~Rendering();
              bool initialize(uint width, uint height, bool fullscreen, std::string window_title = "Rendering window");
              void shutdown();
              bool render();
              GLFWwindow* getCurrentWindow();
          private:
              GLFWwindow* window;
              Clock* mClock;
          };
      }
      Timer.cpp
      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include <time.h>
      #include "Timer.h"

      namespace Game
      {
          Clock::Clock() :
              mTicksPerSecond(50),
              mSkipTics(1000 / mTicksPerSecond),
              mMaxFrameSkip(10),
              mLoops(0)
          {
              mLastTick = tick();
          }

          Clock::~Clock()
          {
          }

          bool Clock::step()
          {
              if (tick() > mLastTick && mLoops < mMaxFrameSkip)
                  return true;
              return false;
          }

          void Clock::reset()
          {
              mLoops = 0;
          }

          void Clock::update()
          {
              mLastTick += mSkipTics;
              mLoops++;
          }

          clock_t Clock::tick()
          {
              return clock();
          }
      }
      Timer.h
      #pragma once
      #include "Common.h"

      namespace Game
      {
          class Clock
          {
          public:
              Clock();
              ~Clock();
              void update();
              bool step();
              void reset();
              clock_t tick();
          private:
              uint mTicksPerSecond;
              ufloat mSkipTics;
              uint mMaxFrameSkip;
              uint mLoops;
              uint mLastTick;
          };
      }
      Common.h
      #pragma once
      #include <cstdio>
      #include <cstdlib>
      #include <ctime>
      #include <cstring>
      #include <cmath>
      #include <iostream>

      namespace Game
      {
          typedef unsigned char uchar;
          typedef unsigned short ushort;
          typedef unsigned int uint;
          typedef unsigned long ulong;
          typedef float ufloat;
      }
      Game.zip
    • By lxjk
      Hi guys,
      There are many ways to do light culling in tile-based shading. I've been playing with this idea for a while and just want to throw it out there.
      Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce the false positives introduced by the commonly used sphere-frustum test.
      On top of that, I use the distance to the camera rather than depth for the near/far test (i.e. sliced by spheres).
      This method can be naturally extended to clustered light culling as well.
      The following image shows the general idea:

       
      Performance-wise I get around a 15% improvement over the sphere-frustum test. You can also see how a single light performs in the following image, from left to right: (1) standard rendering of a point light; then the tiles that passed (2) the sphere-frustum test; (3) the cone test; (4) the spherical-sliced cone test.
       

       
      I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included!
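      The sliced-by-spheres near/far test described above can be sketched like this (my own illustration of the idea, not Eric's GLSL; the struct and function names are hypothetical, see the blog post for the real code). A light survives a slice if its bounding sphere overlaps the spherical shell between the slice's near and far distances from the camera:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

inline float length(const Vec3& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Instead of comparing view-space depth, compare the light's distance
// from the camera against the spherical shell [sliceNear, sliceFar].
// The light's bounding sphere overlaps the shell iff its center distance
// lies within [sliceNear - r, sliceFar + r].
inline bool lightOverlapsSphericalSlice(const Vec3& camPos, const Vec3& lightPos,
                                        float lightRadius,
                                        float sliceNear, float sliceFar)
{
    Vec3 d{lightPos.x - camPos.x, lightPos.y - camPos.y, lightPos.z - camPos.z};
    float dist = length(d);
    return dist >= sliceNear - lightRadius && dist <= sliceFar + lightRadius;
}
```

      Because the shell boundaries are spheres centered on the camera, the test is a single distance comparison per slice, independent of view direction.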
       
      Eric
    • By Fadey Duh
      Good evening everyone!

      I was wondering if there is an equivalent of GL_NV_blend_equation_advanced for AMD?
      Basically, I'm trying to find a more compatible version of it.

      Thank you!

OpenGL Triangle rendered white only.


Recommended Posts

Hi folks,

I am following in2gpu's tutorials on modern OpenGL. The goal is to render a triangle. Everything has worked fine so far, except for the color. The author's triangle has three colors defined, which makes the fragment shader render the triangle in different colors. Although I strictly followed the tutorial, my triangle is colored flat white.

 

The project now contains several classes which are responsible for freeglut and glew initialization. I also have a vertex and a fragment shader, which look like this:

 

Vertex_Shader.glsl:

#version 330 core
layout(location = 0) in vec3 in_position;
layout(location = 1) in vec4 in_color;

out vec4 color;

void main(){

	color = in_color;
	gl_Position = vec4(in_position, 1);
}

Fragment_Shader.glsl:

#version 330 core

layout(location = 0) out vec4 out_color;

in vec4 color;

void main(){
 
 	out_color = color;
}

So the first thing to mention here is that I am using version 330 while the author uses version 450, and I am not sure whether this is crucial. There might also be another source for the problem: I am using Visual Studio 2015, which does not seem to know .glsl files. I created the shaders by adding a new item; here I chose a Pixel Shader File (.hlsl) and renamed it to .glsl. This raised the following error:

 

 

The "ConfigurationCustomBuildTool" rule is missing the "ShaderType" property.

 

I am able to build and run the project, though, without errors. Also, here is the Triangle class itself:

 

Triangle.cpp:

#include "Triangle.h"

Triangle::Triangle()
{
}

Triangle::~Triangle()
{
	//is going to be deleted in Models.cpp (inheritance)
}

void Triangle::Create()
{
	GLuint vao;
	GLuint vbo;

	glGenVertexArrays(1,&vao);
	glBindVertexArray(vao);

	std::vector<VertexFormat> vertices;
	vertices.push_back(VertexFormat(glm::vec3(0.25,-0.25,0.0),
		glm::vec4(1,0,0,1)));
	vertices.push_back(VertexFormat(glm::vec3(-0.25,-0.25,0.0),
		glm::vec4(0,1,0,1)));
	vertices.push_back(VertexFormat(glm::vec3(0.25,0.25,0.0),
		glm::vec4(0,0,1,1)));

	glGenBuffers(1,&vbo);
	glBindBuffer(GL_ARRAY_BUFFER,vbo);
	glBufferData(GL_ARRAY_BUFFER,sizeof(VertexFormat) * 3, &vertices[0],GL_STATIC_DRAW);
	glEnableVertexAttribArray(0);
	glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0);
	glEnableVertexAttribArray(1);
	//you can use offsetof to get the offset of an attribute
	glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)(offsetof(VertexFormat,VertexFormat::color)));
	glBindVertexArray(0);
	//here we assign the values
	this->vao = vao;
	this->vbos.push_back(vbo);
}

void Triangle::Update()
{
	//Triangle does not have to be updated
}

void Triangle::Draw()
{
	glUseProgram(program);
	glBindVertexArray(vao);
	glDrawArrays(GL_TRIANGLES,0,3);

}

Although it seems that I followed the tutorial all the way, my triangle is still rendered white only. There are of course a lot more classes, but I guess I should not post the entire project here; I can always post some additional information if it is needed. In the end it seems to me that something is wrong with the fragment shader. I also described my problem to the author. He could not look into my code/project, but he suspected that there is something wrong with my attributes.

 

I am very new to both C++ and OpenGL, so it is very difficult for me to debug (if that is even possible for shaders).

 

Glad for any help and thanks in advance!
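One cheap way to rule out an attribute-layout mismatch (which the tutorial author suspected) is to assert at compile time that the offsets and stride passed to glVertexAttribPointer match the actual struct layout. A sketch, assuming a VertexFormat shaped like the one above (the vec3/vec4 structs here are stand-ins for glm so the check is self-contained; with real glm the same static_asserts apply to the real VertexFormat):

```cpp
#include <cstddef>
#include <type_traits>

// Stand-ins for glm::vec3 / glm::vec4.
struct vec3 { float x, y, z; };
struct vec4 { float x, y, z, w; };

struct VertexFormat {
    vec3 position;
    vec4 color;
};

// The stride passed to glVertexAttribPointer must be sizeof(VertexFormat),
// attribute 0 must start at offset 0, and attribute 1 at
// offsetof(VertexFormat, color).
static_assert(std::is_standard_layout<VertexFormat>::value,
              "offsetof requires standard layout");
static_assert(offsetof(VertexFormat, position) == 0,
              "position must be the first member");
static_assert(offsetof(VertexFormat, color) == sizeof(vec3),
              "color must directly follow position");
static_assert(sizeof(VertexFormat) == sizeof(vec3) + sizeof(vec4),
              "no unexpected padding in VertexFormat");
```

If these asserts pass but the triangle is still white, the mismatch is more likely on the shader side (attribute locations or program linking) than in the buffer layout.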

 

 

Share this post


Link to post
Share on other sites

I am using Visual Studio 2015 which does not seem to know .glsl-files. I created the shaders by adding a new Item. Here I chose a Pixel Shader File (.hlsl) and renamed it to .glsl. This did raise the following Error:

 

 

The "ConfigurationCustomBuildTool" rule is missing the "ShaderType" property.

 

Of course Direct3D HLSL and OpenGL GLSL are two different things; you cannot do that. As a start.


But it does seem to compile the shaders without printing any failure information to the console. FYI, here is the Shader_Manager class:

#include "Shader_Manager.h"
#include <iostream>
#include <fstream>
#include <vector>

std::map<std::string, GLuint> Shader_Manager::programs;



Shader_Manager::Shader_Manager(void) {}

Shader_Manager::~Shader_Manager(void) 
{
	std::map<std::string, GLuint>::iterator i;
	for (i = programs.begin(); i != programs.end(); ++i)
	{
		GLuint pr = i->second;
		glDeleteProgram(pr);
	}
	programs.clear();
}

const GLuint Shader_Manager::GetShader(const std::string & shaderName)
{
	return programs.at(shaderName);
}

//reads and returns the contents of a file
std::string Shader_Manager::ReadShader(const std::string& filename)
{
	std::string shaderCode;
	std::ifstream file(filename,std::ios::in);

	if (!file.good())
	{
		std::cout << "Can't read file " << filename.c_str() << std::endl;
		std::terminate();
	}

	file.seekg(0,std::ios::end);
	shaderCode.resize((unsigned int)file.tellg());
	file.seekg(0,std::ios::beg);
	file.read(&shaderCode[0],shaderCode.size());
	file.close();
	return shaderCode;
}

//creates and compiles a shader (vertex or fragment)
GLuint Shader_Manager::CreateShader(GLenum shaderType,const std::string& source,const std::string& shaderName)
{
	int compile_result = 0;

	GLuint shader = glCreateShader(shaderType);
	const char *shader_code_ptr = source.c_str();
	const int shader_code_size = source.size();

	glShaderSource(shader,1,&shader_code_ptr,&shader_code_size);
	glCompileShader(shader);
	glGetShaderiv(shader,GL_COMPILE_STATUS,&compile_result);

	//check for errors
	if (compile_result == GL_FALSE)
	{
		int info_log_length = 0;
		glGetShaderiv(shader,GL_INFO_LOG_LENGTH,&info_log_length);
		std::vector<char> shader_log(info_log_length);
		glGetShaderInfoLog(shader,info_log_length,NULL,&shader_log[0]);
		std::cout << "ERROR compiling shader: " << shaderName.c_str() << std::endl << &shader_log[0] << std::endl;
		return 0;
	}

	return shader;
}

//uses ReadShader to extract the shader contents, creates both shaders, and links them into a program, which is stored for use in the rendering loop
void Shader_Manager::CreateProgramm(const std::string& shaderName,
									  const std::string& vertexShaderFilename,
									  const std::string& fragmentShaderFilename)
{
	//read the shader files and save the code
	std::string vertex_shader_code = ReadShader(vertexShaderFilename);
	std::string fragment_shader_code = ReadShader(fragmentShaderFilename);

	GLuint vertex_shader = CreateShader(GL_VERTEX_SHADER,vertex_shader_code,"vertex shader");
	GLuint fragment_shader = CreateShader(GL_FRAGMENT_SHADER,fragment_shader_code,"fragment shader");

	int link_result = 0;
	//create the program handle, attach the shaders and link it
	GLuint program = glCreateProgram();
	glAttachShader(program,vertex_shader);
	glAttachShader(program,fragment_shader);

	glLinkProgram(program);
	glGetProgramiv(program,GL_LINK_STATUS,&link_result);
	//check for link errors
	if (link_result == GL_FALSE)
	{
		int info_log_length = 0;
		glGetProgramiv(program,GL_INFO_LOG_LENGTH,&info_log_length);
		std::vector<char> program_log(info_log_length);
		glGetProgramInfoLog(program,info_log_length,NULL,&program_log[0]);
		std::cout << "Shader Loader: LINK ERROR" << std::endl << &program_log[0] << std::endl;
		return;
	}

	programs[shaderName] = program;
}

I was assuming that if the shaders were failing to compile, I would have seen it in the console.

The glm::vec3 and glm::vec4 constructors will just make floats out of the integers, so that is not the issue.

I can't see anything that is wrong so far. Try changing the fragment shader to output e.g. out_color = vec4(1, 0, 0, 1);. If it correctly outputs red, you know everything is OK except the second vertex attribute.


Nope, unfortunately this didn't change the behaviour.

 

I don't even know how to approach debugging shader output; any hints? Also, what should the Item Type of a .glsl shader be under Visual Studio 2013? Currently my shader files are set to Custom Build Tool.

Edited by Prot


Nope, unfortunately this didn't change the behaviour.
 
I don't even know how to approach debugging shader output; any hints? Also, what should the Item Type of a .glsl shader be under Visual Studio 2013? Currently my shader files are set to Custom Build Tool.


Check whether glGetError() is returning zero.

For debugging OpenGL, you can use CodeXL (http://developer.amd.com/tools-and-sdks/opencl-zone/codexl/), which will give you an in-depth look at the resources your program has created on the GPU, and you can try walking through what's wrong. (Note: this is an AMD product; I'm not sure if it works on NVIDIA, but it's apparently the successor to gDEBugger.)


layout(location = 0) out vec4 out_color; in fragment shader

I don't understand why you are applying any layouts in the fragment shader. Can't you just define out vec4 out_color;?

 

 

second thing might be

 

 

glEnableVertexAttribArray(0);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0);
glEnableVertexAttribArray(1);
//you can use offsetof to get the offset of an attribute
glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)(offsetof(VertexFormat,VertexFormat::color)));

 

 

There are a few things here and in other parts of the code:

First of all, you should first define the vertex attrib pointers, then enable them.

glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0); — this (void*)0 looks like a mistake; use this instead:

(void*)(offsetof(VertexFormat,VertexFormat::position))

 

 

 

next



GLuint vao;
GLuint vbo;

glGenVertexArrays(1,&vao);
glBindVertexArray(vao);

this->vao = vao;
this->vbos.push_back(vbo);

This is another potential problem: sharing OpenGL handles between structures. You shouldn't do that unless you are sure that this->vao = vao; will refer to the same object. In my opinion you should use the GLuint vao; and GLuint vbo; members directly, and somehow I have a feeling that the objects are destroyed after the function exits.

 

 

 

EDIT: Yes, SlicerChubu is right; you should additionally call glGetError() after every OpenGL call and log it somehow:

void ShowGLERROR()
{
	GLenum res = glGetError();
	if (res == GL_INVALID_ENUM) ALOG("GL_INVALID_ENUM");
	if (res == GL_INVALID_VALUE) ALOG("GL_INVALID_VALUE");
	if (res == GL_INVALID_OPERATION) ALOG("GL_INVALID_OPERATION");
	if (res == GL_OUT_OF_MEMORY) ALOG("GL_OUT_OF_MEMORY");
}

For example:

ALOG("Creating vertex array object");
glGenVertexArrays(1,&vao);
ShowGLERROR();

ALOG("binding vao");
glBindVertexArray(vao);
ShowGLERROR();

Edited by WiredCat


layout(location = 0) out vec4 out_color; in fragment shader

I don't understand why you are applying any layouts in the fragment shader. Can't you just define out vec4 out_color;?

His code is fine. But you don't have to use the layout if you only render to one render target, and your version would be fine too.

 

 

second thing might be

 

 

glEnableVertexAttribArray(0);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0);
glEnableVertexAttribArray(1);
//you can use offsetof to get the offset of an attribute
glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)(offsetof(VertexFormat,VertexFormat::color)));

 

 

There are a few things here and in other parts of the code:

First of all, you should first define the vertex attrib pointers, then enable them.

glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(VertexFormat),(void*)0); — this (void*)0 looks like a mistake; use this instead:

(void*)(offsetof(VertexFormat,VertexFormat::position))

glEnableVertexAttribArray / glDisableVertexAttribArray only set state that gets used when you actually issue one of the draw commands, so it does not matter in what order he enables them here.

 

And 0 will be 0. You don't need a fancy offsetof to "find out" that the first variable in a struct is at offset 0.

 

 

EDIT: Yes, SlicerChubu is right; you should additionally call glGetError() after every OpenGL call and log it somehow:

void ShowGLERROR()
{
	GLenum res = glGetError();
	if (res == GL_INVALID_ENUM) ALOG("GL_INVALID_ENUM");
	if (res == GL_INVALID_VALUE) ALOG("GL_INVALID_VALUE");
	if (res == GL_INVALID_OPERATION) ALOG("GL_INVALID_OPERATION");
	if (res == GL_OUT_OF_MEMORY) ALOG("GL_OUT_OF_MEMORY");
}

For example:

ALOG("Creating vertex array object");
glGenVertexArrays(1,&vao);
ShowGLERROR();

ALOG("binding vao");
glBindVertexArray(vao);
ShowGLERROR();

 

glGetError() is actually really painful to use. You also have to make sure there are no errors saved from previous calls before you use it.

A better option is to use one of the debug extensions (like GL_KHR_debug) that give you a callback function. In combination with a human-readable stack trace this works really well for me.

 

Something like this should be a start*: (*Stacktrace and batteries not included)

#include <GL/glew.h>
#include <iostream>
#include <string>

using std::cout;
using std::endl;
using std::string;

//small helper assumed by the code below
static std::string toString(GLenum value)
{
    return std::to_string(value);
}

std::string sourceToString(GLenum source)
{
    switch (source) {
        case GL_DEBUG_SOURCE_API:               return "API";             //opengl calls
        case GL_DEBUG_SOURCE_WINDOW_SYSTEM:     return "window system";   //glx/wgl
        case GL_DEBUG_SOURCE_SHADER_COMPILER:   return "shader compiler";
        case GL_DEBUG_SOURCE_THIRD_PARTY:       return "third party";
        case GL_DEBUG_SOURCE_APPLICATION:       return "application";     //self injected
        case GL_DEBUG_SOURCE_OTHER:             return "other";
        default: return "unknown source(" + toString(source) + ")";
    }
}

std::string typeToString(GLenum type)
{
    switch (type) {
        case GL_DEBUG_TYPE_ERROR:               return "error";
        case GL_DEBUG_TYPE_DEPRECATED_BEHAVIOR: return "deprecated behavior";
        case GL_DEBUG_TYPE_UNDEFINED_BEHAVIOR:  return "undefined behavior";
        case GL_DEBUG_TYPE_PORTABILITY:         return "portability";
        case GL_DEBUG_TYPE_PERFORMANCE:         return "performance";
        case GL_DEBUG_TYPE_MARKER:              return "marker"; //Command stream annotation?
        case GL_DEBUG_TYPE_PUSH_GROUP:          return "push group";
        case GL_DEBUG_TYPE_POP_GROUP:           return "pop group";
        case GL_DEBUG_TYPE_OTHER:               return "other";
        default: return "unknown type(" + toString(type) + ")";
    }
}

std::string severityToString(GLenum severity)
{
    switch (severity) {
        case GL_DEBUG_SEVERITY_HIGH:            return "high";         //An error, typically from the API
        case GL_DEBUG_SEVERITY_MEDIUM:          return "medium";       //Some behavior marked deprecated has been used
        case GL_DEBUG_SEVERITY_LOW:             return "low";          //Something has invoked undefined behavior
        case GL_DEBUG_SEVERITY_NOTIFICATION:    return "notification"; //Some functionality the user relies upon is not portable
        default: return "unknown severity(" + toString(severity) + ")";
    }
}

#if defined(_WIN32)
    #define IF_MSVC_THEN_STDCALL_HERE __stdcall
#else
    #define IF_MSVC_THEN_STDCALL_HERE
#endif
void IF_MSVC_THEN_STDCALL_HERE coutKhrDebugMessage(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
    string contextName = ""; //your gl context identification here
    string backtrace   = ""; //your  backtrace to string function here
    cout << "glDebugMessageCallback in context(" + contextName + ") souce(" + sourceToString(source) + ") type(" + typeToString(type) + ") severity(" + severityToString(severity) + ")" << endl
         << message << endl
         << "Backtrace:" << endl << backtrace;
}

//NOTE: if the GL context is not a debug context you maybe get no messages at all, even with glEnable(GL_DEBUG_OUTPUT)!
void enableDebugOutput()
{
    //There also is ATI_debug_output and ARB_debug_output, but we may never use them because GL_KHR_debug got implemented by all current drivers and is part of core.

    if (GL_KHR_debug) {
        cout << "GL_KHR_debug found, registering debug callback function" << endl;
        glDebugMessageCallback(&coutKhrDebugMessage, 0);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); //supposedly ruins performance, but gives callback in same thread as context after API call. So we are able to get a nice backtrace from where the call came from.
        
        //MESA also needs this to be enabled in debug context to generate output
        //In non debug context the driver is free to chose if he enables output at all. Or if the driver even exposes GL_KHR_debug string in the first place.
        glEnable(GL_DEBUG_OUTPUT);
    } else {
        cout << "GL_KHR_debug not available" << endl;
    }
}
Edited by Osbios
