syskrank

OpenGL
Strange rendering behavior on different cards.

18 posts in this topic

Hi everyone.

I'm facing quite a strange problem.

 

I'm initializing my OpenGL context with SDL2 2.0.3, forcing it to use the core profile (3.3), and I'm using GLEW for extensions. I'm running Linux Mint 16 with the stock 3.11.0-12-generic kernel and the proprietary NVIDIA drivers from the repos (version 319.32).

 

I've got the same setup on two different machines:

 

Lenovo V580c laptop with GeForce 740M

and

PC with GeForce GT 630.

Also, I've got another PC with an AMD Radeon HD 6670 (OS: Debian, with the open-source driver).

 

For now, my OpenGL application renders only a single rotating triangle in perspective projection, with an FPS-like camera floating around.

 

 

The problem is that I'm not seeing anything on the GT 630 (on the second PC).

 

The 740M does things just fine, as does the HD 6670. Yet on the GT 630 I'm getting a blank screen with a properly cleared color buffer and depth buffer.

 

Here's my init code:

void Window::Create() {
        // init sdl2
        if ( SDL_Init(  SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        //  set opengl 3.3
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

This one initializes some default states:

    void RenderingEngine::InitGLDefaults(){
        glClearColor( 0.0f, 0.0f, 0.4f, 1.0f );
        glEnable( GL_DEPTH_TEST );
        glDepthFunc( GL_LESS );
        glEnable( GL_CULL_FACE );
        glCullFace( GL_BACK );
    }

This is how I render meshes:

    void RenderingEngine::RenderMesh( const Mesh& mesh ) {
        glEnableVertexAttribArray( 0 ); // vertices
        glEnableVertexAttribArray( 1 ); // texture coords
        glEnableVertexAttribArray( 2 ); // normals

        glBindBuffer( GL_ARRAY_BUFFER, mesh.vbo );
        glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 ); // vert coords
        glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) ); // tex coords
        glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) ); // normals

        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh.ibo );
        glDrawElements( GL_TRIANGLES, mesh.drawSize, GL_UNSIGNED_INT, 0 );

        glDisableVertexAttribArray( 0 );
        glDisableVertexAttribArray( 1 );
        glDisableVertexAttribArray( 2 );
    }

Here is the part of the main loop that does the rendering:

    void Application::Render() {
        render::RenderingEngine::RenderClear();
        testShader.Bind();

        glm::mat4 modelMatrix = testTransform.GetModelMatrix();
        glm::mat4 mvp = testCamera.GetViewProjection() * modelMatrix;

        testShader.SetUniformMat4( "MVP", mvp );
        render::RenderingEngine::RenderMesh( testMesh );
        testShader.Unbind();
        SDL_GL_SwapWindow( window.window );
    }

Again: this works on the 740M and the HD 6670, but draws a blank screen on the GT 630.

 

Halp!

Edited by syskrank

You don't clear the depth buffer in the posted code. Perhaps different default contents of the depth buffer between the AMD and NVIDIA cards allow one to pass the depth test while the other doesn't. Anyway, try to isolate what you can: first do only a clear on both cards, then disable all tests (alpha, depth, stencil) and so on.
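For illustration, a minimal "clear only" frame with the tests disabled might look like this (a hypothetical sketch, not the poster's code; it assumes an existing SDL_Window* named window and a current GL context):

// Hypothetical isolation sketch: draw nothing, just clear with a loud color,
// with depth/stencil testing, blending and culling switched off.
glDisable( GL_DEPTH_TEST );
glDisable( GL_STENCIL_TEST );
glDisable( GL_BLEND );
glDisable( GL_CULL_FACE );

glClearColor( 1.0f, 0.0f, 1.0f, 1.0f ); // magenta, so a successful clear is unmistakable
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );

SDL_GL_SwapWindow( window );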


Since the glDebugMessageCallback stuff is rather new and probably not widely supported (I see it's core in 4.3 while you are using 3.3), it might be advisable to wrap all your OpenGL calls with a simple macro:

void CheckOpenGLError(const char* stmt, const char* fname, int line)
{
	GLenum err = glGetError();
	if(err != GL_NO_ERROR)
	{
              // handle error here
	}
}

#ifdef _DEBUG
#define GL_CHECK(stmt) { \
	stmt; \
	CheckOpenGLError(#stmt, __FILE__, __LINE__); \
		}
#else
#define GL_CHECK(stmt) stmt
#endif

Which you can then use:

GL_CHECK(glEnableVertexAttribArray( 0 ));

and it will tell you if something is wrong in debug (compile) mode. You do need to wrap it around EVERY GL call you make, or else uncaught error codes will leak to other parts of the program and cause another function to report the error instead. To end with a generic OpenGL rant: that's why OpenGL's global error handling and global state stuff sucks. glDebugMessageCallback is a nice step in the right direction, but for now I would recommend using both of them side by side.

 

EDIT: Just rechecked, you might want to keep the macro around for even faster debugging anyway, since at least on my two PCs there is no way to get the exact line of the error from glDebugMessageCallback; e.g. an exception thrown from the callback stays inside the GPU driver DLLs. With the macro you can assert, throw an exception, set a breakpoint, etc. and be fine. To repeat: use both of them side by side; since it's supported on my 4.2 hardware, I assume it could also work on 3.3, even though the specification says otherwise.

Edited by Juliean

EDIT: Just rechecked, you might want to keep the macro around for even faster debugging anyway, since at least on my two PCs there is no way to get the exact line of the error from glDebugMessageCallback; e.g. an exception thrown from the callback stays inside the GPU driver DLLs.

glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
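For reference, a minimal sketch of hooking up the debug callback together with synchronous output (this assumes a 4.3 context or the KHR_debug extension, and an initialized GLEW; the function names here are illustrative, not from this thread):

#include <cstdio>
#include <GL/glew.h>

// Hypothetical sketch: route GL debug messages through a callback.
// With GL_DEBUG_OUTPUT_SYNCHRONOUS the callback fires on the offending call,
// so a breakpoint inside it shows the culprit in the backtrace.
static void GLAPIENTRY DebugCallback( GLenum source, GLenum type, GLuint id,
                                      GLenum severity, GLsizei length,
                                      const GLchar* message, const void* userParam )
{
    fprintf( stderr, "GL debug: %s\n", message );
}

void EnableGLDebugOutput()
{
    glEnable( GL_DEBUG_OUTPUT );
    glEnable( GL_DEBUG_OUTPUT_SYNCHRONOUS );
    glDebugMessageCallback( DebugCallback, nullptr );
}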

When I come across things like this I start going through all the opengl settings I can think of.

 

I find it is usually a default-setting problem (sadly not always, but it's a good place to start).

 

For example the depth buffer may be disabled by default on one machine, and enabled by default on another.

 

Textures could be disabled on one and enabled on another

 

Culling, alpha blending, the draw colour, etc. can all end up giving you a blank screen. You are setting a couple of them, but I don't see you binding a texture, and I don't see you enabling texture 2D.

 

I see you passing in texture coords though.

 

After that you have to look at your shader and do the obvious things (check that it compiles, check that it links, check that the drivers are up to date, ...).
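One way to compare the defaults between the machines is to dump the relevant state right before the draw call; a hypothetical sketch (the function name is made up):

#include <cstdio>
#include <GL/glew.h>

// Hypothetical sketch: print a handful of GL states so the working and the
// failing machine can be compared side by side. All of these queries are core GL.
void DumpSomeGLState()
{
    GLint program = 0, vao = 0, arrayBuffer = 0;
    glGetIntegerv( GL_CURRENT_PROGRAM, &program );
    glGetIntegerv( GL_VERTEX_ARRAY_BINDING, &vao );
    glGetIntegerv( GL_ARRAY_BUFFER_BINDING, &arrayBuffer );

    printf( "program=%d vao=%d array_buffer=%d depth=%d cull=%d blend=%d\n",
            program, vao, arrayBuffer,
            (int) glIsEnabled( GL_DEPTH_TEST ),
            (int) glIsEnabled( GL_CULL_FACE ),
            (int) glIsEnabled( GL_BLEND ) );
}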


Wow, thanks a lot for the replies :) I didn't expect that at all after the silent SDL2 forums :)

 

 

You don't clear the depth buffer in the posted code. Perhaps different default contents of the depth buffer between the AMD and NVIDIA cards allow one to pass the depth test while the other doesn't. Anyway, try to isolate what you can: first do only a clear on both cards, then disable all tests (alpha, depth, stencil) and so on.

I do, I swear :) I rechecked everything; I am calling glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ) before every frame. So that's OK.

Also, I'm getting sane rendering results on both an NVIDIA and an AMD card; it's another NVIDIA card that's causing problems (as I said before).

 

Like this:

740M - OK

HD 6670 - OK

GT 630 - ERROR

 

 


Which you can then use:

GL_CHECK(glEnableVertexAttribArray( 0 ));

 

Well, thanks a lot for the debugging method suggestion. I've implemented it and integrated it with my logging system.

 

My rendering method now looks like:

    // TODO: fix rendering on GF630
    void RenderingEngine::RenderMesh( const Mesh& mesh ) {
        GL_CHECK(glEnableVertexAttribArray( 0 )); // vertices
        GL_CHECK(glEnableVertexAttribArray( 1 )); // texture coords
        GL_CHECK(glEnableVertexAttribArray( 2 )); // normals

        GL_CHECK(glBindBuffer( GL_ARRAY_BUFFER, mesh.vbo ));
        GL_CHECK(glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 )); // vert coords
        GL_CHECK(glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) )); // tex coords
        GL_CHECK(glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) )); // normals

        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh.ibo );
        glDrawElements( GL_TRIANGLES, mesh.drawSize, GL_UNSIGNED_INT, 0 );

        glDisableVertexAttribArray( 0 );
        glDisableVertexAttribArray( 1 );
        glDisableVertexAttribArray( 2 );
    }

Also, I've changed the profile to OpenGL core 4.3.

And now I'm getting these messages ( same on 3.3 ):

Message: (1396343898768): Running with OpenGL 4.3.0 NVIDIA 319.32
Error: (1396343898768): OpenGL error in 'glEnableVertexAttribArray( 0 )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 13: invalid enumerant
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 18: invalid operation
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 19: invalid operation
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 20: invalid operation

Looks like it's something with the OpenGL settings, yeah. But I've failed to guess what exactly.


OK, I figured that out.

That was completely my fault with SDL2 context initialization.

For some reason, the 740M and the HD 6670 were indifferent to the SDL_GL_SetAttribute() call order, but on the GT 630 it spoiled everything.

 

The error was at the point where I created a window:

void Window::Create() {
        // init sdl2
        if ( SDL_Init(  SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        //  set opengl 3.3
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

Notice that bunch of SDL_GL_* calls?

They must be placed !!!AFTER!!! SDL_GL_CreateContext(), i.e.:

void Window::Create() {
        // init sdl2
        if ( SDL_Init(  SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
                Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
                exit( -1 );
            }
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
                Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
                exit( -1 );
            }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
                Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
                exit( -1 );
            }
        // set gl attribs
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 4 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 16 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG );
        // set vsync
        SDL_GL_SetSwapInterval( 1 );
        // manage cursor
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

I changed it, and everything started to work (rechecked on the 740M; still good results there). Hey, bunny! :)

 

[screenshot attachment]

 

Thanks a lot to everyone.

 

Hope this will help somebody in the future.


I believe you are now initializing the OpenGL context with SDL's default values, whatever they are (i.e. your SDL_GL_SetAttribute() calls have no effect).


AgentC is correct, those SDL_GL_SetAttribute calls do nothing at all if placed after context creation.

Edited by Mona2000
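For reference, the order documented for SDL2 is: set the attributes first, then create the window, then the context. A minimal sketch (the SDL_GL_CONTEXT_PROFILE_MASK line is an illustrative addition; the code posted in this thread never sets it):

// Sketch of the documented ordering. The attribute values mirror the
// original post; the profile-mask request is an addition for illustration.
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE );
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );

SDL_Window* window = SDL_CreateWindow( "title",
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    800, 600, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL );
SDL_GLContext glContext = SDL_GL_CreateContext( window );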

Wow, that's strange; I didn't read the docs closely enough. If I can't find the faulty one, then it will be better to remove them completely.


I just want to point out that glGetError actually causes significant performance overhead, because it triggers pipeline flushes. In my case I found it advantageous to put the per-call glGetError behind a separate define and only enable that functionality periodically as a check, or when something has gone wrong.
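A minimal sketch of that idea, reusing the CheckOpenGLError helper from above (the GL_ERROR_CHECKS define name is made up for illustration):

// Hypothetical sketch: gate per-call error checking behind its own define,
// so it can be switched on only while hunting a problem instead of in every
// debug build.
#ifdef GL_ERROR_CHECKS
#define GL_CHECK(stmt) do { \
        stmt; \
        CheckOpenGLError(#stmt, __FILE__, __LINE__); \
    } while (0)
#else
#define GL_CHECK(stmt) stmt
#endif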


Hello again, everyone.

 


I believe you are now initializing the OpenGL context with SDL's default values, whatever they are (i.e. your SDL_GL_SetAttribute() calls have no effect).

 

Well, it seems to be like this.

But what are these defaults?

 

I've placed

SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 4 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );

before window creation again, and that causes the errors to reappear.

 

When I remove those lines, everything works OK; the logger reports that the system started with OpenGL 4.3 on NVIDIA driver 319.32.

 

My card supports OpenGL 4.3, that's for sure, and the driver supports that card, so is the issue within SDL2 then?
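One way to see what the default context actually ended up as is to query it right after SDL_GL_CreateContext(); a hypothetical sketch using calls that exist in SDL2 and core GL:

// Hypothetical sketch: report what the created context really is.
int major = 0, minor = 0, depth = 0, doubleBuffered = 0;
SDL_GL_GetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, &major );
SDL_GL_GetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, &minor );
SDL_GL_GetAttribute( SDL_GL_DEPTH_SIZE, &depth );
SDL_GL_GetAttribute( SDL_GL_DOUBLEBUFFER, &doubleBuffered );
printf( "SDL says: GL %d.%d, depth %d, double buffered %d\n",
        major, minor, depth, doubleBuffered );

// The driver's own view of the context:
printf( "GL_VERSION:  %s\n", (const char*) glGetString( GL_VERSION ) );
printf( "GL_RENDERER: %s\n", (const char*) glGetString( GL_RENDERER ) );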


Have you tried requesting lower versions? 3.3 or 3.1 etc.

 

It certainly looks like you're doing everything correctly and it's how we do it on our projects.


Yes, I tried, and unfortunately I ended up with the same issues. On the GT 630 every GL version request fails with 'invalid enumerant'/'invalid operation'.

Without a version request it initializes GL 4.3 (the maximum this GPU is capable of) and everything seems to be fine.

 

Maybe it's a case of this exact driver working weirdly with this exact model of GPU?

I have the same driver on the laptop with the 740M, and it's all OK there with the SDL_GL_SetAttribute( SDL_GL_CONTEXT_* ) calls.

 

I'll try another driver next week and post the results here.


Well, I've tried to specify the OpenGL version on the GT 630 with the new driver [sorry, it took more than a week to get to this point]:

Message: (1397464013393): Running with OpenGL 4.4.0 NVIDIA 331.67

... and now I'm getting the same strange error strings as before. I don't know who to blame now:

 

the Linux OS (2 of 3 PCs work great),

SDL2 (weird things on the inside?),

my own code (the more obvious suspect, but I'm doing the standard init routine, and again, it works on 2 of 3 PCs),

or even the NVIDIA card/drivers?

 

Maybe I can go the hardcore route with some #ifdefs and platform-dependent code, using WinAPI on Win32 and X11 window init on *nix systems. That would cut the SDL2 part out entirely.

But is it worth it?


Which version of the GT 630 do you have? Only the Kepler version supports GL 4.3; the others only support up to 4.2 (according to NVIDIA's specs). I don't think much will come from specifying it with the new driver, as I suspect it's a hardware limitation. With that in mind, I'd say try requesting 4.0-4.2 instead of 4.3 and see if it works then.

 

 

If you suspect SDL, then you can try switching to GLFW and see if the problem persists. :)
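For comparison, a minimal GLFW 3 context-creation sketch (assuming GLFW 3 and GLEW are available; this is not code from the thread):

#include <cstdio>
#include <GL/glew.h>
#include <GLFW/glfw3.h>

// Hypothetical sketch: request an equivalent 3.3 core context with GLFW 3,
// to test whether the problem follows the windowing library.
int main()
{
    if ( !glfwInit() ) return -1;

    glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 3 );
    glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 3 );
    glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE );

    GLFWwindow* window = glfwCreateWindow( 800, 600, "context test", nullptr, nullptr );
    if ( !window ) { glfwTerminate(); return -1; }

    glfwMakeContextCurrent( window );
    glewExperimental = GL_TRUE;
    glewInit();

    // If this prints the expected version, context creation itself works here.
    printf( "GL_VERSION: %s\n", (const char*) glGetString( GL_VERSION ) );

    glfwDestroyWindow( window );
    glfwTerminate();
    return 0;
}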


It seems the hardware is OK: by default a 4.4 context is initialized (that is, when the version requests are commented out of the code), and my app gets these numbers from the OpenGL version string.

 

And I was trying to get a 3.3 core context in the first place.

 

Well, migrating to GLFW is still possible, but I find its input system a bit tricky, plus it seems that GLFW limits itself to 60 FPS.

I'll give it another try in order to find the guilty one :)

