Strange rendering behavior on different cards.


Hi everyone.

I'm facing quite a strange problem.

I'm initializing my OpenGL context with SDL2 2.0.3, forcing it to use the core profile (3.3), and I'm using GLEW for extensions. I'm on Linux Mint 16 with the stock 3.11.0-12-generic kernel and the proprietary NVIDIA drivers from the repos (version 319.32).

I've got the same setup on two different machines:

a Lenovo V580c laptop with a GeForce 740M

and

a PC with a GeForce GT 630.

I've also got another PC with an AMD Radeon HD 6670 (running Debian with the open-source driver).

For now, my OpenGL application renders only a single rotating triangle in perspective projection, with an FPS-like camera floating around.

The problem is that I'm not seeing anything on the GT 630 (the second PC).

The 740M does things just fine, as does the HD6670. Yet on the GT 630 I'm getting a blank screen with a properly cleared color buffer and depth buffer.

Here's my init code:


void Window::Create() {
        // init sdl2
        if ( SDL_Init(  SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        //  set opengl 3.3
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

This one sets up some default states:


    void RenderingEngine::InitGLDefaults(){
        glClearColor( 0.0f, 0.0f, 0.4f, 1.0f );
        glEnable( GL_DEPTH_TEST );
        glDepthFunc( GL_LESS );
        glEnable( GL_CULL_FACE );
        glCullFace( GL_BACK );
    }

This is how I render meshes:


    void RenderingEngine::RenderMesh( const Mesh& mesh ) {
        glEnableVertexAttribArray( 0 ); // vertices
        glEnableVertexAttribArray( 1 ); // texture coords
        glEnableVertexAttribArray( 2 ); // normals

        glBindBuffer( GL_ARRAY_BUFFER, mesh.vbo );
        glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 ); // vert coords
        glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) ); // tex coords
        glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) ); // normals

        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh.ibo );
        glDrawElements( GL_TRIANGLES, mesh.drawSize, GL_UNSIGNED_INT, 0 );

        glDisableVertexAttribArray( 0 );
        glDisableVertexAttribArray( 1 );
        glDisableVertexAttribArray( 2 );
    }

Here is the part of the main loop that does the rendering:


    void Application::Render() {
        render::RenderingEngine::RenderClear();
        testShader.Bind();

        glm::mat4 modelMatrix = testTransform.GetModelMatrix();
        glm::mat4 mvp = testCamera.GetViewProjection() * modelMatrix;

        testShader.SetUniformMat4( "MVP", mvp );
        render::RenderingEngine::RenderMesh( testMesh );
        testShader.Unbind();
        SDL_GL_SwapWindow( window.window );
    }

Again: this works on the 740M and the HD6670, but gives a blank screen on the GT 630.

Halp!


Try creating the context in debug mode with:


SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG)

and then enabling OpenGL debug output.
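A minimal sketch of what that could look like once you have a debug context (assuming GLEW exposes the 4.3 / KHR_debug entry points; the callback signature follows GLDEBUGPROC):


#include <GL/glew.h>
#include <cstdio>

// Standard GLDEBUGPROC signature; the driver calls this with every debug message.
static void GLAPIENTRY DebugCallback( GLenum source, GLenum type, GLuint id,
                                      GLenum severity, GLsizei length,
                                      const GLchar* message, const void* userParam )
{
    fprintf( stderr, "GL debug: %s\n", message );
}

void EnableGLDebugOutput()
{
    if ( glDebugMessageCallback ) {               // only present on 4.3 / KHR_debug drivers
        glEnable( GL_DEBUG_OUTPUT );
        glEnable( GL_DEBUG_OUTPUT_SYNCHRONOUS );  // deliver the message on the offending call
        glDebugMessageCallback( DebugCallback, nullptr );
    }
}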

You do not clear the depth buffer in the posted code. Perhaps different defaults in the depth buffer memory between ATI and NVIDIA allow one to pass the depth test while the other doesn't. Anyway, try to isolate what you can: first do only a clear on both cards, then disable all tests (alpha, depth, stencil) and so on.
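A minimal sketch of that isolation pass, using the usual GL state names (there is no alpha test to disable in a core profile, so it's left out here):


// Isolation pass: clear both buffers explicitly, then draw with every
// optional per-fragment test and culling disabled, so only the raw
// triangle path is exercised.
glClearColor( 0.0f, 0.0f, 0.4f, 1.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

glDisable( GL_DEPTH_TEST );
glDisable( GL_STENCIL_TEST );
glDisable( GL_BLEND );
glDisable( GL_SCISSOR_TEST );
glDisable( GL_CULL_FACE );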

Since the glDebugMessageCallback stuff is rather new and probably not widely supported (I see it's core in 4.3, while you are using 3.3), it might be well advised to wrap all your OpenGL calls in a simple macro:


void CheckOpenGLError(const char* stmt, const char* fname, int line)
{
	GLenum err = glGetError();
	if(err != GL_NO_ERROR)
	{
              // handle error here
	}
}

#ifdef _DEBUG
#define GL_CHECK(stmt) { \
	stmt; \
	CheckOpenGLError(#stmt, __FILE__, __LINE__); \
}
#else
#define GL_CHECK(stmt) stmt
#endif

Which you can then use:


GL_CHECK(glEnableVertexAttribArray( 0 ));

and it will tell you if something is wrong in debug (compile) mode. You do need to wrap it around EVERY call that you make, or else uncaught error codes will leak into other parts of the program and cause another function to report the error instead. To end with a generic OpenGL rant: that's why OpenGL's global error handling and global state b... stuff sucks. glDebugMessageCallback is a nice step in the right direction, but for now I would recommend using both of those side by side.
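One refinement worth mentioning: glGetError keeps a queue of errors and returns only one per call, so a variant that loops until GL_NO_ERROR also flushes anything that leaked in from earlier, unwrapped calls. A sketch of that variant (printing to stderr as a placeholder for whatever error handling you prefer):


#include <GL/glew.h>
#include <cstdio>

void CheckOpenGLError(const char* stmt, const char* fname, int line)
{
	// glGetError returns one queued error per call; loop until the queue is empty
	// so errors left over from unwrapped calls don't get blamed on this statement alone.
	for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
	{
		fprintf(stderr, "OpenGL error 0x%04X after '%s' at %s:%d\n", err, stmt, fname, line);
	}
}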

EDIT: Just rechecked, you might want to keep the macro around for even faster debugging anyway, since at least on my two PCs there is no way to get the exact line of the error from glDebugMessageCallback; e.g. the exception ends up being thrown somewhere inside the GPU driver DLLs. With the macro you can assert, throw an exception, set a breakpoint, etc. and be fine. To repeat: use both of them side by side. Since it's supported in 4.2, I assume it could also work on 3.3, even though the specification says otherwise.

EDIT 2: If you do use the callback, also enable synchronous debug output so the message is delivered on the call that caused it (that way a breakpoint in the callback lands somewhere useful):


glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);

When I come across things like this I start going through all the opengl settings I can think of.

I find it is usually a default setting problem. (sadly not always, but it's a good place to start)

For example the depth buffer may be disabled by default on one machine, and enabled by default on another.

Textures could be disabled on one and enabled on another.

Culling, alpha blending, draw colour, etc. can all end up giving you a blank screen. You are setting a couple of them, but I don't see you binding a texture, and I don't see you enabling texture 2D.

I see you passing in texture coords though.

After that you have to look at your shader and do the obvious things (check that it's compiling, check that it's linking, check that the drivers are up to date, ...).
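A sketch of those compile/link checks, in case they aren't already in place (the helper names here are made up for illustration, not taken from the poster's shader class):


#include <GL/glew.h>
#include <cstdio>
#include <vector>

// Query the compile status of a shader object and dump the info log on failure.
bool CheckShaderCompiled( GLuint shader )
{
    GLint ok = GL_FALSE;
    glGetShaderiv( shader, GL_COMPILE_STATUS, &ok );
    if ( ok != GL_TRUE ) {
        GLint len = 0;
        glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &len );
        std::vector<char> log( len > 1 ? len : 1 );
        glGetShaderInfoLog( shader, (GLsizei)log.size(), nullptr, log.data() );
        fprintf( stderr, "Shader compile failed: %s\n", log.data() );
    }
    return ok == GL_TRUE;
}

// Query the link status of a program object and dump the info log on failure.
bool CheckProgramLinked( GLuint program )
{
    GLint ok = GL_FALSE;
    glGetProgramiv( program, GL_LINK_STATUS, &ok );
    if ( ok != GL_TRUE ) {
        GLint len = 0;
        glGetProgramiv( program, GL_INFO_LOG_LENGTH, &len );
        std::vector<char> log( len > 1 ? len : 1 );
        glGetProgramInfoLog( program, (GLsizei)log.size(), nullptr, log.data() );
        fprintf( stderr, "Program link failed: %s\n", log.data() );
    }
    return ok == GL_TRUE;
}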

Wow, thanks a lot for the replies :) Didn't expect that at all after the silent SDL2 forums :)

You do not clear the depth buffer in the posted code. Perhaps different defaults in the depth buffer memory between ATI and NVIDIA allow one to pass the depth test while the other doesn't. Anyway, try to isolate what you can: first do only a clear on both cards, then disable all tests (alpha, depth, stencil) and so on.

I do, I swear :) I rechecked everything, and I am calling glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ) before every frame. So that's OK.

Also, I'm getting sane rendering results on both an NVIDIA and an AMD card; it's another NVIDIA card that is causing problems (as I said before).

Like this:

GF740M - OK

HD6670 - OK

GF630 - ERROR


Which you can then use:

GL_CHECK(glEnableVertexAttribArray( 0 ));

Well, thanks a lot for the debugging method suggestion. I've implemented it and integrated it with my logging system.
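The check routes errors to the logger, roughly like this (a paraphrased sketch rather than the exact code; Logger::WriteLog and LOG_S_ERROR are the same logging calls as in Window::Create(), and the message format matches the log output further down):


#include <GL/glew.h>
#include <GL/glu.h>   // for gluErrorString
#include <sstream>

// Sketch: turn a failed GL call into a log line of the form
// "OpenGL error in '<stmt>'-'<file>': <line>: <error string>".
void CheckOpenGLError( const char* stmt, const char* fname, int line )
{
    GLenum err = glGetError();
    if ( err != GL_NO_ERROR ) {
        std::ostringstream msg;
        msg << "OpenGL error in '" << stmt << "'-'" << fname << "': " << line
            << ": " << reinterpret_cast<const char*>( gluErrorString( err ) );
        Logger::WriteLog( LOG_S_ERROR, msg.str().c_str() );
    }
}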

My rendering method now looks like:


    // TODO: fix rendering on GF630
    void RenderingEngine::RenderMesh( const Mesh& mesh ) {
        GL_CHECK(glEnableVertexAttribArray( 0 )); // vertices
        GL_CHECK(glEnableVertexAttribArray( 1 )); // texture coords
        GL_CHECK(glEnableVertexAttribArray( 2 )); // normals

        GL_CHECK(glBindBuffer( GL_ARRAY_BUFFER, mesh.vbo ));
        GL_CHECK(glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 )); // vert coords
        GL_CHECK(glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) )); // tex coords
        GL_CHECK(glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) )); // normals

        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh.ibo );
        glDrawElements( GL_TRIANGLES, mesh.drawSize, GL_UNSIGNED_INT, 0 );

        glDisableVertexAttribArray( 0 );
        glDisableVertexAttribArray( 1 );
        glDisableVertexAttribArray( 2 );
    }

Also, I've changed the profile to OpenGL core 4.3.

And now I'm getting these messages ( same on 3.3 ):


Message: (1396343898768): Running with OpenGL 4.3.0 NVIDIA 319.32
Error: (1396343898768): OpenGL error in 'glEnableVertexAttribArray( 0 )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 13: invalid enumerant
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 18: invalid operation
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 19: invalid operation
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 20: invalid operation

Looks like it's something with the OpenGL settings, yeah. But I haven't been able to figure out what exactly.

OK, I figured that out.

That was completely my fault with SDL2 context initialization.

For some reason, the GF740M and the HD6670 were indifferent to the SDL_GL_SetAttribute() call order, but on the GF630 it spoiled everything.

The error was at the point where I created a window:


void Window::Create() {
        // init sdl2
        if ( SDL_Init(  SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        //  set opengl 3.3
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

Notice that bunch of SDL_GL_* calls?

They must be placed !!!AFTER!!! SDL_GL_CreateContext(), i.e.:


void Window::Create() {
        // init sdl2
        if ( SDL_Init( SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        // set gl attribs
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 4 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 16 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG );
        // set vsync
        SDL_GL_SetSwapInterval( 1 );
        // manage cursor
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

I changed it, and everything started to work (rechecked on the GF740M as well - still good results). Hey, bunny! :)

(Screenshot: the bunny rendering correctly.)

Thanks a lot to everyone.

Hope this will help somebody in the future.

I believe you are now initializing the OpenGL context with SDL's default values, whatever they are (i.e. your SDL_GL_SetAttribute() calls have no effect).

AgentC is correct: those SDL_GL_SetAttribute calls do nothing at all if placed after context creation.
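The attribute requests have to go before the window and context are created; after context creation they only affect the next context you make. A minimal sketch of that ordering, treated as an illustration rather than the thread's actual code (note the explicit core-profile request, which the original code never made, and the read-back of what was actually granted):


#include <SDL.h>
#include <SDL_opengl.h>   // for glGetString / GL_VERSION

SDL_Window* CreateGLWindow()
{
    // Request attributes BEFORE creating the window and context.
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE ); // needed if you really want core
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
    SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );

    SDL_Window* window = SDL_CreateWindow( "test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        1280, 720, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL );
    SDL_GL_CreateContext( window );

    // The attributes above are only requests; read back what the driver actually gave us.
    int major = 0, minor = 0;
    SDL_GL_GetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, &major );
    SDL_GL_GetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, &minor );
    SDL_Log( "Requested 3.3 core, got %d.%d: %s", major, minor,
             (const char*)glGetString( GL_VERSION ) );
    return window;
}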

This topic is closed to new replies.
