
Strange rendering behavior on different cards.



#1 syskrank   Members   -  Reputation: 169


Posted 31 March 2014 - 06:25 AM

Hi everyone.

I'm facing quite a strange problem.

 

I'm initializing my OpenGL context with SDL2 2.0.3, forcing it to use the core profile (3.3), and I'm using GLEW for extensions. I'm on Linux Mint 16 with the stock 3.11.0-12-generic kernel and the proprietary NVIDIA drivers from the repos (version 319.32).

 

I've got the same setup on two different machines:

 

a Lenovo V580c laptop with a GeForce 740M

and

a PC with a GeForce GT 630.

I've also got another PC with an AMD Radeon HD 6670 card (OS: Debian + open-source driver).

 

My OpenGL application for now renders only one rotating triangle in perspective projection, with an FPS-like camera floating around.

 

 

The problem is that I'm not seeing anything on the GT 630 (on the second PC).

The 740M does things just fine, and so does the HD 6670. Yet on the GT 630 I'm getting a blank screen with a properly cleared color buffer and depth buffer.

 

Here's my init code:

void Window::Create() {
        // init sdl2
        if ( SDL_Init(  SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        //  set opengl 3.3
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

This one inits some default states:

    void RenderingEngine::InitGLDefaults(){
        glClearColor( 0.0f, 0.0f, 0.4f, 1.0f );
        glEnable( GL_DEPTH_TEST );
        glDepthFunc( GL_LESS );
        glEnable( GL_CULL_FACE );
        glCullFace( GL_BACK );
    }

This is how I render meshes:

    void RenderingEngine::RenderMesh( const Mesh& mesh ) {
        glEnableVertexAttribArray( 0 ); // vertices
        glEnableVertexAttribArray( 1 ); // texture coords
        glEnableVertexAttribArray( 2 ); // normals

        glBindBuffer( GL_ARRAY_BUFFER, mesh.vbo );
        glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 ); // vert coords
        glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) ); // tex coords
        glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) ); // normals

        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh.ibo );
        glDrawElements( GL_TRIANGLES, mesh.drawSize, GL_UNSIGNED_INT, 0 );

        glDisableVertexAttribArray( 0 );
        glDisableVertexAttribArray( 1 );
        glDisableVertexAttribArray( 2 );
    }

Here is the part of the main loop which involves rendering:

    void Application::Render() {
        render::RenderingEngine::RenderClear();
        testShader.Bind();

        glm::mat4 modelMatrix = testTransform.GetModelMatrix();
        glm::mat4 mvp = testCamera.GetViewProjection() * modelMatrix;

        testShader.SetUniformMat4( "MVP", mvp );
        render::RenderingEngine::RenderMesh( testMesh );
        testShader.Unbind();
        SDL_GL_SwapWindow( window.window );
    }

Again: this works on the 740M and HD 6670, but draws a blank screen on the GT 630.

 

Halp!


Edited by syskrank, 31 March 2014 - 06:26 AM.



#2 Mona2000   Members   -  Reputation: 585


Posted 31 March 2014 - 08:08 AM

Try creating the context in debug mode with:

SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG)

and then enabling OpenGL debug output.
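
For reference, a minimal sketch of what enabling debug output can look like with GLEW once the context exists (the callback name is illustrative, and it assumes the driver exposes KHR_debug):

#include <cstdio>

void APIENTRY GLDebugCallback( GLenum source, GLenum type, GLuint id,
                               GLenum severity, GLsizei length,
                               const GLchar* message, const void* userParam ) {
    // print every message the driver hands back
    fprintf( stderr, "GL debug: %s\n", message );
}

// after SDL_GL_CreateContext() and glewInit():
if ( GLEW_KHR_debug ) {
    glEnable( GL_DEBUG_OUTPUT );
    glEnable( GL_DEBUG_OUTPUT_SYNCHRONOUS );
    glDebugMessageCallback( GLDebugCallback, NULL );
}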



#3 JohnnyCode   Members   -  Reputation: 207


Posted 31 March 2014 - 09:40 AM

You don't clear the depth buffer in the posted code. Perhaps different default contents of the depth buffer between ATI and NVIDIA allow one card to pass the depth test while the other doesn't. Anyway, try to isolate what you can: first do only a clear on both cards, then disable all the tests (alpha, depth, stencil), and so on.
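
A possible first isolation step along those lines (a sketch, not the poster's code) - disable every optional test, then just clear and draw:

// isolate: no depth/stencil/cull/blend, only the clear and the raw draw
glDisable( GL_DEPTH_TEST );
glDisable( GL_STENCIL_TEST );
glDisable( GL_CULL_FACE );
glDisable( GL_BLEND );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );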



#4 Juliean   GDNet+   -  Reputation: 2387


Posted 31 March 2014 - 11:16 AM

Since the glDebugMessageCallback stuff is rather new and probably not widely supported (I see it's core in 4.3, while you are using 3.3), you might be well advised to wrap all your OpenGL calls in a simple macro:

#include <cstdio>

void CheckOpenGLError(const char* stmt, const char* fname, int line)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
    {
        // report which call failed and where; assert or throw here if you prefer
        fprintf(stderr, "OpenGL error 0x%04X at %s:%i - for %s\n", err, fname, line, stmt);
    }
}

#ifdef _DEBUG
#define GL_CHECK(stmt) { \
    stmt; \
    CheckOpenGLError(#stmt, __FILE__, __LINE__); \
}
#else
#define GL_CHECK(stmt) stmt
#endif

Which you can then use:

GL_CHECK(glEnableVertexAttribArray( 0 ));

and it will tell you if something is wrong in debug (compile) mode. You do need to wrap it around EVERY call you make, or else uncaught error codes will leak into other parts of the program and cause a different function to report the error instead. To end with a generic OpenGL rant: that's why OpenGL's global error handling and/or global state b... stuff sucks. glDebugMessageCallback is a nice step in the right direction, but for now I would recommend using both of those side by side.

 

EDIT: Just rechecked - you might want to keep the macro around for even faster debugging anyway, since at least on my two PCs there is no way to get the exact line of the error from glDebugMessageCallback; e.g. an exception thrown from the callback stays inside the GPU driver DLLs. With the macro you can assert, throw an exception, set a breakpoint, etc. and be fine. To repeat: use both of them side by side. As it's supported on my 4.2 setup, I assume it could also work for 3.3, even though the specification says otherwise.


Edited by Juliean, 31 March 2014 - 11:43 AM.


#5 Mona2000   Members   -  Reputation: 585


Posted 31 March 2014 - 01:10 PM

EDIT: Just rechecked - you might want to keep the macro around for even faster debugging anyway, since at least on my two PCs there is no way to get the exact line of the error from glDebugMessageCallback; e.g. an exception thrown from the callback stays inside the GPU driver DLLs.

glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);


#6 Stainless   Members   -  Reputation: 833


Posted 31 March 2014 - 06:59 PM

When I come across things like this, I start going through all the OpenGL settings I can think of.

I find it's usually a default-setting problem. (Sadly not always, but it's a good place to start.)

For example, the depth buffer may be disabled by default on one machine and enabled by default on another.

Textures could be disabled on one and enabled on another.

Culling, alpha blending, draw colour, etc. can all end up giving you a blank screen. You are setting a couple of them, but I don't see you bind a texture, and I don't see you enable texture 2D.

I do see you passing in texture coords, though.

After that you have to look at your shader and do the obvious things (check it's compiling, check it's linking, check the drivers are up to date, ...). See the sketch below for the first two checks.
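
A hedged sketch of those compile/link checks (the 'shader' and 'program' handles are hypothetical, and Logger is the OP's own class):

GLint status = GL_FALSE;
GLchar log[1024] = { 0 };
glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
if ( GL_TRUE != status ) {
    glGetShaderInfoLog( shader, sizeof( log ), NULL, log );
    Logger::WriteLog( LOG_S_ERROR, log );
}
glGetProgramiv( program, GL_LINK_STATUS, &status );
if ( GL_TRUE != status ) {
    glGetProgramInfoLog( program, sizeof( log ), NULL, log );
    Logger::WriteLog( LOG_S_ERROR, log );
}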



#7 syskrank   Members   -  Reputation: 169


Posted 01 April 2014 - 03:29 AM

Wow, thanks a lot for the replies :) I didn't expect that at all after the silent SDL2 forums :)

 

 

You don't clear the depth buffer in the posted code. Perhaps different default contents of the depth buffer between ATI and NVIDIA allow one card to pass the depth test while the other doesn't. Anyway, try to isolate what you can: first do only a clear on both cards, then disable all the tests (alpha, depth, stencil), and so on.

I do, I swear :) I rechecked everything; I am calling glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ) before every frame. So that's OK.

Also, I'm getting sane rendering results on both an NVIDIA and an AMD card; it's another NVIDIA card that's causing the problems (as I said before).

 

Like this:

GF740M - OK

HD6670 - OK

GF630 - ERROR

 

 


Which you can then use:

GL_CHECK(glEnableVertexAttribArray( 0 ));

 

Well, thanks a lot for the debugging method suggestion. I've implemented it and integrated it with my logging system.

 

My rendering method now looks like:

    // TODO: fix rendering on GF630
    void RenderingEngine::RenderMesh( const Mesh& mesh ) {
        GL_CHECK(glEnableVertexAttribArray( 0 )); // vertices
        GL_CHECK(glEnableVertexAttribArray( 1 )); // texture coords
        GL_CHECK(glEnableVertexAttribArray( 2 )); // normals

        GL_CHECK(glBindBuffer( GL_ARRAY_BUFFER, mesh.vbo ));
        GL_CHECK(glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 )); // vert coords
        GL_CHECK(glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) )); // tex coords
        GL_CHECK(glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) )); // normals

        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh.ibo );
        glDrawElements( GL_TRIANGLES, mesh.drawSize, GL_UNSIGNED_INT, 0 );

        glDisableVertexAttribArray( 0 );
        glDisableVertexAttribArray( 1 );
        glDisableVertexAttribArray( 2 );
    }

Also, I've changed the profile to OpenGL core 4.3.

And now I'm getting these messages (the same on 3.3):

Message: (1396343898768): Running with OpenGL 4.3.0 NVIDIA 319.32
Error: (1396343898768): OpenGL error in 'glEnableVertexAttribArray( 0 )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 13: invalid enumerant
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 0, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), 0 )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 18: invalid operation
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 1, 2, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (3 * sizeof( float )) )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 19: invalid operation
Error: (1396343898768): OpenGL error in 'glVertexAttribPointer( 2, 3, GL_FLOAT, false, VERTEX_COMPONENTS * sizeof( float ), (char*) (( 3 + 2 ) * sizeof( float )) )'-'/home/user/projects/my3d/src/render/RenderingEngine.cpp': 20: invalid operation

Looks like it's something with the OpenGL settings, yeah. But I haven't managed to figure out what exactly.



#8 syskrank   Members   -  Reputation: 169


Posted 01 April 2014 - 07:44 AM

OK, I figured it out.

It was entirely my fault in the SDL2 context initialization.

For some reason, the GF740M and HD6670 were indifferent to the SDL_GL_SetAttribute() call order, but on the GF630 it spoiled everything.

 

The error was at the point where I created the window:

void Window::Create() {
        // init sdl2
        if ( SDL_Init(  SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        //  set opengl 3.3
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

Notice that bunch of SDL_GL_* calls?

They must be placed !!!AFTER!!! SDL_GL_CreateContext(), e.g.:

void Window::Create() {
        // init sdl2
        if ( SDL_Init( SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS ) ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init SDL.");
            exit( -1 );
        }
        // create window
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        window = SDL_CreateWindow( title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, flags );
        if ( 0 == window ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to create SDL window.");
            exit( -1 );
        }
        glContext = SDL_GL_CreateContext( window );
        // init glew
        glewExperimental = true;
        if ( GLEW_OK != glewInit() ) {
            Logger::WriteLog( LOG_S_ERROR, "Unable to init GLEW." );
            exit( -1 );
        }
        // set gl attribs
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 4 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
        SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
        SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 16 );
        SDL_GL_SetAttribute( SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG );
        // set vsync
        SDL_GL_SetSwapInterval( 1 );
        // manage cursor
        HideCursor( true );
        WarpCursorXY( width / 2, height / 2 );
    }

I changed it, and everything started to work (rechecked on the GF740 - still good results). Hey, bunny! :)

 

[screenshot: the test scene now rendering correctly]

 

Thanks a lot to everyone.

 

Hope this will help somebody in the future.



#9 AgentC   Members   -  Reputation: 1269


Posted 01 April 2014 - 08:29 AM

I believe you are now initializing the OpenGL context with SDL's default values, whatever they are (i.e. your SDL_GL_SetAttribute() calls have no effect).




#10 Mona2000   Members   -  Reputation: 585


Posted 01 April 2014 - 08:38 AM

AgentC is correct; those SDL_GL_SetAttribute calls do nothing at all if placed after context creation.


Edited by Mona2000, 01 April 2014 - 08:39 AM.
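
For the record, a sketch of the ordering the SDL2 documentation describes - attributes first, then window, then context (the window title and size here are placeholders):

SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE );
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );
SDL_Window* window = SDL_CreateWindow( "my3d", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                       800, 600, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL );
SDL_GLContext glContext = SDL_GL_CreateContext( window );

Note also SDL_GL_CONTEXT_PROFILE_MASK, which the Create() code posted above never sets even though a core profile was intended.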


#11 syskrank   Members   -  Reputation: 169


Posted 01 April 2014 - 12:55 PM

Wow, that's strange - I didn't read the docs carefully enough. If I can't find the faulty one, it will be better to remove them completely.



#12 Promit   Moderators   -  Reputation: 6349


Posted 01 April 2014 - 01:43 PM

I just want to point out that glGetError actually incurs significant performance overhead, because it triggers pipeline flushes. In my case I found it advantageous to have a separate define that calls it after every GL call, and to enable that functionality only periodically as a check or when something goes wrong.
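
A sketch of that idea on top of Juliean's macro, with an assumed GL_HEAVY_CHECKS define (the name is made up) gating the per-call query:

#ifdef GL_HEAVY_CHECKS
#define GL_CHECK(stmt) do { \
    stmt; \
    CheckOpenGLError(#stmt, __FILE__, __LINE__); \
} while (0)
#else
#define GL_CHECK(stmt) stmt
#endif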



#13 syskrank   Members   -  Reputation: 169


Posted 02 April 2014 - 01:05 AM

Hello again, everyone.

 


I believe you are now initializing the OpenGL context with SDL's default values, whatever they are (i.e. your SDL_GL_SetAttribute() calls have no effect).

 

Well, it does seem to be like that.

But what are these defaults?

 

I've placed

SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 4 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3 );

before window creation again, and that causes the errors to appear.

 

When I remove these lines, everything works OK; the logger reports that the system started with OpenGL 4.3 on NVIDIA driver v319.32.

 

My card supports OpenGL 4.3, that's for sure, and the driver supports the card - so is the issue within SDL2, then?
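
One way to see what defaults the context actually got is to ask SDL after creation - a sketch, only meaningful once SDL_GL_CreateContext() has succeeded:

// query what the created context really provides (printf needs <cstdio>)
int major = 0, minor = 0, depth = 0;
SDL_GL_GetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, &major );
SDL_GL_GetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, &minor );
SDL_GL_GetAttribute( SDL_GL_DEPTH_SIZE, &depth );
printf( "got context %d.%d with %d depth bits\n", major, minor, depth );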



#14 NineYearCycle   Members   -  Reputation: 912


Posted 04 April 2014 - 02:53 AM

Have you tried requesting lower versions? 3.3 or 3.1 etc.

 

It certainly looks like you're doing everything correctly - it's how we do it on our projects.
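
A hedged sketch of that fallback idea, reusing the OP's 'window' (the version table is illustrative, and framebuffer attributes such as depth size still have to be set before the window is created):

// try context versions in descending order until one sticks
static const int versions[][2] = { {4,3}, {4,2}, {4,0}, {3,3}, {3,1} };
SDL_GLContext ctx = NULL;
for ( size_t i = 0; i < sizeof( versions ) / sizeof( versions[0] ) && !ctx; ++i ) {
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, versions[i][0] );
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, versions[i][1] );
    ctx = SDL_GL_CreateContext( window );
}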


"Ars longa, vita brevis, occasio praeceps, experimentum periculosum, iudicium difficile"

"Life is short, [the] craft long, opportunity fleeting, experiment treacherous, judgement difficult."


#15 syskrank   Members   -  Reputation: 169


Posted 05 April 2014 - 01:47 PM

Yes, I tried, and unfortunately I ended up with the same issues. On the GF630, every GL version request fails with 'invalid enumerant'/'invalid operation'.

Without a version request it initializes GL 4.3 (the maximum this GPU is capable of) and everything seems to be fine.

 

Maybe it's something like this exact driver working weirdly with this exact model of GPU?

I have the same driver on the laptop with the GF740, and everything is OK there with the SDL_GL_SetAttribute( SDL_GL_CONTEXT* ) calls.

 

I'll try another driver next week and post the results here.



#16 syskrank   Members   -  Reputation: 169


Posted 14 April 2014 - 02:35 AM

Well, I've tried to specify the OpenGL version on the GF630 with the new driver [sorry, it took more than a week to get to this point]:

Message: (1397464013393): Running with OpenGL 4.4.0 NVIDIA 331.67

... and now I'm getting the same strange error strings as before. I don't know whom to fight now:

 

the Linux OS (2 of 3 PCs work great),

SDL2 (weird things on the inside?),

my own code (seems most likely, but I'm doing the standard init routine, and again - it works on 2 of 3 PCs),

or even the NVIDIA card/drivers?

 

Maybe I should go the hardcore way with some #ifdefs and platform-dependent code, using WinAPI for Win32 and X11 window init for *nix systems. That would cut the SDL2 part away.

But is it worth it?



#17 Styves   Members   -  Reputation: 983


Posted 14 April 2014 - 03:28 AM

Which version of the GF630 do you have? Only the Kepler version supports GL 4.3; the others only support up to 4.2 (according to NVIDIA's specs). I don't think much will come from specifying it with the new driver, as I suspect it's a hardware limitation. So with that, I'd say try switching to 4.0-4.2 instead of 4.3 and see if it works then.

 

 

If you suspect SDL, you can try switching to GLFW and see if the problem persists. :)
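
For comparison, a minimal GLFW 3 sketch requesting a 3.3 core context (window title and size are placeholders):

#include <GLFW/glfw3.h>

int main() {
    if ( !glfwInit() ) return -1;
    glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 3 );
    glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 3 );
    glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE );
    GLFWwindow* win = glfwCreateWindow( 800, 600, "gl test", NULL, NULL );
    if ( !win ) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent( win );
    while ( !glfwWindowShouldClose( win ) ) {
        glClear( GL_COLOR_BUFFER_BIT );
        glfwSwapBuffers( win );
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}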



#18 syskrank   Members   -  Reputation: 169


Posted 15 April 2014 - 08:10 AM

It seems the hardware is OK - by default a 4.4 profile is initialized (that is, when the version demands are commented out of the code); my app gets these numbers from the OpenGL version string.

 

And I was trying to get a 3.3 core context in the first place.

 

Well, migrating to GLFW is still possible, but I find its input system a bit tricky, plus it seems that GLFW limits itself to 60 FPS.

I'll give it another try in order to find the guilty one :)



#19 Styves   Members   -  Reputation: 983


Posted 15 April 2014 - 02:43 PM

You can disable VSync in GLFW with glfwSwapInterval(0);

 

:)





