glClear(depth) + texturing = slow
(OpenGL, 2D)
For some reason, when I have texturing enabled, clearing the _depth buffer_ becomes a major job, causing the whole application to run extremely slowly. It doesn't matter if I toggle texturing on/off each frame or turn it on at the start and leave it on. I'm rendering ONE textured sprite to the screen in windowed mode!
I'm digging up some old code here, so I don't know all the internals, but I've been profiling the app and have figured out that the call to glClear is what runs extremely slowly in these cases (not the sprite rendering code, though). Any ideas on what could cause this?
Thanks
Are you also requesting a stencil buffer in your pixel format? If so, clear depth AND stencil together and see how it goes.
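A minimal sketch of that suggestion, assuming the context actually has a stencil buffer: on hardware with a packed 24-bit depth / 8-bit stencil buffer, clearing only one of the two can force a slower read-modify-write path, so clear both in one call.

```c
#include <GL/gl.h>

/* Clear color, depth, and stencil in a single call so the driver can use
 * its fast combined-clear path instead of touching depth bits alone. */
void clear_frame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
}
```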
This sounds like you are hitting a software path, which would explain why it's so slow. If you also have a stencil buffer, you should clear it together with the depth buffer, as that is usually faster, but I doubt that's the problem. If it were, simply clearing the screen in a loop without ever calling any other OpenGL function should also be slow.
And how did you conclude that it's glClear which makes it slow? OpenGL functions are generally asynchronous, which means a call can return in essentially zero time and let the gfx card work in the background.
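One way to test that point (a sketch, not from the original posts; `time_clear_ms` is a hypothetical helper): bracket the clear with glFinish() so all queued GPU work drains before and after, then time just the call. Without the glFinish() calls, a profiler can end up charging someone else's queued work to whichever call happened to block.

```c
#include <windows.h>
#include <GL/gl.h>

/* Hypothetical timing helper: requires a current GL context. Forces the
 * pipeline to drain before and after glClear so the measured interval
 * covers only the clear itself, not previously queued commands. */
double time_clear_ms(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    glFinish();                        /* drain previously queued commands */
    QueryPerformanceCounter(&t0);
    glClear(GL_DEPTH_BUFFER_BIT);
    glFinish();                        /* wait until the clear has completed */
    QueryPerformanceCounter(&t1);

    return (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
}
```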
Thanks,
No, I'm not requesting a stencil buffer. Well, the profiler results were pretty clear: my clear task, with just the call to glClear, was the most time-consuming of all tasks, and commenting out the depth buffer bit flag in that call makes all the difference in the world, too. Moving the glClear somewhere else doesn't help either.
I guess I could figure out how to live without depth testing, but having it would make things so much easier.
What do you mean by hitting a software path? What mystifies me is that commenting out a single call to glEnable(texturing) solves the problem but leaves me without... texturing!
Graphics card? Chipset? GL_VENDOR string?
Also, define "extremely slow". 'Slow' as in 5000 fps to 1000 fps (which would be completely normal), or 200 fps to 1 fps (which would be a software fallback)?
I'm running this simple app on a machine with a new Radeon card; GL_VENDOR = ATI; extremely slow = below 5 fps. I've tried depth buffer bit settings of 8, 16, and 24. glGetError just returns GL_NO_ERROR.
Why would depth testing fall back to software on a new gfx card? How can I be sure it really is, and why?
Thanks
[Edited by - Saandman on December 9, 2007 2:51:49 PM]
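To answer the "how can I be sure" question above, one common check (a sketch, assuming a current GL context; `on_software_renderer` is a hypothetical helper): look at the renderer string. Microsoft's software implementation reports itself as "GDI Generic", whereas a hardware ATI driver reports the card name.

```c
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

/* Call with a current GL context. Returns nonzero if we appear to be on
 * Microsoft's software renderer rather than the vendor's hardware driver. */
int on_software_renderer(void)
{
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);
    return renderer && strstr(renderer, "GDI Generic") != NULL;
}
```

Note this only tells you the whole context is software; a hardware context can still hit a per-feature software fallback, which the renderer string won't reveal.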
pixelFormat.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pixelFormat.nVersion = 1;
pixelFormat.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pixelFormat.iPixelType = PFD_TYPE_RGBA;
pixelFormat.cColorBits = 32;
pixelFormat.cDepthBits = 24;
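For what it's worth, a sketch of the full setup around that fragment (the `setup_pixel_format` helper and its `hdc` parameter are assumptions, not from the post): zero the descriptor first, since uninitialized fields such as cStencilBits can otherwise request odd combinations, and after choosing a format, call DescribePixelFormat to see what the driver actually granted. PFD_GENERIC_FORMAT set without PFD_GENERIC_ACCELERATED means an unaccelerated software format.

```c
#include <windows.h>
#include <stdio.h>

/* hdc is assumed to be the target window's device context.
 * Returns nonzero on success. */
int setup_pixel_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    ZeroMemory(&pfd, sizeof(pfd));      /* no stray garbage in unset fields */
    pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int fmt = ChoosePixelFormat(hdc, &pfd);
    if (!fmt || !SetPixelFormat(hdc, fmt, &pfd))
        return 0;

    /* Ask what we actually got, not what we asked for. */
    PIXELFORMATDESCRIPTOR got;
    DescribePixelFormat(hdc, fmt, sizeof(got), &got);
    printf("depth bits granted: %d\n", got.cDepthBits);
    if ((got.dwFlags & PFD_GENERIC_FORMAT) && !(got.dwFlags & PFD_GENERIC_ACCELERATED))
        printf("warning: unaccelerated (software) pixel format\n");
    return 1;
}
```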