c_olin

OpenGL Using SDL for OpenGL context is slower than GLFW


I've been cleaning up my code a bit, making sure that switching graphics APIs later wouldn't be a huge deal. As part of that, I decided to switch from GLFW to SDL for context creation and input.

Unfortunately, I experienced a severe performance degradation with SDL: FPS dropped from 180-200 (GLFW) to 40-70 (SDL), and the framerate is choppy and less consistent. I'm not too familiar with OpenGL context management (hence using GLFW and SDL in the first place), and I found little information comparing SDL and GLFW performance, so I decided to ask here.

Here is the GLFW code:
[code]

if (!glfwOpenWindow(width, height, 8, 8, 8, 8, bytesPerPixel * 8, 0,
                    fullscreen ? GLFW_FULLSCREEN : GLFW_WINDOW)) {
    throw Exception("Failed to create window.");
}

glfwDisable(GLFW_MOUSE_CURSOR);

glfwSetWindowTitle(getName().c_str());

glfwSetKeyCallback(keyCallback);
glfwSetMousePosCallback(mousePosCallback);
glfwSetMouseButtonCallback(mouseButtonCallback);

setMouseOrigin(Vector<2, int>(width / 2, height / 2));

if (verticalSync) {
    glfwSwapInterval(1);
} else {
    glfwSwapInterval(0);
}
[/code]



And the SDL code:
[code]

int videoFlags;
const SDL_VideoInfo* videoInfo;

videoInfo = SDL_GetVideoInfo();

if (!videoInfo) {
    String error(SDL_GetError());
    SDL_Quit();

    throw Exception("Failed: " + error);
}

videoFlags = SDL_OPENGL;
videoFlags |= SDL_GL_DOUBLEBUFFER;
videoFlags |= SDL_HWPALETTE;

SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 0);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

if (videoInfo->hw_available) {
    videoFlags |= SDL_HWSURFACE;
} else {
    videoFlags |= SDL_SWSURFACE;
}

if (videoInfo->blit_hw) {
    videoFlags |= SDL_HWACCEL;
}

SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

if (!SDL_SetVideoMode(width, height, bytesPerPixel * 8, videoFlags)) {
    String error(SDL_GetError());
    SDL_Quit();

    throw Exception("Failed: " + error);
}

SDL_ShowCursor(0);
[/code]



Note that my renderer uses deferred shading and therefore heavily relies on frame buffer objects. I am currently developing on Windows 7. Any help is appreciated.

Are you relying on the libraries' own 2D drawing routines?

If so, the performance penalty might be because SDL's internal 2D drawing doesn't use OpenGL.

Can you post a minimal program reproducing this behaviour?

Your SDL code is quite unusual; when dealing with OpenGL you generally just pass SDL_OPENGL (possibly with SDL_FULLSCREEN) to SDL_SetVideoMode(). Most of the other flags you are setting probably shouldn't be used.

My engine isn't using FBOs yet, but when I did a quick speed test before starting, I found SDL to be MUCH faster than GLFW.

You could try taking a look at the SDL event handling portion of your code. When I first started using SDL, I made the mistake of not specifying which events I actually wanted the program to handle; instead, it checked all events: keyboard, mouse, OS-specific, joypad, and so on. Once I restricted the code to only the events I wanted, resource usage dropped dramatically.

Thanks for the quick replies everyone.

[quote name='Yours3!f' timestamp='1306695891' post='4817204']
Are you relying on the libraries' own 2D drawing routines?

If so, the performance penalty might be because SDL's internal 2D drawing doesn't use OpenGL.
[/quote]

No.

[quote name='rip-off' timestamp='1306697697' post='4817214']
Can you post a minimal program reproducing this behaviour?
[/quote]

That is quite impossible as my engine is fairly complex.

[quote name='rip-off' timestamp='1306697697' post='4817214']
Can you post a minimal program reproducing this behaviour?

Your SDL code is quite unusual; when dealing with OpenGL you generally just pass SDL_OPENGL (possibly with SDL_FULLSCREEN) to SDL_SetVideoMode(). Most of the other flags you are setting probably shouldn't be used.
[/quote]

I tried getting rid of everything else and just calling SDL_SetVideoMode with SDL_OPENGL and saw no observable difference. At least the code is considerably shorter.

[quote name='Ninjaboi' timestamp='1306706219' post='4817254']
You could try taking a look at the SDL event handling portion of your code. When I first started using SDL, I made the mistake of not specifying which events I actually wanted the program to handle; instead, it checked all events: keyboard, mouse, OS-specific, joypad, and so on. Once I restricted the code to only the events I wanted, resource usage dropped dramatically.
[/quote]

This might be on the right track. I added an event filter and got a smoother framerate, but it is still hovering around 70, which is concerning. Here is my event code. It just translates keyboard and mouse events into internally represented events and pushes them onto the queue.

[code]

int filter(SDL_Event* event) {
    switch (event->type) {
        case SDL_KEYDOWN:
        case SDL_KEYUP:
        case SDL_MOUSEMOTION:
        case SDL_MOUSEBUTTONDOWN:
        case SDL_MOUSEBUTTONUP:
        case SDL_QUIT:
            return 1;
        default:
            return 0;
    }
}

SDL_SetEventFilter((SDL_EventFilter)&filter); // Called on window creation.


void SDLWindow::swapBuffers() {
    SDL_GL_SwapBuffers();

    SDL_Event event;
    Vector<2, int> mouseOrigin = getMouseOrigin();

    while (SDL_PollEvent(&event)) {
        switch (event.type) {
            case SDL_KEYDOWN:
                pushInputEvent(InputEvent(InputEventType::KeyDown, mouseOrigin, (Key::Enum)(int)event.key.keysym.sym));
                break;
            case SDL_KEYUP:
                pushInputEvent(InputEvent(InputEventType::KeyUp, mouseOrigin, (Key::Enum)(int)event.key.keysym.sym));
                break;
            case SDL_MOUSEMOTION:
                pushInputEvent(InputEvent(InputEventType::MouseMove, Vector<2, int>((int)event.motion.x, (int)event.motion.y)));
                break;
            case SDL_MOUSEBUTTONDOWN:
                pushInputEvent(InputEvent(InputEventType::KeyDown, mouseOrigin, (Key::Enum)(int)event.button.button));
                break;
            case SDL_MOUSEBUTTONUP:
                pushInputEvent(InputEvent(InputEventType::KeyUp, mouseOrigin, (Key::Enum)(int)event.button.button));
                break;
            case SDL_QUIT:
                break;
            default:
                break;
        }
    }
}
[/code]


Not quite sure if this will work, but you could try removing the default case in that event switch statement. If I recall correctly, that was the resource-hungry part of one of my first SDL projects. My theory was that if none of the events I had specified matched, the default case was being hit for every other possible event. Sorry if that doesn't make much sense; I'm in a room of very loud people, and it's hard to think when your head is pounding :cool:.

Tell us if that works ( cross your fingers! ).

EDIT: You might also check your game's refresh timer/loop. See if you have already limited your frame rate (or just the number of times your main loop runs). I usually cap mine at 20 for small 2D games and 60 for everything else.

Thanks for the reply.

[quote name='Ninjaboi' timestamp='1306713773' post='4817285']
Not quite sure if this will work, but you could try removing the default case in that event switch statement. If I recall correctly, that was the resource-hungry part of one of my first SDL projects. My theory was that if none of the events I had specified matched, the default case was being hit for every other possible event. Sorry if that doesn't make much sense; I'm in a room of very loud people, and it's hard to think when your head is pounding :cool:.

Tell us if that works ( cross your fingers! ).
[/quote]

I tried this and it had no effect, which makes sense: with my event filter in place I am only receiving 0-3 events per frame.

[quote name='Ninjaboi' timestamp='1306713773' post='4817285']
EDIT: You might also check your game's refresh timer/loop. See if you have already limited your frame rate (or just the number of times your main loop runs). I usually cap mine at 20 for small 2D games and 60 for everything else.
[/quote]

My main loop shouldn't have anything to do with it, since I don't have this problem when I use GLFW.

With that kind of framerate drop, you're doing something wrong.

First thing is - as always with framerates in that kind of region - check for vsync.

I note that you're asking for a 16-bit depth buffer. Double check what you actually get (SDL_GL_GetAttribute) and also double-check that you're not getting stencil as well. It's common enough (not widespread but I've seen it happen a few times) for OpenGL context creation to give you stencil even if you didn't ask for it (or asked for 0 bits), and if so, you should be clearing stencil at the same time as you clear depth. That will only account for a ~10% to ~20% perf drop, but it's still significant enough.

If you're doing an SDL_Delay at the end of each frame, stop doing it now. SDL's timer is quite coarse, and SDL_Delay guarantees a minimum sleep time, not a maximum or exact one. You may be sleeping for a lot longer than you think you are.

Any reason for the SDL_HWPALETTE? You're not trying to use OpenGL in color-index mode, are you? Take it out back and shoot it; you might be getting some weird pixel format that's dropping stuff to software emulation. While you're at it, drop your startup flags to the bare minimum: rip out everything that's not needed, start with what was suggested above, and only add what you actually need to support your program.

So start with that, see how you get on, and report back. ;)

Thanks for the reply. The SDL init code is a copy-paste from the old NeHe tutorials. I did try removing all flags except SDL_OPENGL and got the same results. Vsync is turned off. I don't use the stencil buffer, but I don't really care whether there is one or not. In GLFW I tried using 8 bits per channel, 24-bit depth, and a 24-bit stencil, and it worked great. I tried the exact same options in SDL and got the same frame-rate drop and stuttering:

[code]

SDL_SetEventFilter((SDL_EventFilter)&filter);

SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 24);

SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 0);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

if (!SDL_SetVideoMode(width, height, bytesPerPixel * 8, SDL_OPENGL)) {
    String error(SDL_GetError());
    SDL_Quit();

    throw Exception("Failed: " + error);
}

SDL_ShowCursor(0);
[/code]

I have tried messing with the values and commenting out lines, and nothing seems to affect my results. My OpenGL code does not generate any OpenGL errors, and I have run it through gDEBugger and removed all deprecated, redundant, and erroneous calls. I really have no idea how I could be getting such dramatically different results. I am considering digging through SDL and GLFW to see how the native code differs between the two.


You don't need to post your entire engine. Just the basic OpenGL/SDL initialisation code, the event code and a drawing loop that does nothing. Enough to demonstrate the problem, nothing more.

[quote name='rip-off' timestamp='1306763172' post='4817498']
You don't need to post your entire engine. Just the basic OpenGL/SDL initialisation code, the event code and a drawing loop that does nothing. Enough to demonstrate the problem, nothing more.
[/quote]



I see. I have shown all of my SDL code, and my OpenGL code is fairly minimal (just VBOs, FBOs, textures, and shaders). However, I'm not sure the change in frame rate would be apparent with no drawing. In fact, I'm pretty sure it won't be observable: my main-menu GUI is not considerably slower with SDL; the slowdown only appears when the in-game scene, which is rather complex, is rendered.

Thanks for the reply; I appreciate the help. I'm considering sticking with GLFW for a while, since graphics-API independence is not super important right now. Perhaps this issue will disappear as my engine matures, as it seems the slowdown may not be directly related to SDL.

