Oogst

OpenGL Crash in glGenFramebuffersEXT for some users


Our game Awesomenauts was released last week for Windows, and for a small number of users it turns out to crash when post-effects are enabled. It works fine without them. All of these users have Nvidia videocards, but from various generations: from the 8800 to the 560. Most users with the exact same videocards don't get these crashes. I asked those with crashes to update their drivers, and that does not help.

Now the interesting thing is that creating the first rendertexture does not crash, while the [i]second[/i] rendertexture the game creates crashes. It crashes in glGenFramebuffersEXT (or maybe glGenTextures, I'm not entirely sure, but I think it is glGenFramebuffersEXT). These rendertextures are created directly after each other, with no relevant code in between. So it basically looks like this (some irrelevant code in between has been removed):

[code]
{
    // First rendertexture: this one is created without problems.
    // width and height are the rendertexture dimensions, defined elsewhere.
    unsigned int frameBufferID;
    unsigned int textureID;
    glGenFramebuffersEXT(1, &frameBufferID);
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    GLint filterMode = GL_LINEAR;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filterMode);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filterMode);
    GLint frameBufferFormat = GL_RGBA;
    glTexImage2D(GL_TEXTURE_2D, 0, frameBufferFormat, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBufferID);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureID, 0);
}
{
    // Second rendertexture: identical code, but this one crashes.
    unsigned int frameBufferID;
    unsigned int textureID;
    glGenFramebuffersEXT(1, &frameBufferID); // << CRASH!
    glGenTextures(1, &textureID); // << OR MAYBE THE CRASH IS HERE
    glBindTexture(GL_TEXTURE_2D, textureID);
    GLint filterMode = GL_LINEAR;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filterMode);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filterMode);
    GLint frameBufferFormat = GL_RGBA;
    glTexImage2D(GL_TEXTURE_2D, 0, frameBufferFormat, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBufferID);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureID, 0);
}[/code]

Some random pieces of information:
- Crashes are on Windows.
- Crashes only happen on Nvidia videocards.
- It happens for less than 1% of our users (which is still quite a few computers for a game that is selling well on Steam).
- We use SDL 1.2.13.
- Our engine is multi-threaded, and all OpenGL code happens in one thread.

What could cause this to crash on only some rare computers, and what can I do to fix this?

[quote name='Oogst' timestamp='1344511296' post='4967722']
I asked those with crashes to update their drivers, and that does not help.
[/quote]
Try to get a dxdiag report and take a look at the driver version yourself.

Otherwise your code looks OK to me. Maybe it is just the result of a previous error. I would add a lot of error-checking code at the suspicious location and send the instrumented exe to one of the players with this problem to track it down.
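
Something like this rough sketch, for example (the helper name is just illustrative, and it assumes your GL headers are already included):

[code]
#include <cstdio>

// Drain the GL error queue and log everything with file/line information.
// glGetError only returns one error per call, hence the loop.
static void checkGlError(const char* file, int line)
{
    for (GLenum error = glGetError(); error != GL_NO_ERROR; error = glGetError())
    {
        std::fprintf(stderr, "GL error 0x%04X at %s:%d\n", error, file, line);
    }
}

#define CHECK_GL_ERROR() checkGlError(__FILE__, __LINE__)
[/code]

Calling CHECK_GL_ERROR() after every call in the rendertexture setup would at least tell you which call the driver objects to before it crashes.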

[quote name='Ashaman73' timestamp='1344513695' post='4967742']
Try to get a dxdiag report and take a look at the driver version yourself.[/quote]
I already have this:
GL_VENDOR: NVIDIA Corporation
GL_RENDERER: GeForce 9800 GT/PCIe/SSE2
GL_VERSION: 3.3.0
What other information would be interesting to check in the dxdiag report?

[quote]Otherwise your code looks OK to me. Maybe it is just the result of a previous error. I would add a lot of error-checking code at the suspicious location and send the instrumented exe to one of the players with this problem to track it down.[/quote]
I call glGetError() and cgGetError() regularly already, and they don't report anything at all. I added logging to find the exact line where the crash happens. What other things could I log that would be interesting?

[quote name='Oogst' timestamp='1344514720' post='4967746']
GL_VERSION: 3.3.0
[/quote]
That is the supported GL version, not the driver version. The current driver version for Nvidia should be above 300. An early driver version that supported OpenGL 3.3.0 might be buggier than a later one that still supports the same OpenGL version.

[quote name='Oogst' timestamp='1344514720' post='4967746']
I call glGetError() and cgGetError() regularly already,
[/quote]
By the way, this kills performance (or at least decreases it, because it flushes the command queue). Nevertheless, there is a dedicated error-checking function available for framebuffers (glCheckFramebufferStatusEXT); try calling it after creating the first framebuffer.
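
Roughly like this, as a sketch using the EXT entry points you are already calling:

[code]
// After attaching the texture, ask the driver whether the framebuffer
// is actually complete; an incomplete framebuffer hints at a setup problem.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBufferID);
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
{
    // Log the raw status code so the affected players can report it back.
    std::fprintf(stderr, "Framebuffer incomplete, status: 0x%04X\n", status);
}
[/code]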

I'd suggest using the core (i.e. non-EXT) version of the FBO API where it's available, which it should be on all 3.3 hardware. My best guess here is that you're using some other functionality which doesn't interact well with the EXT version on these cards.
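
For the creation path that would look roughly like this (a sketch of the same setup from your snippet, with the core names):

[code]
// Core (non-EXT) equivalents of the calls in the original snippet.
glGenFramebuffers(1, &frameBufferID);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureID, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER); // the completeness check exists in core too
[/code]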

I tried using the non-EXT versions, but it still crashes. :(

Strangely, I still had to use the EXT type names; is that correct? For some reason I can request glGenFramebuffers and it works fine, but I can only store it in a PFNGLGENFRAMEBUFFERSEXTPROC pointer. It seems like PFNGLGENFRAMEBUFFERSPROC does not exist?! Here's how I get the functions related to the framebuffer:

[code]PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffers = PFNGLGENFRAMEBUFFERSEXTPROC(wglGetProcAddress("glGenFramebuffers"));
if (glGenFramebuffers == NULL)
{
    // Fall back to the EXT entry point when the core name is not exported.
    glGenFramebuffers = PFNGLGENFRAMEBUFFERSEXTPROC(wglGetProcAddress("glGenFramebuffersEXT"));
}[/code]

I also still had to use GL_FRAMEBUFFER_EXT instead of GL_FRAMEBUFFER. Is that okay, or am I doing something wrong?

So using glGenFramebuffers instead of glGenFramebuffersEXT does work on my own computer and behaves exactly the same as the EXT versions, but on the computers where it used to crash, it still crashes.

Have you considered using an extension wrangler (like GLEW) instead of doing that yourself? Right now it is not clear whether the problem happens because you are setting up the extension functions incorrectly or because something in OpenGL does not work as intended.

I second the notion of using GLEW instead of retrieving the function addresses yourself.
Also note that the name of your function variable is just that: a variable name. Changing it will not change the behaviour of your program. The call to wglGetProcAddress("glGenFramebuffersEXT") determines whether you are retrieving the EXT version of the function or not. I am not sure whether there is actually any difference between the two, though, since glGenFramebuffers was promoted to core.
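
If your glext.h is simply too old to declare PFNGLGENFRAMEBUFFERSPROC, you can also declare an equivalent pointer type yourself, since both entry points share the same signature. A sketch (the typedef name is made up):

[code]
// Self-declared pointer type; the core and EXT functions share this signature.
typedef void (APIENTRY *GenFramebuffersFunc)(GLsizei n, GLuint* framebuffers);

GenFramebuffersFunc genCore = GenFramebuffersFunc(wglGetProcAddress("glGenFramebuffers"));
GenFramebuffersFunc genExt  = GenFramebuffersFunc(wglGetProcAddress("glGenFramebuffersEXT"));

// Logging both pointers would show whether the driver maps the two names
// to the same entry point on the affected machines.
[/code]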

I didn't know about GLEW before it was brought up in this topic, and I really don't want to switch these kinds of libraries on an already released game with tens of thousands of users: the risk is just too big...

What could I be doing wrong in retrieving these functions that GLEW would fix? I think my own code does exactly what the standard advocates, or am I overlooking something here?

Anyway, since it works for 99% of users, I don't think the way the extensions are retrieved is the problem, right? Isn't it more probably something in the use of the rendertextures themselves?

Well, for a start you are missing several constants, starting with GL_FRAMEBUFFER. I'm not sure if GL_FRAMEBUFFER_EXT works instead. Then you have to query all the required functions yourself and make extremely sure that their signatures match exactly. Do glGenFramebuffersEXT and glGenFramebuffers have the exact same signature and calling convention? Probably, but I'd hate to have to verify that myself for every function I need to use.

Even if you do not want to use GLEW in general (though I would strongly advocate using GLEW or an equivalent well-tested extension wrangler), you could still build a test version which uses GLEW instead of your own code and send it to the people who currently have the crashing problem.
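
The initialization is small; a minimal sketch of what the GLEW path could look like (the function name is illustrative):

[code]
#include <GL/glew.h>
#include <cstdio>

// Call once after the GL context has been created.
static bool initGlExtensions()
{
    GLenum glewStatus = glewInit();
    if (glewStatus != GLEW_OK)
    {
        std::fprintf(stderr, "glewInit failed: %s\n",
                     reinterpret_cast<const char*>(glewGetErrorString(glewStatus)));
        return false;
    }
    // GLEW exposes both variants, so you can log which one the driver offers.
    if (GLEW_ARB_framebuffer_object)
    {
        // Core-style entry points (glGenFramebuffers etc.) are available.
    }
    else if (GLEW_EXT_framebuffer_object)
    {
        // Only the EXT entry points (glGenFramebuffersEXT etc.) exist.
    }
    return true;
}
[/code]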

