
OpenGL: Erroneous Context Version


Geometrian    1810
Hi,

I need at least an OpenGL 2.1 context, and preferably a newer one. I am interfacing directly with the Win32 layer.

I'm setting up a context using one of two methods; neither works correctly all of the time.

Method 1:
1: Create an invisible dummy window that "hosts" the context
2: Make context with wglCreateContext
3: Set context to be current
4: Load extensions
5: Make a new (visible) window and use the context in it.

Method 1 seems to work fine for most programs using this code, and glGetString(GL_VERSION) typically indicates a 4.2 context (as high as my card supports). However, in one particular program, for some reason it instead indicates a 1.2 context and advanced functionality subsequently fails.
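
(For illustration, here is a minimal sketch of roughly what Method 1 amounts to, not the actual code: error handling is omitted, the built-in "STATIC" window class is used only to avoid registering one, and the pixel-format fields are just plausible placeholders.)

[source]// Sketch only: assumes <windows.h>, <GL/gl.h> and <cstdio> are included.
// Step 1: invisible dummy window that will "host" the context.
HWND hWnd = CreateWindowA("STATIC", "gl_dummy", WS_POPUP,
                          0, 0, 16, 16, NULL, NULL, GetModuleHandle(NULL), NULL);
HDC hDC = GetDC(hWnd);

// Give it an ordinary hardware-accelerated pixel format.
PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);

// Steps 2-3: legacy context, made current.
HGLRC context = wglCreateContext(hDC);
wglMakeCurrent(hDC, context);

// Steps 4-5 (load extensions, create the visible window) follow. At this point
// the driver normally reports its highest compatibility-profile version:
printf("GL_VERSION = %s\n", (const char*)glGetString(GL_VERSION));[/source]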

To try to solve this, I changed the code to implement Method 2.

Method 2:
1: Create an invisible dummy window that "hosts" all contexts
2: Make a dummy context with wglCreateContext
3: Set the dummy context to be current
4: Load extensions
5: Make a new context with wglCreateContextAttribsARB having the desired properties
6: Set the dummy context to be not current and then set the new context to be current
7: Delete dummy context
8: Make a new (visible) window and use the new context in it.

Using this, I can get an OpenGL 3.1 context to work correctly (since my programs use OpenGL 2 functionality, a 3.2 context is not used). However, for that one particular program, something very odd happens: glGetString(GL_VERSION) indicates a 1.2 context, but checking it with this:

[source]int version[2];
glGetIntegerv(GL_MAJOR_VERSION, version  );
glGetIntegerv(GL_MINOR_VERSION, version+1);
printf("OpenGL %d.%d\n", version[0], version[1]);[/source]

. . . indicates the 3.1 context as requested! However, the advanced functionality still fails, so I suspect that report is wrong.

It's worth noting that the code for the one particular program where both methods fail is copied directly from a program that works. For some reason, the compiled binaries don't hash to the same value, which suggests that some configuration option might be perturbing this problem into existence.

-G

Aks9    1499

Why are you creating a dummy window?

That should be done only if you need multisampling without FBO, in order to find appropriate pixel format.

In all other cases, you should do the following:

 

1. Make a new (visible) window and set an appropriate pixel format
2. Create a dummy context with wglCreateContext
3. Set the dummy context to be current
4. Create a new (GL 3.0+) context with wglCreateContextAttribsARB having the desired properties
5. Set the new context to be current
6. Delete the dummy context
7. Load extensions using the new context (sketched below)
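
(As a sketch, step 7 without a loader library such as GLEW might look like the following; the PFNGLGENFRAMEBUFFERSPROC typedef comes from <GL/glext.h> and glGenFramebuffers is only an example entry point.)

[source]// Step 7: with the final context current, resolve the post-1.1 entry points
// the application needs (or simply call glewInit() if GLEW is used).
PFNGLGENFRAMEBUFFERSPROC glGenFramebuffers =
    (PFNGLGENFRAMEBUFFERSPROC)wglGetProcAddress("glGenFramebuffers");
if (!glGenFramebuffers) {
    // NULL here means the driver/context does not expose the function.
}[/source]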

mhagain    13430

Yeah, creating the context on a separate window seems weird - in Windows (at least), your OpenGL context is specific to your HDC, which in turn is specific to your window.  So even if it does work for you, you're still relying on undefined behaviour to "do the right thing".

Brother Bob    10344

[quote]Yeah, creating the context on a separate window seems weird - in Windows (at least), your OpenGL context is specific to your HDC, which in turn is specific to your window.  So even if it does work for you, you're still relying on undefined behaviour to "do the right thing".[/quote]

Check the documentation for wglMakeCurrent. It states that you can activate a context on any device context, as long as the pixel format and the device is the same.

mhagain    13430

[quote]
[quote]Yeah, creating the context on a separate window seems weird - in Windows (at least), your OpenGL context is specific to your HDC, which in turn is specific to your window.  So even if it does work for you, you're still relying on undefined behaviour to "do the right thing".[/quote]
Check the documentation for wglMakeCurrent. It states that you can activate a context on any device context, as long as the pixel format and the device is the same.
[/quote]

 

Hmm - you're right: http://msdn.microsoft.com/en-us/library/windows/desktop/dd374387%28v=vs.85%29.aspx

 

[quote]It need not be the same hdc that was passed to wglCreateContext when hglrc was created, but it must be on the same device and have the same pixel format.[/quote]

 

Can someone -1 me? :)

Geometrian    1810

[quote]Why are you creating a dummy window?
That should be done only if you need multisampling without FBO, in order to find appropriate pixel format.[/quote]

That's actually the eventual plan.

However, the real reason is to make the design cleaner. Contexts require a window to be created, but this supposes that that window will be around forever. The way I've structured the architecture is to have a context wrapper object contain its own invisible window. So, the window that "owns" the context is guaranteed to be around for as long as the life of the context. This allows the user to create and destroy windows at will without affecting the context's existence.

 

[quote]In all other cases, you should do the following:
[...][/quote]

Don't I need to load extensions before using wglCreateContextAttribsARB?

Edited by Geometrian

Brother Bob    10344

[quote]
[quote]Why are you creating a dummy window?
That should be done only if you need multisampling without FBO, in order to find appropriate pixel format.[/quote]
That's actually the eventual plan.

However, the real reason is to make the design cleaner. Contexts require a window to be created, but this supposes that that window will be around forever. The way I've structured the architecture is to have a context wrapper object contain its own invisible window. So, the window that "owns" the context is guaranteed to be around for as long as the life of the context. This allows the user to create and destroy windows at will without affecting the context's existence.
[/quote]

Just create the context, there's no need for the hidden window there. If you want to create windows at will, then do that, and keep the context as a separate object and bind the two at some point (for example when rendering you attach the desired context to the desired window).

 

[quote]
[quote]In all other cases, you should do the following:
[...][/quote]
Don't I need to load extensions before using wglCreateContextAttribsARB?
[/quote]

You don't need to, no, as long as the pixel format doesn't change. The function pointers returned by wglGetProcAddress are required to be the same for different contexts as long as the pixel format is the same. If you just create a dummy context in order to be able to create another context, then that's fine. If you create a dummy window also, then you have to make sure that the pixel format is the same.

Erik Rufelt    5901

One reason for multiple windows is if you want to use wglChoosePixelFormatARB to select your pixel format, and the final pixel-format might differ from the one selected with wglChoosePixelFormat for the dummy context. A window can't change its pixel-format once set.
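
(For context, a sketch of that two-step dance: it assumes wglChoosePixelFormatARB has already been loaded through a dummy context, the constants come from <GL/wglext.h>, and the sample count is an arbitrary example.)

[source]// On the *real* window: pick a multisampled format, which the plain
// ChoosePixelFormat path cannot request.
const int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,
    WGL_DEPTH_BITS_ARB,     24,
    WGL_SAMPLE_BUFFERS_ARB, 1,
    WGL_SAMPLES_ARB,        4,
    0
};
int format = 0;
UINT count = 0;
wglChoosePixelFormatARB(hDC, attribs, NULL, 1, &format, &count);

PIXELFORMATDESCRIPTOR pfd;
DescribePixelFormat(hDC, format, sizeof(pfd), &pfd);
SetPixelFormat(hDC, format, &pfd);   // can only ever be done once per window[/source]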

 

 

As for the original question, do you mean that an active context claims to be 3.1 through GL_MAJOR_VERSION/GL_MINOR_VERSION, but functions that are guaranteed to be in core 3.1 are not available?

If they are extension functions, check if they are available in the extensions string.

 

 

EDIT: If I understand correctly, the problem is that calling glGetIntegerv(GL_MAJOR_VERSION) and glGetString(GL_VERSION) one after the other, on the same context, returns conflicting information. Which seems strange to say the least... can you make a minimal example and post the code?

As well as update your drivers, might be a bug.

 

 

My guess is that glGetIntegerv(GL_MAJOR_VERSION, ..) returns an error (check with glGetError()), and does not overwrite the integers in int version[2]. Therefore some old values that happen to be 3 and 1 are still there and the real version is 1.2, for which GL_MAJOR_VERSION is not supported.
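
(One way to test that guess, as a sketch; it assumes the error queue was empty beforehand.)

[source]// Sanity-check whether GL_MAJOR_VERSION is actually understood by this context.
GLint version[2] = { -1, -1 };                 // sentinel values
glGetIntegerv(GL_MAJOR_VERSION, &version[0]);
glGetIntegerv(GL_MINOR_VERSION, &version[1]);
if (glGetError() != GL_NO_ERROR || version[0] == -1) {
    // Pre-3.0 context: these enums are invalid, so fall back to the string.
    printf("GL_MAJOR_VERSION unsupported; GL_VERSION = %s\n",
           (const char*)glGetString(GL_VERSION));
} else {
    printf("OpenGL %d.%d\n", version[0], version[1]);
}[/source]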

Edited by Erik Rufelt

Geometrian    1810

[quote]Just create the context, there's no need for the hidden window there. If you want to create windows at will, then do that, and keep the context as a separate object and bind the two at some point (for example when rendering you attach the desired context to the desired window).[/quote]

Ummm . . . both wglCreateContext and wglCreateContextAttribsARB take a device context as an argument; I assumed that can only come from a valid window?
 

[quote]One reason for multiple windows is if you want to use wglChoosePixelFormatARB to select your pixel format, and the final pixel-format might differ from the one selected with wglChoosePixelFormat for the dummy context. A window can't change its pixel-format once set.[/quote]

Right. Although that's not implemented now, that's my eventual plan.
 

[quote]As for the original question, do you mean that an active context claims to be 3.1 through GL_MAJOR_VERSION/GL_MINOR_VERSION, but functions that are guaranteed to be in core 3.1 are not available?
If they are extension functions, check if they are available in the extensions string.

EDIT: If I understand correctly, the problem is that calling glGetIntegerv(GL_MAJOR_VERSION) and glGetString(GL_VERSION) one after the other, on the same context, returns conflicting information. Which seems strange to say the least... can you make a minimal example and post the code?
As well as update your drivers, might be a bug.[/quote]

The point is that checking the context version returns different results. I recently found out that GL_MAJOR_VERSION/GL_MINOR_VERSION are only supported on OpenGL 3.0 or later.

Unfortunately, I can't really make a minimal sample; an identical program's source works in one project, but the same code fails when recompiled in a different project. It's very bizarre.

At any rate, what's happening is that the context somehow fails to be even OpenGL 2 compatible. It's 1.2, apparently. Since this code is currently being tested on Windows 7, I suspect that somehow it's getting the system default OpenGL instead of that provided by the graphics card vendor? I don't know why that would be though.

[quote]My guess is that glGetIntegerv(GL_MAJOR_VERSION, ..) returns an error (check with glGetError()), and does not overwrite the integers in int version[2]. Therefore some old values that happen to be 3 and 1 are still there and the real version is 1.2, for which GL_MAJOR_VERSION is not supported.[/quote]

Initializing the data shows that they are being set.

Edited by Geometrian

Geometrian    1810

After reading up a bit more, I think it is relevant to mention that the pixel format found by both the working and non-working programs is the same (i.e. they should both, presumably, be hardware accelerated).

Erik Rufelt    5901

[quote]Unfortunately, I can't really make a minimal sample; an identical program's source works in one project, but the same code fails when recompiled in a different project. It's very bizarre.[/quote]

 

Run a diff on the project files to find exactly what lines are different, then change one after another until it works.

If you make a minimal example, depending on your environment, that should be just one .cpp file (identical) and one project file (different).

Brother Bob    10344

[quote]
[quote]Just create the context, there's no need for the hidden window there. If you want to create windows at will, then do that, and keep the context as a separate object and bind the two at some point (for example when rendering you attach the desired context to the desired window).[/quote]
Ummm . . . both wglCreateContext and wglCreateContextAttribsARB take a device context as an argument; I assumed that can only come from a valid window?
[/quote]
 

That doesn't mean the context is tied to that window in any way (it is tied to its pixel format though, so you could say there is some connection, but that only limits which contexts can be tied to which windows). In order to create a rendering context, you need a window, yes. But you can move that context around as you like, with and without a window. You don't need the context to have a hidden window, you only need a window to create it.

 

It is perfectly fine to separate the concepts of windows and rendering contexts. The window holds a window, and the rendering context holds a rendering context; no need for hidden windows anywhere for this reason. Just tie a rendering context to a window before rendering.
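
(In code, that separation is just a question of which HDC gets passed to wglMakeCurrent. A sketch, where windowA and windowB are hypothetical HWNDs that were given the same pixel format, and context is the shared rendering context:)

[source]HDC hdcA = GetDC(windowA);
HDC hdcB = GetDC(windowB);

wglMakeCurrent(hdcA, context);   // render into window A with the shared context
// ...draw...
SwapBuffers(hdcA);

wglMakeCurrent(hdcB, context);   // same context, now targeting window B
// ...draw...
SwapBuffers(hdcB);[/source]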

Geometrian    1810

[quote]Run a diff on the project files to find exactly what lines are different, then change one after another until it works.
If you make a minimal example, depending on your environment, that should be just one .cpp file (identical) and one project file (different).[/quote]

The .sln, .vcxproj, .vcxproj.filters, .vcxproj.user files are the same, except for (some) hash values and the project names. I'll see if I can perturb it into/out of existence another way.

The program that fails uses a library that uses a library that uses the library where the windowing code is defined.

 

[quote]In order to create a rendering context, you need a window, yes. But you can move that context around as you like, with and without a window. You don't need the context to have a hidden window, you only need a window to create it.[/quote]

Yes. To clarify, the hidden window exists only to create the context. This hidden window is local to my context class. User windows can be created and destroyed completely independently of the context--in fact, this is exactly the point of this design.

Geometrian    1810

[quote]I'll see if I can perturb it into/out of existence another way.[/quote]

Amazingly, the differences continue to shrink. I can literally copy working project files to the same directory, rename them, add them to the solution, and the original project works while the copied one breaks.

 

I strongly suspect the hash values are magically the problem. Can anyone guess why they'd cause a weird error like this?

Brother Bob    10344

[quote]
[quote]In order to create a rendering context, you need a window, yes. But you can move that context around as you like, with and without a window. You don't need the context to have a hidden window, you only need a window to create it.[/quote]
Yes. To clarify, the hidden window exists only to create the context. This hidden window is local to my context class. User windows can be created and destroyed completely independently of the context--in fact, this is exactly the point of this design.
[/quote]

Ok, then I apparently misunderstood you. I thought the hidden window followed the context, but if you create it temporarily just for creating the context, and then destroy it immediately and forget about it as soon as the context is created, then that's a bit better. But I would use one of your primary windows instead, since that forces you to actually have a real window, as opposed to just a temporary throw-away window, in order to have a rendering context. It also ensures that the pixel format of the rendering context is compatible with the window(s) it is supposed to be used with.

