
OpenGL: Erroneous Context Version


Hi,

I need at least an OpenGL 2.1 context, and preferably higher. I am interfacing directly with the Win32 API.

I'm setting up a context using one of two methods; neither works correctly all of the time.

Method 1:
1: Create an invisible dummy window that "hosts" the context
2: Make context with wglCreateContext
3: Set context to be current
4: Load extensions
5: Make a new (visible) window and use the context in it.

Method 1 seems to work fine for most programs using this code, and glGetString(GL_VERSION) typically indicates a 4.2 context (the highest my card supports). However, in one particular program, it instead indicates a 1.2 context, and advanced functionality subsequently fails.
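For reference, steps 1-4 look roughly like this (a minimal sketch; error handling is omitted, and create_hidden_window is a hypothetical helper that registers and creates the invisible host window):

[source]// Sketch of Method 1, steps 1-4 (error handling omitted).
HWND hidden = create_hidden_window();  // hypothetical helper
HDC  hdc    = GetDC(hidden);

PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

HGLRC ctx = wglCreateContext(hdc);  // legacy context; the driver picks the version
wglMakeCurrent(hdc, ctx);           // step 3; extensions can now be loaded (step 4)
// Step 5 happens later: wglMakeCurrent(GetDC(visible_window), ctx).[/source]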

To try to solve this, I changed the code to implement Method 2.

Method 2:
1: Create an invisible dummy window that "hosts" all contexts
2: Make a dummy context with wglCreateContext
3: Set the dummy context to be current
4: Load extensions
5: Make a new context with wglCreateContextAttribsARB having the desired properties
6: Set the dummy context to be not current and then set the new context to be current
7: Delete dummy context
8: Make a new (visible) window and use the new context in it.

Using this, I can get an OpenGL 3.1 context to work correctly (since my programs use OpenGL 2 functionality, a 3.2 context is not needed). However, for that one particular program, something very odd happens: glGetString(GL_VERSION) indicates a 1.2 context, but checking it with this:
[source]int version[2];
glGetIntegerv(GL_MAJOR_VERSION, version  );  // writes version[0]
glGetIntegerv(GL_MINOR_VERSION, version+1);  // writes version[1]
printf("OpenGL %d.%d\n", version[0], version[1]);[/source]
. . . indicates the 3.1 context as requested! However, the advanced functionality still fails, so I suspect that report is wrong.

It's worth noting that the code for the one particular program where both methods fail is copied directly from a program that works. For some reason, the compiled binaries don't hash to the same value, which suggests that some configuration option is perturbing this problem into existence.

-G


Why are you creating a dummy window?

That should be done only if you need multisampling without an FBO, in order to find an appropriate pixel format.

In all other cases, you should do the following:

 

1. Make a new (visible) window and set an appropriate pixel format
2. Create a dummy context with wglCreateContext
3. Set the dummy context to be current
4. Create a new (GL 3.0+) context with wglCreateContextAttribsARB having the desired properties
5. Set the new context to be current
6. Delete the dummy context
7. Load extensions using the new context
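
Concretely, that flow looks something like this (a minimal sketch, assuming wglext.h provides the ARB declarations; error handling omitted):

[source]// Sketch of the flow above: pixel format on the real window, dummy legacy
// context, then the final context via wglCreateContextAttribsARB.
#include <Windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

HGLRC create_gl3_context(HWND hwnd) {  // hwnd: your real, visible window
    HDC hdc = GetDC(hwnd);

    // 1. Set an appropriate pixel format on the real window.
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

    // 2-3. Create and activate a dummy legacy context.
    HGLRC dummy = wglCreateContext(hdc);
    wglMakeCurrent(hdc, dummy);

    // 4. Fetch wglCreateContextAttribsARB through the dummy context
    //    and create the real context with the desired version.
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1,
        0
    };
    HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);

    // 5-6. Switch to the new context and delete the dummy.
    wglMakeCurrent(hdc, ctx);
    wglDeleteContext(dummy);

    // 7. Load the remaining extensions here, using the new context.
    return ctx;
}[/source]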


Yeah, creating the context on a separate window seems weird - in Windows (at least), your OpenGL context is specific to your HDC, which in turn is specific to your window.  So even if it does work for you, you're still relying on undefined behaviour to "do the right thing".


Yeah, creating the context on a separate window seems weird - in Windows (at least), your OpenGL context is specific to your HDC, which in turn is specific to your window.  So even if it does work for you, you're still relying on undefined behaviour to "do the right thing".

Check the documentation for wglMakeCurrent. It states that you can activate a context on any device context, as long as the pixel format and the device is the same.
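
For example, this is legal (a sketch; windowA and windowB are hypothetical HWNDs that were given identical pixel formats on the same device, and ctx is an existing HGLRC created against that format):

[source]// Sketch: one HGLRC used with two windows' device contexts (both windows
// must have the same pixel format; error handling omitted).
HDC dcA = GetDC(windowA);
HDC dcB = GetDC(windowB);

wglMakeCurrent(dcA, ctx);  // render into window A with ctx
// ...draw...
wglMakeCurrent(dcB, ctx);  // same context, now current on window B
// ...draw...[/source]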


Yeah, creating the context on a separate window seems weird - in Windows (at least), your OpenGL context is specific to your HDC, which in turn is specific to your window.  So even if it does work for you, you're still relying on undefined behaviour to "do the right thing".

Check the documentation for wglMakeCurrent. It states that you can activate a context on any device context, as long as the pixel format and the device is the same.

 

Hmm - you're right: http://msdn.microsoft.com/en-us/library/windows/desktop/dd374387%28v=vs.85%29.aspx

 

It need not be the same hdc that was passed to wglCreateContext when hglrc was created, but it must be on the same device and have the same pixel format.

 

Can someone -1 me? :)


Why are you creating a dummy window?
That should be done only if you need multisampling without FBO, in order to find appropriate pixel format.

That's actually the eventual plan.

However, the real reason is to make the design cleaner. Creating a context requires a window, but tying the context to a user window supposes that that window will be around forever. The way I've structured the architecture is to have a context wrapper object contain its own invisible window, so the window that "owns" the context is guaranteed to exist for the life of the context. This allows the user to create and destroy windows at will without affecting the context's existence.
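
In code, the idea is roughly this (a sketch; the names and the create_hidden_window helper are hypothetical):

[source]// Sketch: the context object owns its own invisible host window, so
// user-facing windows can be created and destroyed freely.
class GLContext {
    HWND  hidden_;  // invisible window that outlives any user window
    HDC   hdc_;
    HGLRC ctx_;
public:
    GLContext() {
        hidden_ = create_hidden_window();  // hypothetical helper
        hdc_    = GetDC(hidden_);
        // ...set pixel format and create ctx_ as in Method 1 or 2...
    }
    ~GLContext() { wglDeleteContext(ctx_); DestroyWindow(hidden_); }
    void bind(HDC target) { wglMakeCurrent(target, ctx_); }  // attach to any window
};[/source]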

 

In all other cases, you should do the following:
[...]

Don't I need to load extensions before using wglCreateContextAttribsARB?

Edited by Geometrian


Why are you creating a dummy window?
That should be done only if you need multisampling without an FBO, in order to find an appropriate pixel format.

That's actually the eventual plan.

However, the real reason is to make the design cleaner. Creating a context requires a window, but tying the context to a user window supposes that that window will be around forever. The way I've structured the architecture is to have a context wrapper object contain its own invisible window, so the window that "owns" the context is guaranteed to exist for the life of the context. This allows the user to create and destroy windows at will without affecting the context's existence.

Just create the context; there's no need for the hidden window there. If you want to create windows at will, then do that, keep the context as a separate object, and bind the two at some point (for example, when rendering, you attach the desired context to the desired window).

 

In all other cases, you should do the following:
[...]

Don't I need to load extensions before using wglCreateContextAttribsARB?

You don't need to, no, as long as the pixel format doesn't change. The function pointers returned by wglGetProcAddress are required to be the same for different contexts as long as the pixel format is the same. If you just create a dummy context in order to be able to create another context, then that's fine. If you also create a dummy window, then you have to make sure that the pixel format is the same.
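
If in doubt, that's easy to verify (a sketch; windowA and windowB are hypothetical HWNDs, and GetPixelFormat returns the format index that was set on a device context):

[source]// Sketch: confirm two windows share a pixel format before reusing
// function pointers (or a context) across them; error handling omitted.
int fmtA = GetPixelFormat(GetDC(windowA));
int fmtB = GetPixelFormat(GetDC(windowB));
if (fmtA != fmtB) {
    // Pointers loaded against windowA's context are not guaranteed valid here.
}[/source]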


One reason for multiple windows is if you want to use wglChoosePixelFormatARB to select your pixel format, and the final pixel format might differ from the one selected with ChoosePixelFormat for the dummy context. A window can't change its pixel format once set.

 

 

As for the original question, do you mean that an active context claims to be 3.1 through GL_MAJOR_VERSION/GL_MINOR_VERSION, but functions that are guaranteed to be in core 3.1 are not available?

If they are extension functions, check if they are available in the extensions string.

 

 

EDIT: If I understand correctly, the problem is that calling glGetIntegerv(GL_MAJOR_VERSION) and glGetString(GL_VERSION) one after the other, on the same context, returns conflicting information. Which seems strange, to say the least. Can you make a minimal example and post the code?

Also update your drivers; it might be a driver bug.

 

 

My guess is that glGetIntegerv(GL_MAJOR_VERSION, ..) returns an error (check with glGetError()), and does not overwrite the integers in int version[2]. Therefore some old values that happen to be 3 and 1 are still there and the real version is 1.2, for which GL_MAJOR_VERSION is not supported.
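
That hypothesis is easy to test (a sketch): initialize the array with sentinel values and check glGetError afterwards.

[source]// Sketch: on a pre-3.0 context, GL_MAJOR_VERSION is an invalid enum, so
// glGetIntegerv records GL_INVALID_ENUM and leaves "version" untouched.
int version[2] = { -1, -1 };                 // sentinels no real context returns
glGetIntegerv(GL_MAJOR_VERSION, version  );
glGetIntegerv(GL_MINOR_VERSION, version+1);
GLenum err = glGetError();
printf("OpenGL %d.%d (glGetError=0x%X)\n", version[0], version[1], err);
// If err == GL_INVALID_ENUM and version is still {-1,-1}, the context is pre-3.0.[/source]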

Edited by Erik Rufelt


Just create the context; there's no need for the hidden window there. If you want to create windows at will, then do that, keep the context as a separate object, and bind the two at some point (for example, when rendering, you attach the desired context to the desired window).

Ummm . . . both wglCreateContext and wglCreateContextAttribsARB take a device context as an argument; I assumed that can only come from a valid window?
 

One reason for multiple windows is if you want to use wglChoosePixelFormatARB to select your pixel format, and the final pixel format might differ from the one selected with ChoosePixelFormat for the dummy context. A window can't change its pixel format once set.

Right. Although that's not implemented now, that's my eventual plan.
 

As for the original question, do you mean that an active context claims to be 3.1 through GL_MAJOR_VERSION/GL_MINOR_VERSION, but functions that are guaranteed to be in core 3.1 are not available?
If they are extension functions, check if they are available in the extensions string.
 
 
EDIT: If I understand correctly, the problem is that calling glGetIntegerv(GL_MAJOR_VERSION) and glGetString(GL_VERSION) one after the other, on the same context, returns conflicting information. Which seems strange, to say the least. Can you make a minimal example and post the code?
Also update your drivers; it might be a driver bug.

The point is that checking the context version in two different ways returns conflicting results. I recently found out that GL_MAJOR_VERSION/GL_MINOR_VERSION are only supported on OpenGL 3.0 or later.

Unfortunately, I can't really make a minimal sample; an identical program's source works in one project, but the same code fails when recompiled in a different project. It's very bizarre.

At any rate, what's happening is that the context somehow fails to be even OpenGL 2 compatible; it's 1.2, apparently. Since this code is currently being tested on Windows 7, I suspect that it's somehow getting the system's default OpenGL implementation instead of the one provided by the graphics card vendor? I don't know why that would be, though.
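
One quick way to check that suspicion (a sketch, run with the context current): Microsoft's software implementation identifies itself through the vendor and renderer strings.

[source]// Sketch: the GDI software fallback reports "Microsoft Corporation" /
// "GDI Generic" and an old GL version; a vendor ICD reports its own strings.
printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));[/source]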

My guess is that glGetIntegerv(GL_MAJOR_VERSION, ..) returns an error (check with glGetError()), and does not overwrite the integers in int version[2]. Therefore some old values that happen to be 3 and 1 are still there and the real version is 1.2, for which GL_MAJOR_VERSION is not supported.

Initializing the data shows that they are being set.

Edited by Geometrian


Unfortunately, I can't really make a minimal sample; an identical program's source works in one project, but the same code fails when recompiled in a different project. It's very bizarre.

 

Run a diff on the project files to find exactly what lines are different, then change one after another until it works.

If you make a minimal example, depending on your environment, that should be just one .cpp file (identical) and one project file (different).


Just create the context; there's no need for the hidden window there. If you want to create windows at will, then do that, keep the context as a separate object, and bind the two at some point (for example, when rendering, you attach the desired context to the desired window).

Ummm . . . both wglCreateContext and wglCreateContextAttribsARB take a device context as an argument; I assumed that can only come from a valid window?
 

That doesn't mean the context is tied to that window in any way (it is tied to its pixel format though, so you could say there is some connection, but that only limits which contexts can be tied to which windows). In order to create a rendering context, you need a window, yes. But you can move that context around as you like, with and without a window. You don't need the context to have a hidden window; you only need a window to create it.

 

It is perfectly fine to separate the concepts of windows and rendering contexts. The window object holds a window, and the context object holds a rendering context; no need for hidden windows anywhere. Just tie a rendering context to a window before rendering.


Run a diff on the project files to find exactly what lines are different, then change one after another until it works.
If you make a minimal example, depending on your environment, that should be just one .cpp file (identical) and one project file (different).

The .sln, .vcxproj, .vcxproj.filters, .vcxproj.user files are the same, except for (some) hash values and the project names. I'll see if I can perturb it into/out of existence another way.

The program that fails uses a library that uses a library that uses the library where the windowing code is defined.

 

In order to create a rendering context, you need a window, yes. But you can move that context around as you like, with and without a window. You don't need the context to have a hidden window; you only need a window to create it.

Yes. To clarify, the hidden window exists only to create the context. This hidden window is local to my context class. User windows can be created and destroyed completely independently of the context--in fact, this is exactly the point of this design.


I'll see if I can perturb it into/out of existence another way.

Amazingly, the differences continue to shrink. I can literally copy working project files to the same directory, rename them, add them to the solution, and the original project works while the copied one breaks.

 

I strongly suspect the hash values are magically the problem. Can anyone guess why they'd cause a weird error like this?


In order to create a rendering context, you need a window, yes. But you can move that context around as you like, with and without a window. You don't need the context to have a hidden window; you only need a window to create it.

Yes. To clarify, the hidden window exists only to create the context. This hidden window is local to my context class. User windows can be created and destroyed completely independently of the context--in fact, this is exactly the point of this design.

Ok, then I apparently misunderstood you. I thought the hidden window followed the context, but if you create it temporarily just for creating the context, and then destroy it and forget about it as soon as the context is created, that's a bit better. But I would use one of your primary windows instead, since that forces you to actually have a real window, as opposed to just a temporary throw-away window, in order to have a rendering context. It also ensures that the pixel format of the rendering context is compatible with the window(s) it is supposed to be used with.
