Creating OpenGL context in different OpenGL driver versions

Started by
8 comments, last by BitMaster 7 years, 9 months ago

Hi, I've read on StackOverflow and some other websites that using wglCreateContext is obsolete. I don't know if this is correct, since ALL the OpenGL examples I've seen make use of this method.

I've always created my OpenGL context this way, by calling wglCreateContext and wglMakeCurrent.

Is this correct?

Does this change across different OpenGL versions?

Is it the same to create an OpenGL context in 1.x, 2.x, 3.x and 4.x?

I'm talking about doing it raw, without using extensions. Will this always be the same for any OpenGL context, regardless of the version?


Here's modern-day OpenGL context creation on Windows. It's a goddamn mess, because you have to create a context just to ask what kind of context you can create. But that's how it's set up. You will probably want to read the documentation page for WGL_ARB_create_context for all the gross details.


	//Create a basic OpenGL context
	g_hRC = wglCreateContext(g_hDC);
	if(!g_hRC)
	{
		GlobalLogger->Write(LP_Critical, "Failed to create any OpenGL context at all.");
		KillGraphics();
		return false;
	}

	wglMakeCurrent(g_hDC, g_hRC);

	//Attempt to create a new GL 3.0+ context
	int flags = 0;
	int profile = WGL_CONTEXT_CORE_PROFILE_BIT_ARB;
	if(ap_OpenGLForwardCompat)
		flags |= WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB;
	if(ap_GraphicsDebug)
		flags |= WGL_CONTEXT_DEBUG_BIT_ARB;
	if(ap_OpenGLCompat)
		profile = WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB;
	glewInit();
	if(wglewIsSupported("WGL_ARB_create_context"))
	{
		int attribs[] =
		{
			WGL_CONTEXT_MAJOR_VERSION_ARB, ap_OpenGLMajorVersion,
			WGL_CONTEXT_MINOR_VERSION_ARB, ap_OpenGLMinorVersion,
			WGL_CONTEXT_FLAGS_ARB, flags,
			WGL_CONTEXT_PROFILE_MASK_ARB, profile,
			0
		};

		//New context is available, spool it up and destroy the old one
		wglMakeCurrent(NULL, NULL);
		wglDeleteContext(g_hRC);
		g_hRC = wglCreateContextAttribsARB(g_hDC, 0, attribs);
		if(!g_hRC)
		{
			logger.WriteFormat(LP_Critical, "Failed to create OpenGL %d.%d context.", ap_OpenGLMajorVersion, ap_OpenGLMinorVersion);
			KillGraphics();
			return false;
		}

		wglMakeCurrent(g_hDC, g_hRC);

		glGetIntegerv(GL_MAJOR_VERSION, &ap_OpenGLMajorVersion);
		glGetIntegerv(GL_MINOR_VERSION, &ap_OpenGLMinorVersion);
		logger.WriteFormat(LP_Info, "Created OpenGL context version: %d.%d%s", ap_OpenGLMajorVersion, ap_OpenGLMinorVersion, 
			ap_OpenGLCompat ? " (compatibility)" : "");
	}
	else
	{
		logger.WriteFormat(LP_Warning, "Using OpenGL legacy context version: %s", glGetString(GL_VERSION));
	}
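For what it's worth, the GLEW calls in the snippet above aren't strictly required: wglCreateContextAttribsARB is itself fetched through wglGetProcAddress once the dummy context is current. A minimal sketch of that step, assuming a valid HDC and the WGL_* tokens from wglext.h (function and parameter names here are my own, not from the post above):

```c
// Hypothetical sketch: fetching wglCreateContextAttribsARB by hand, without GLEW.
// Requires that a throwaway context (from plain wglCreateContext) is already
// current on hDC, because wglGetProcAddress only works with a current context.
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"  /* PFNWGLCREATECONTEXTATTRIBSARBPROC and WGL_* tokens */

static HGLRC CreateModernContext(HDC hDC, int major, int minor, int profileBit)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;  /* driver predates WGL_ARB_create_context */

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, major,
        WGL_CONTEXT_MINOR_VERSION_ARB, minor,
        WGL_CONTEXT_PROFILE_MASK_ARB,  profileBit,
        0  /* attribute list is zero-terminated */
    };
    /* Second argument is an optional share context; 0/NULL means none. */
    return wglCreateContextAttribsARB(hDC, NULL, attribs);
}
```

The caller would then delete the dummy context and make the returned one current, exactly as in the full listing above.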
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

What Promit means is that at some point the GL designers introduced the notion of versions, or "profiles": you specify the profile and settings you want, and the driver responds with what you can actually get.

So if, for example, you specify version 4.5 in the attribute list but the driver responds with 4.1, then that is what you get.

The same goes for choosing between a core profile, where deprecated functions are out of scope, and a compatibility profile, where you can still use glBegin, glEnd, and so on.

To answer your question: if you are comfortable using extension functions, you can still keep the old-fashioned way; otherwise you need to create an old-style context first and then replace it according to the given profile. This works, and I use it in my engine too (though I strictly focus on the core profile only). The reason is that if you specify a specific version, the driver loads that version's function code into process memory, and only then can you access that version's extensions.

You can try this at the break between 4.4 and 4.5: the named-object (direct state access) functions introduced in 4.5 can only be loaded with a 4.5 profile; with a 4.4 profile you won't get a function pointer for them.
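The experiment described above boils down to querying a 4.5-only entry point and checking for failure. A hedged sketch, assuming some context is already current (the sentinel-value check follows a documented wglGetProcAddress quirk; the helper name is my own):

```c
// Sketch: probing whether the current context exposes a given GL entry point.
// glNamedBufferData is a named-object (DSA) function promoted to core in
// OpenGL 4.5, so it is a good probe for the 4.4 vs 4.5 break discussed above.
#include <windows.h>
#include <stdio.h>

int HasGLFunction(const char* name)
{
    PROC p = wglGetProcAddress(name);
    // Some implementations return small sentinel values instead of NULL
    // on failure, so those must be filtered out as well.
    if (p == NULL || p == (PROC)1 || p == (PROC)2 || p == (PROC)3 || p == (PROC)-1)
        return 0;
    return 1;
}

// Usage: printf("DSA available: %d\n", HasGLFunction("glNamedBufferData"));
```

Whether the result actually varies with the requested context version is exactly what the two posters below disagree about.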

So if, for example, you specify version 4.5 in the attribute list but the driver responds with 4.1, then that is what you get.

That's not legal behavior. The spec is very clear that the implementation is not allowed to respond with a version lower than the one requested in the attribs submitted to wglCreateContextAttribsARB.

The same goes for choosing between a core profile, where deprecated functions are out of scope, and a compatibility profile, where you can still use glBegin, glEnd, and so on.

If you ask for a core profile, the implementation is required to give you a core profile. Incidentally, a core profile has a slight negative performance impact on NVIDIA and probably on the other major manufacturers too. It's not recommended to use it unless you have a concrete reason.

The reason is that if you specify a specific version, the driver loads that version's function code into process memory, and only then can you access that version's extensions.

No driver I've ever seen works that way. The driver is a single monolithic implementation which supports some particular version of OpenGL. It will load in its entirety and respond with that version of OpenGL no matter what you request during context creation. (Mac might be a bit different since it only has a few versions, I can't remember now.) It wouldn't be practical to slice the versions into separate libraries and defer loads, as that would create more problems without any benefits.


Here's a modern day OpenGL context creation on Windows. It's a goddamn mess, because you have to create a context to ask what kind of context you can create. But that's how it's set up. You will probably want to read the documentation page for WGL_ARB_create_context for all the gross details.



Haha, wow, I never thought OpenGL would become such a mess. I wonder why they didn't just add a new function in each driver release to create that specific context directly, passing the attributes as a parameter, but anyway, it seems I'll need to study it a bit to find a workaround. Thanks Promit! :)

Do yourself a favor and use GLFW or SDL to handle contexts. No one will look down on you for not learning all that platform-specific stuff, and if they do, they're idiots, so you shouldn't listen to them anyway.
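For reference, the GLFW route reduces the entire WGL dance shown earlier to a few hints. A sketch assuming GLFW 3 (window title and dimensions are placeholders):

```c
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    // Ask for a 3.3 core context; GLFW performs the dummy-context /
    // wglCreateContextAttribsARB dance internally on Windows.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(640, 480, "GL test", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return 1;  // requested context version not available
    }
    glfwMakeContextCurrent(window);

    /* ... render loop ... */

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```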

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Do yourself a favor and use GLFW or SDL to handle contexts. No one will look down on you for not learning all that platform-specific stuff, and if they do, they're idiots, so you shouldn't listen to them anyway.

Well, I don't really care whether someone knows I can handle buffers, memory, handles, pointers, etc. Actually, your answer is out of context here; I would appreciate it if you first read the post at the top.

I'm not learning OpenGL as a developer, which is what most people do; I'm learning it as a PROGRAMMER. I don't like developing things, I like programming them. It's easy for a developer to call "frameworkInit();" and let the framework/machine do the dirty job. That's what a developer does, and as a developer you will never learn to solve simple low-level problems and you will get stuck in badly designed software. My mentality is that good software is software capable of reaching almost every user possible, and to achieve that I have to know what's going on down there.

Obviously, developers don't care about this; they just use the tools at hand to finish something quickly that works, and in the end you get very heavy software with bad performance and end up listing really high minimum specs to run it. My mind says this is NOT how it has to be, but most people, like you, think: "don't waste your time learning complex/difficult things, use frameworks" (I should mention I'm a student). Well, I must admit this is how small and mid-sized software companies work, but there are different kinds of programmers. Obviously, if the people who wrote SDL and GLFW read your comment, they wouldn't feel happy about it, lol.

This is not about whether people who dig deep to understand how and why something happens in software are "idiots".
This is about what they can learn and do after understanding it.

You think as a developer, I think as a programmer; that's the difference. Don't mix them up and call others "idiots" because they don't do what you do.
I have my way of doing things, you have yours. It depends on what you are focusing on, and I'm focusing on different things, lol.

Do yourself a favor and use GLFW or SDL to handle contexts. No one will look down on you for not learning all that platform-specific stuff, and if they do, they're idiots, so you shouldn't listen to them anyway.


Well, I don't really care whether someone knows I can handle buffers, memory, handles, pointers, etc. Actually, your answer is out of context here; I would appreciate it if you first read the post at the top.


His answer is exactly in context here. Both GLFW and SDL have one core feature: create a window and initialize an OpenGL context for that window. SDL does come with some added utility on top of that, but the core here is really the context creation (and window management). I have done it by hand. There is really no magic to it; it's just a huge chore. Especially once multiple platforms come in, sticking to the best practices of each platform and creating a valid context there is just one huge annoyance.
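To illustrate how little of the platform-specific chore survives, here is a sketch of the SDL2 equivalent of the WGL listing earlier in the thread (error handling trimmed; title and dimensions are placeholders):

```c
#include <SDL.h>

int main(int argc, char** argv)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    // Request a 3.3 core context; attributes must be set before the window.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window* window = SDL_CreateWindow("GL test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(window);

    /* ... render loop ... */

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```

The same few lines cover Windows, Linux, and macOS, which is the whole argument for handing this job to a library.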

There are tons of interesting areas where you can have a lot of fun going into the nitty-gritty details and doing things by hand, from the ground up. Wasting the limited time you have on stuff like window management, when you do not need anything beyond the well-tried and exhaustively tested solutions, does not feel like a smart move.

You also seem to be drawing some wildly personalized, arbitrary distinction between what you think a developer is and what a programmer is. Personally, I would not do that, but if I absolutely had to, the roles would be reversed: a programmer would be someone coding to specifications, using whatever libraries are required or allowed by those specifications; a software developer would deal with that too (or at least be qualified to), but also with the steps coming before, including research and planning.

If you ask for a core profile, the implementation is required to give you a core profile. Incidentally, a core profile has a slight negative performance impact on NVIDIA and probably on the other major manufacturers too. It's not recommended to use it unless you have a concrete reason.

That may be unnecessary. If you are worried about a few microseconds, you should fix bottlenecks inside your own code rather than complain about NVIDIA's core profile checks. I haven't noticed any negative side effects, be it a performance hit or anything else, in eight years of OpenGL development on both NVIDIA and AMD cards across a wide range of hardware generations.

Do you have any benchmarks about that?

No driver I've ever seen works that way. The driver is a single monolithic implementation which supports some particular version of OpenGL. It will load in its entirety and respond with that version of OpenGL no matter what you request during context creation

It's not the driver but the Windows GL implementation service. Try it yourself by querying any function pointer not in the profile you asked for: you won't get it, so it was never loaded into the program's address space and couldn't be obtained.

No driver I've ever seen works that way. The driver is a single monolithic implementation which supports some particular version of OpenGL. It will load in its entirety and respond with that version of OpenGL no matter what you request during context creation


It's not the driver but the Windows GL implementation service. Try it yourself by querying any function pointer not in the profile you asked for: you won't get it, so it was never loaded into the program's address space and couldn't be obtained.


That's not conclusive of anything. It is far more likely that the driver simply won't return pointers for extensions outside the current profile.

This topic is closed to new replies.
