Tocs1001

OpenGL: Creating an OpenGL context on Windows with GLEW


Something weird has happened. I had context creation working just fine. It worked on my desktop and my laptop, and I was happily plugging along doing graphics programming. Then one day I tried to build my code on my laptop again and context creation suddenly started to fail. I don't know what has changed.

 

My original context creation code looks like this

https://gist.github.com/LordTocs/f227528a729986df9643

 

It's sloppy and has next to no error handling, but it at least worked. It still works on my desktop but fails on my laptop.

 

Specifically, wglCreateContextAttribsARB fails.

 

I started to modify the code in an attempt to figure out what was wrong. I added some "GetLastError()" printouts in the hope that I was doing something silly and it would tell me what was wrong, using "FormatMessage()" to turn the error codes into readable strings.

 

Instead of a usable error I was greeted with GetLastError() returning 3221692565, which FormatMessage() had no idea what to do with. A cursory internet search led me to a single result on the OpenGL forums, which didn't help.

 

After some reading I was told not to create a forward-compatible context, and that I should use wglChoosePixelFormatARB to get the appropriate pixel format. Thinking this was the issue, I tried that function, but it didn't help.

 

So now I'm left with this code, which doesn't work, and I'm very confused:

void DisplayWindowsError()
{
	LPVOID lpMsgBuf = nullptr;	// initialize so the check below is valid if FormatMessage fails
	DWORD dw = GetLastError();

	FormatMessage(
		FORMAT_MESSAGE_ALLOCATE_BUFFER |
		FORMAT_MESSAGE_FROM_SYSTEM |
		FORMAT_MESSAGE_IGNORE_INSERTS,
		NULL,
		dw,
		MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
		(LPTSTR)&lpMsgBuf,
		0, NULL);

	if (lpMsgBuf)
	{
		std::cout << "Windows Error:" << dw << ": " << (char *)lpMsgBuf << std::endl;
		LocalFree(lpMsgBuf);
	}
	else
	{
		std::cout << "Windows Error:" << dw << ": unknown" <<  std::endl;
	}
}

GraphicsContext::GraphicsContext(ContextTarget &target)
	: Target (target)
{
	PIXELFORMATDESCRIPTOR pfd =			// pfd Tells Windows How We Want Things To Be
	{
		sizeof(PIXELFORMATDESCRIPTOR),	// Size Of This Pixel Format Descriptor
		1,								// Version Number
		PFD_DRAW_TO_WINDOW |			// Format Must Support Window
		PFD_SUPPORT_OPENGL |			// Format Must Support OpenGL
		PFD_DOUBLEBUFFER,				// Must Support Double Buffering
		PFD_TYPE_RGBA,					// Request An RGBA Format
		32,								// Select Our Color Depth
		0, 0, 0, 0, 0, 0,				// Color Bits Ignored
		0,								// No Alpha Buffer
		0,								// Shift Bit Ignored
		0,								// No Accumulation Buffer
		0, 0, 0, 0,						// Accumulation Bits Ignored
		24,								// 24-Bit Z-Buffer (Depth Buffer)
		8,								// 8-Bit Stencil Buffer
		0,								// No Auxiliary Buffer
		PFD_MAIN_PLANE,					// Main Drawing Layer
		0,								// Reserved
		0, 0, 0								// Layer Masks Ignored
	};
	PixelFormat = 1;
	if (!(PixelFormat = ChoosePixelFormat (target.GetHDC (), &pfd)))
	{
		DisplayWindowsError();
		cout << "Failed to choose pixel format." << endl;
	}

	if (!SetPixelFormat(target.GetHDC(),PixelFormat, &pfd))
	{
		//DestroyGameWindow (); //Insert Error
		DisplayWindowsError();
		cout << "Failed to set pixel format." << endl;
	}

	HGLRC temp;
	temp = wglCreateContext(target.GetHDC());
	if (!temp)
	{
		//DestroyGameWindow (); //Insert Error
		cout << "Failed to create context" << endl;
		
	}
	DisplayWindowsError();
	if (!wglMakeCurrent(target.GetHDC (), temp))
	{
		//DestroyGameWindow (); 
		cout << "Failed to make current." << endl;
		GLErrorCheck();
	}
	DisplayWindowsError();

	GLenum err = glewInit();

	if (err != GLEW_OK)
	{
		char *error = (char *)glewGetErrorString(err);
		cout << "GLEW INIT FAIL: " << error << endl;
	}

	int contextattribs [] = 
	{
		WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
		WGL_CONTEXT_MINOR_VERSION_ARB, 2,
#ifdef _DEBUG
		WGL_CONTEXT_FLAGS_ARB,  WGL_CONTEXT_DEBUG_BIT_ARB,
#endif
		0
	};

	int pfattribs[] =
	{
		WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
		WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
		WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
		WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
		WGL_COLOR_BITS_ARB, 32,
		WGL_DEPTH_BITS_ARB, 24,
		WGL_STENCIL_BITS_ARB, 8,
		0
	};

	if (wglewIsSupported ("WGL_ARB_create_context") == 1)
	{
		unsigned int formatcount;

		if (!wglChoosePixelFormatARB(target.GetHDC(), pfattribs, nullptr, 1, (int *)&PixelFormat, &formatcount))
		{
			std::cout << "Failed to find a matching pixel format" << std::endl;
			DisplayWindowsError();
		}

		if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
		{
			DisplayWindowsError();
			std::cout << "Failed to set pixelformat" << std::endl;
		}


		hRC = wglCreateContextAttribsARB(Target.GetHDC(), nullptr, contextattribs);
		if (!hRC)
		{
			DisplayWindowsError();
			std::cout << "Failed to create context." << std::endl;
		}
		wglMakeCurrent(nullptr, nullptr);
		DisplayWindowsError();
		wglDeleteContext(temp);
		DisplayWindowsError();
		GLErrorCheck();
		MakeCurrent ();
		
	}
	else
	{
		cout << "Failed to create context again..." << endl;
	}

#ifdef _DEBUG
	glEnable(GL_DEBUG_OUTPUT);
	glDebugMessageCallback(dbgcallback, nullptr);
#endif


	char *shadeversion = (char *)glGetString (GL_SHADING_LANGUAGE_VERSION);
	//GLErrorCheck;
	char *version = (char *)glGetString(GL_VERSION);
	//GLErrorCheck;
	std::cout << "Version: " << version << std::endl << "Shading Version: " << shadeversion << std::endl;

	glViewport (0,0,Target.GetWidth (), Target.GetHeight ());
	GLErrorCheck ();


	SetClearColor (Color(0,0,0,0));
	SetClearDepth(1000.0f);
	//EnableDepthBuffering ();
	//DisableDepthTest ();
	NormalBlending ();

	

	glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST); //Doesn't get Abstracted
	GLErrorCheck();



	//glLoadIdentity ();
}

Gist mirror https://gist.github.com/LordTocs/9266d8c8f7e3eb9a498e

 

If anyone knows what I'm doing wrong I'd love to know.

Thanks.

 

 

 

 

 

 


Using the code from your first link, try:

 

    int attribs [] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
#ifdef _DEBUG
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB | WGL_CONTEXT_DEBUG_BIT_ARB,
#else
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
#endif
        0
    };
 
I think "WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB," is needed for versions greater then 3.2


Hmm, I placed it into the "contextattribs []" array in the new code, but I'm still getting the same result. Thanks for the help.


I suppose there isn't really a great reason not to use GLFW, other than that I do my own window creation as well; I have a little UI framework that the context creation hooks into. I suppose I could fiddle with GLFW to get it all to work in harmony, but I've been using my own context creation for about a year now, and it only recently decided to die on my laptop. Is there any reason I can't use my own context creation? It's part "not invented here" syndrome and part curiosity as to what I'm doing wrong. Perhaps I'll look at GLFW's source for hints.


I noticed 3221692565 is 0xC0072095. According to the NVidia create context spec, 0x2095 is ERROR_INVALID_VERSION_ARB. So I bumped the version down to 3.3 and it successfully creates a context, which raises more questions, because I should be able to create a 4.2 context. I need a 4.2 context because of shader_storage_buffers and other things.
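For anyone hitting the same thing, here's a minimal standalone sketch of that decoding (plain Win32, nothing from my engine code): the low 16 bits of the GetLastError() value hold the error code defined by the WGL_ARB_create_context spec.

#include <windows.h>
#include <cstdio>

int main()
{
	DWORD raw  = 3221692565u;   // the value GetLastError() handed back (0xC0072095)
	WORD  code = LOWORD(raw);   // the low word is the extension-defined error code

	std::printf("raw = 0x%08lX, wgl error code = 0x%04X\n",
	            static_cast<unsigned long>(raw), code);
	// 0x2095 is ERROR_INVALID_VERSION_ARB in the WGL_ARB_create_context spec.
	return 0;
}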

 

[Screenshots attached showing the GPU's driver info and the OpenGL version it reports.]


Your driver reports OpenGL 4.3, *not* 4.2. Try creating a 4.3 context and see what happens.

 

OpenGL drivers are notoriously picky about version specifications - on my MacBook the drivers require I specify the exact version supported by the driver, not any previous major/minor version.

Edited by swiftcoder


Interesting thought. Alas, 4.3 fails as well.

 

Here's where the code's at now.


void DisplayWindowsError()
{
	LPVOID lpMsgBuf = nullptr;	// initialize so the check below is valid if FormatMessage fails
	DWORD dw = GetLastError();

	FormatMessage(
		FORMAT_MESSAGE_ALLOCATE_BUFFER |
		FORMAT_MESSAGE_FROM_SYSTEM |
		FORMAT_MESSAGE_IGNORE_INSERTS,
		NULL,
		dw,
		MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
		(LPTSTR)&lpMsgBuf,
		0, NULL);

	if (lpMsgBuf)
	{
		std::cout << "Windows Error:" << dw << ": " << (char *)lpMsgBuf << std::endl;
		LocalFree(lpMsgBuf);
	}
	else
	{
		std::cout << "Windows Error:" << dw << ": unknown " << std::hex << dw <<  std::endl;
	}
}

GraphicsContext::GraphicsContext(ContextTarget &target)
	: Target (target)
{
	PIXELFORMATDESCRIPTOR pfd =			// pfd Tells Windows How We Want Things To Be
	{
		sizeof(PIXELFORMATDESCRIPTOR),	// Size Of This Pixel Format Descriptor
		1,								// Version Number
		PFD_DRAW_TO_WINDOW |			// Format Must Support Window
		PFD_SUPPORT_OPENGL |			// Format Must Support OpenGL
		PFD_DOUBLEBUFFER,				// Must Support Double Buffering
		PFD_TYPE_RGBA,					// Request An RGBA Format
		32,								// Select Our Color Depth
		0, 0, 0, 0, 0, 0,				// Color Bits Ignored
		0,								// No Alpha Buffer
		0,								// Shift Bit Ignored
		0,								// No Accumulation Buffer
		0, 0, 0, 0,						// Accumulation Bits Ignored
		24,								// 24-Bit Z-Buffer (Depth Buffer)
		8,								// 8-Bit Stencil Buffer
		0,								// No Auxiliary Buffer
		PFD_MAIN_PLANE,					// Main Drawing Layer
		0,								// Reserved
		0, 0, 0								// Layer Masks Ignored
	};
	PixelFormat = 1;
	if (!(PixelFormat = ChoosePixelFormat (target.GetHDC (), &pfd)))
	{
		DisplayWindowsError();
		cout << "Failed to choose pixel format." << endl;
	}

	if (!SetPixelFormat(target.GetHDC(),PixelFormat, &pfd))
	{
		//DestroyGameWindow (); //Insert Error
		DisplayWindowsError();
		cout << "Failed to set pixel format." << endl;
	}

	HGLRC temp;
	temp = wglCreateContext(target.GetHDC());
	if (!temp)
	{
		//DestroyGameWindow (); //Insert Error
		cout << "Failed to create context" << endl;
		
	}
	DisplayWindowsError();
	if (!wglMakeCurrent(target.GetHDC (), temp))
	{
		//DestroyGameWindow (); 
		cout << "Failed to make current." << endl;
		GLErrorCheck();
	}
	DisplayWindowsError();

	GLenum err = glewInit();

	if (err != GLEW_OK)
	{
		char *error = (char *)glewGetErrorString(err);
		cout << "GLEW INIT FAIL: " << error << endl;
	}

	int contextattribs [] = 
	{
		WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
		WGL_CONTEXT_MINOR_VERSION_ARB, 3,
		WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
#ifdef _DEBUG
		WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB,
#endif
		0
	};

	int pfattribs[] =
	{
		WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
		WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
		WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
		WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
		WGL_COLOR_BITS_ARB, 32,
		WGL_DEPTH_BITS_ARB, 24,
		WGL_STENCIL_BITS_ARB, 8,
		//WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
		0
	};

	if (wglewIsSupported ("WGL_ARB_create_context") == 1)
	{
		unsigned int formatcount;

		if (!wglChoosePixelFormatARB(target.GetHDC(), pfattribs, nullptr, 1, (int *)&PixelFormat, &formatcount))
		{
			std::cout << "Failed to find a matching pixel format" << std::endl;
			DisplayWindowsError();
		}

		if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
		{
			DisplayWindowsError();
			std::cout << "Failed to set pixelformat" << std::endl;
		}


		hRC = wglCreateContextAttribsARB(Target.GetHDC(), nullptr, contextattribs);
		if (!hRC)
		{
			DisplayWindowsError();
			std::cout << "Failed to create context." << std::endl;
		}
		wglMakeCurrent(nullptr, nullptr);
		DisplayWindowsError();
		wglDeleteContext(temp);
		DisplayWindowsError();
		GLErrorCheck();
		MakeCurrent ();
		
	}
	else
	{
		cout << "Failed to create context again..." << endl;
	}

#ifdef _DEBUG
	glEnable(GL_DEBUG_OUTPUT);
	glDebugMessageCallback(dbgcallback, nullptr);
#endif


	char *shadeversion = (char *)glGetString (GL_SHADING_LANGUAGE_VERSION);
	//GLErrorCheck;
	char *version = (char *)glGetString(GL_VERSION);
	//GLErrorCheck;
	std::cout << "Version: " << version << std::endl << "Shading Version: " << shadeversion << std::endl;

	glViewport (0,0,Target.GetWidth (), Target.GetHeight ());
	GLErrorCheck ();


	SetClearColor (Color(0,0,0,0));
	SetClearDepth(1000.0f);
	//EnableDepthBuffering ();
	//DisableDepthTest ();
	NormalBlending ();

	

	glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST); //Doesn't get Abstracted
	GLErrorCheck();



	//glLoadIdentity ();
}
 
Outputs:
Windows Error:0: The operation completed successfully.
Windows Error:0: The operation completed successfully.
Windows Error:3221692565: unknown c0072095
Failed to create context.
 
Maybe my drivers need updating or something...


Setting the major version to 1 and the minor version to 0 in the context attribute list gives you the latest/highest version supported by the driver, at least for a compatibility profile context.
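For example, the attribute list would look something like this (just a sketch; it assumes a valid device context in hdc and that the WGL extension entry points are already loaded, as in the code above):

int versionProbeAttribs[] =
{
	WGL_CONTEXT_MAJOR_VERSION_ARB, 1,
	WGL_CONTEXT_MINOR_VERSION_ARB, 0,
	0
};

// The driver hands back the highest compatibility-profile version it supports.
HGLRC rc = wglCreateContextAttribsARB(hdc, nullptr, versionProbeAttribs);
// After wglMakeCurrent(hdc, rc), glGetString(GL_VERSION) reports what you actually got.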


After some reading I was told not to create a forward-compatible context, and that I should use wglChoosePixelFormatARB to get the appropriate pixel format. Thinking this was the issue, I tried that function, but it didn't help.


Told by whom? The only thing to be aware of when creating a compatibility context is deprecation, and even if the driver supports a core context, the deprecated functionality is still present in the driver, just not accessible from the core profile context. Feel free to use whatever context suits your taste. In general it's always good to stay away from deprecated functionality in whatever software library you are using, and OpenGL shouldn't be any different in that regard.


The wiki says "A forward compatible context must fully remove deprecated features in the version that it returns; you should never actually use this." It's actually bolded on the wiki, so I took it at face value. Perhaps the wiki is overzealous. It seems like it would be a good idea not to include deprecated features; however, I suppose if somewhere down the line features I'm using became deprecated and I asked for the latest context without deprecated features, I would suddenly break my program. That seems like a far-out case, though.

Interestingly enough, if I set major to 1 and minor to 0, I get a 3.3 context. I tried updating the drivers; same results.


The wiki says "A forward compatible context must fully remove deprecated features in the version that it returns; you should never actually use this." It's actually bolded on the wiki, so I took it at face value. Perhaps the wiki is overzealous. It seems like it would be a good idea not to include deprecated features; however, I suppose if somewhere down the line features I'm using became deprecated and I asked for the latest context without deprecated features, I would suddenly break my program. That seems like a far-out case, though.

They are just paraphrasing NVidia's guidelines there. You would expect a Core context to be more efficient, given that all deprecated functionality can be removed, but in some cases a single driver implements both Core and non-Core contexts, and in the Core case, it may have to add a bunch of extra error checking to make sure you don't call non-Core functionality.

I generally recommend that you use a Core context for development, and if you feel it necessary, switch to a non-Core context for release.
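As a rough sketch of what I mean (using the standard WGL_ARB_create_context_profile tokens; this isn't code from this thread, just an illustration):

#ifdef _DEBUG
	const int profileBit = WGL_CONTEXT_CORE_PROFILE_BIT_ARB;          // strict: flags any use of deprecated calls
#else
	const int profileBit = WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB; // lenient profile for release builds
#endif

int attribs[] =
{
	WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
	WGL_CONTEXT_MINOR_VERSION_ARB, 3,
	WGL_CONTEXT_PROFILE_MASK_ARB,  profileBit,
	0
};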

