OpenGL: Creating an OpenGL context on Windows with GLEW


Something weird has happened. I had context creation working just fine: it worked on my desktop and my laptop, and I was happily plugging along doing graphics programming. Then one day I tried to build my code on my laptop again, and context creation suddenly started to fail. I don't know what changed.

 

My original context creation code looks like this:

https://gist.github.com/LordTocs/f227528a729986df9643

 

It's sloppy and has next to no error handling, but it at least worked. It still works on my desktop and fails on my laptop.

 

Specifically, wglCreateContextAttribsARB fails.

 

I started modifying the code in an attempt to figure out what was wrong. I added some "GetLastError()" printouts, hoping I was doing something silly and it would tell me what, using "FormatMessage()" to turn the error codes into readable strings.

 

Instead of a usable error I was greeted with GetLastError() returning 3221692565, which FormatMessage() had no idea what to do with. A cursory internet search led me to a single result on the OpenGL forums, which didn't yield anything useful.

 

After some reading I was told not to create a forward-compatible context, and that I should use wglChoosePixelFormatARB to get the appropriate pixel format. Thinking this was the issue, I tried using that function; it didn't help.

 

So now I'm left with this code that doesn't work and I'm very confused.

void DisplayWindowsError()
{
	LPVOID lpMsgBuf = nullptr;	// initialize so the check below is safe if FormatMessage fails
	DWORD dw = GetLastError();

	FormatMessage(
		FORMAT_MESSAGE_ALLOCATE_BUFFER |
		FORMAT_MESSAGE_FROM_SYSTEM |
		FORMAT_MESSAGE_IGNORE_INSERTS,
		NULL,
		dw,
		MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
		(LPTSTR)&lpMsgBuf,
		0, NULL);

	if (lpMsgBuf)
	{
		std::cout << "Windows Error:" << dw << ": " << (char *)lpMsgBuf << std::endl;
		LocalFree(lpMsgBuf);
	}
	else
	{
		std::cout << "Windows Error:" << dw << ": unknown" <<  std::endl;
	}
}

GraphicsContext::GraphicsContext(ContextTarget &target)
	: Target (target)
{
	PIXELFORMATDESCRIPTOR pfd =			// pfd Tells Windows How We Want Things To Be
	{
		sizeof(PIXELFORMATDESCRIPTOR),	// Size Of This Pixel Format Descriptor
		1,								// Version Number
		PFD_DRAW_TO_WINDOW |			// Format Must Support Window
		PFD_SUPPORT_OPENGL |			// Format Must Support OpenGL
		PFD_DOUBLEBUFFER,				// Must Support Double Buffering
		PFD_TYPE_RGBA,					// Request An RGBA Format
		32,								// Select Our Color Depth
		0, 0, 0, 0, 0, 0,				// Color Bits Ignored
		0,								// No Alpha Buffer
		0,								// Shift Bit Ignored
		0,								// No Accumulation Buffer
		0, 0, 0, 0,						// Accumulation Bits Ignored
		24,								// 24-Bit Z-Buffer (Depth Buffer)
		8,								// 8-Bit Stencil Buffer
		0,								// No Auxiliary Buffer
		PFD_MAIN_PLANE,					// Main Drawing Layer
		0,								// Reserved
		0, 0, 0								// Layer Masks Ignored
	};
	PixelFormat = 1;
	if (!(PixelFormat = ChoosePixelFormat (target.GetHDC (), &pfd)))
	{
		DisplayWindowsError();
		cout << "Failed to choose pixel format." << endl;
	}

	if (!SetPixelFormat(target.GetHDC(),PixelFormat, &pfd))
	{
		//DestroyGameWindow (); //Insert Error
		DisplayWindowsError();
		cout << "Failed to set pixel format." << endl;
	}

	HGLRC temp;
	temp = wglCreateContext(target.GetHDC());
	if (!temp)
	{
		//DestroyGameWindow (); //Insert Error
		cout << "Failed to create context" << endl;
		
	}
	DisplayWindowsError();
	if (!wglMakeCurrent(target.GetHDC (), temp))
	{
		//DestroyGameWindow (); 
		cout << "Failed to make current." << endl;
		GLErrorCheck();
	}
	DisplayWindowsError();

	GLenum err = glewInit();

	if (err != GLEW_OK)
	{
		char *error = (char *)glewGetErrorString(err);
		cout << "GLEW INIT FAIL: " << error << endl;
	}

	int contextattribs [] = 
	{
		WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
		WGL_CONTEXT_MINOR_VERSION_ARB, 2,
#ifdef _DEBUG
		WGL_CONTEXT_FLAGS_ARB,  WGL_CONTEXT_DEBUG_BIT_ARB,
#endif
		0
	};

	int pfattribs[] =
	{
		WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
		WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
		WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
		WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
		WGL_COLOR_BITS_ARB, 32,
		WGL_DEPTH_BITS_ARB, 24,
		WGL_STENCIL_BITS_ARB, 8,
		0
	};

	if (wglewIsSupported ("WGL_ARB_create_context") == 1)
	{
		unsigned int formatcount;

		if (!wglChoosePixelFormatARB(target.GetHDC(), pfattribs, nullptr, 1, (int *)&PixelFormat, &formatcount))
		{
			std::cout << "Failed to find a matching pixel format" << std::endl;
			DisplayWindowsError();
		}

		if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
		{
			DisplayWindowsError();
			std::cout << "Failed to set pixelformat" << std::endl;
		}


		hRC = wglCreateContextAttribsARB(Target.GetHDC(), nullptr, contextattribs);
		if (!hRC)
		{
			DisplayWindowsError();
			std::cout << "Failed to create context." << std::endl;
		}
		wglMakeCurrent(nullptr, nullptr);
		DisplayWindowsError();
		wglDeleteContext(temp);
		DisplayWindowsError();
		GLErrorCheck();
		MakeCurrent ();
		
	}
	else
	{
		cout << "Failed to create context again..." << endl;
	}

#ifdef _DEBUG
	glEnable(GL_DEBUG_OUTPUT);
	glDebugMessageCallback(dbgcallback, nullptr);
#endif


	char *shadeversion = (char *)glGetString (GL_SHADING_LANGUAGE_VERSION);
	//GLErrorCheck;
	char *version = (char *)glGetString(GL_VERSION);
	//GLErrorCheck;
	std::cout << "Version: " << version << std::endl << "Shading Version: " << shadeversion << std::endl;

	glViewport (0,0,Target.GetWidth (), Target.GetHeight ());
	GLErrorCheck ();


	SetClearColor (Color(0,0,0,0));
	SetClearDepth(1000.0f);
	//EnableDepthBuffering ();
	//DisableDepthTest ();
	NormalBlending ();

	

	glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST); //Doesn't get Abstracted
	GLErrorCheck();



	//glLoadIdentity ();
}

Gist mirror: https://gist.github.com/LordTocs/9266d8c8f7e3eb9a498e

 

If anyone knows what I'm doing wrong I'd love to know.

Thanks.

Using the code from your first link, try:

 

    int attribs [] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
#ifdef _DEBUG
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB | WGL_CONTEXT_DEBUG_BIT_ARB,
#else
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
#endif
        0
    };
 
I think "WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB," is needed for versions greater then 3.2


Hmm, I placed it into the "contextattribs[]" array in the new code. Still getting the same result. Thanks for the help.


I suppose there isn't really a great reason not to use GLFW, other than that I do my own window creation as well, and I have a little UI framework that the context creation hooks into. I suppose I could fiddle with GLFW to get it all to work in harmony, but I've been using my own context creation for about a year now, and it only recently decided to die on my laptop. Is there any reason I can't use my own context creation? It's part "not invented here" syndrome and part curiosity as to what I'm doing wrong. Perhaps I'll look at GLFW's source for hints.


I noticed 3221692565 is 0xC0072095, and according to the NVIDIA create-context spec 0x2095 is ERROR_INVALID_VERSION_ARB. So I bumped the version down to 3.3 and it successfully creates a context. Which raises more questions, because I should be able to create a 4.2 context; I need 4.2 for shader_storage_buffers and other things.
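(For reference, here's a little sketch of decoding such a value; the helper name is mine. The low 16 bits carry the WGL error code, which is why FormatMessage() draws a blank: these codes come from the driver per the WGL_ARB_create_context specs, not from the system message table.)

#include <cstdint>
#include <cstdio>

// Hypothetical helper: decode a raw GetLastError() value such as 0xC0072095.
// The WGL_ARB_create_context(_profile) specs define ERROR_INVALID_VERSION_ARB
// (0x2095) and ERROR_INVALID_PROFILE_ARB (0x2096); they are reported through
// GetLastError(), so FormatMessage() has no system message text for them.
void DecodeWglError(uint32_t err)
{
	uint32_t code = err & 0xFFFF;	// low word holds the WGL error code
	std::printf("raw: %u (0x%08X) -> code 0x%04X\n", err, err, code);
	if (code == 0x2095)
		std::printf("ERROR_INVALID_VERSION_ARB\n");
	else if (code == 0x2096)
		std::printf("ERROR_INVALID_PROFILE_ARB\n");
}

// DecodeWglError(3221692565u) prints:
// raw: 3221692565 (0xC0072095) -> code 0x2095
// ERROR_INVALID_VERSION_ARB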

 

[Screenshots: GPU caps viewer output showing the driver reporting OpenGL 4.3 support]


Your driver reports OpenGL 4.3, *not* 4.2. Try creating a 4.3 context and see what happens.

 

OpenGL drivers are notoriously picky about version specifications; on my MacBook the drivers require that I specify the exact version supported by the driver, not any previous major/minor version.
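One quick sanity check, once any context is current, is to ask the driver what it actually gave you. A minimal sketch (the function name is mine; GL_MAJOR_VERSION/GL_MINOR_VERSION are valid queries on 3.0+ contexts):

#include <GL/glew.h>
#include <iostream>

// Sketch: call after wglMakeCurrent succeeds. On pre-3.0 contexts these
// queries don't exist; fall back to parsing glGetString(GL_VERSION).
void PrintContextVersion()
{
	GLint major = 0, minor = 0;
	glGetIntegerv(GL_MAJOR_VERSION, &major);
	glGetIntegerv(GL_MINOR_VERSION, &minor);
	std::cout << "Got a " << major << "." << minor << " context" << std::endl;
}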


Interesting thought. Alas, 4.3 fails as well.

 

Here's where the code's at now.


void DisplayWindowsError()
{
	LPVOID lpMsgBuf = nullptr;	// initialize so the check below is safe if FormatMessage fails
	DWORD dw = GetLastError();

	FormatMessage(
		FORMAT_MESSAGE_ALLOCATE_BUFFER |
		FORMAT_MESSAGE_FROM_SYSTEM |
		FORMAT_MESSAGE_IGNORE_INSERTS,
		NULL,
		dw,
		MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
		(LPTSTR)&lpMsgBuf,
		0, NULL);

	if (lpMsgBuf)
	{
		std::cout << "Windows Error:" << dw << ": " << (char *)lpMsgBuf << std::endl;
		LocalFree(lpMsgBuf);
	}
	else
	{
		std::cout << "Windows Error:" << dw << ": unknown " << std::hex << dw <<  std::endl;
	}
}

GraphicsContext::GraphicsContext(ContextTarget &target)
	: Target (target)
{
	PIXELFORMATDESCRIPTOR pfd =			// pfd Tells Windows How We Want Things To Be
	{
		sizeof(PIXELFORMATDESCRIPTOR),	// Size Of This Pixel Format Descriptor
		1,								// Version Number
		PFD_DRAW_TO_WINDOW |			// Format Must Support Window
		PFD_SUPPORT_OPENGL |			// Format Must Support OpenGL
		PFD_DOUBLEBUFFER,				// Must Support Double Buffering
		PFD_TYPE_RGBA,					// Request An RGBA Format
		32,								// Select Our Color Depth
		0, 0, 0, 0, 0, 0,				// Color Bits Ignored
		0,								// No Alpha Buffer
		0,								// Shift Bit Ignored
		0,								// No Accumulation Buffer
		0, 0, 0, 0,						// Accumulation Bits Ignored
		24,								// 24-Bit Z-Buffer (Depth Buffer)
		8,								// 8-Bit Stencil Buffer
		0,								// No Auxiliary Buffer
		PFD_MAIN_PLANE,					// Main Drawing Layer
		0,								// Reserved
		0, 0, 0								// Layer Masks Ignored
	};
	PixelFormat = 1;
	if (!(PixelFormat = ChoosePixelFormat (target.GetHDC (), &pfd)))
	{
		DisplayWindowsError();
		cout << "Failed to choose pixel format." << endl;
	}

	if (!SetPixelFormat(target.GetHDC(),PixelFormat, &pfd))
	{
		//DestroyGameWindow (); //Insert Error
		DisplayWindowsError();
		cout << "Failed to set pixel format." << endl;
	}

	HGLRC temp;
	temp = wglCreateContext(target.GetHDC());
	if (!temp)
	{
		//DestroyGameWindow (); //Insert Error
		cout << "Failed to create context" << endl;
		
	}
	DisplayWindowsError();
	if (!wglMakeCurrent(target.GetHDC (), temp))
	{
		//DestroyGameWindow (); 
		cout << "Failed to make current." << endl;
		GLErrorCheck();
	}
	DisplayWindowsError();

	GLenum err = glewInit();

	if (err != GLEW_OK)
	{
		char *error = (char *)glewGetErrorString(err);
		cout << "GLEW INIT FAIL: " << error << endl;
	}

	int contextattribs [] = 
	{
		WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
		WGL_CONTEXT_MINOR_VERSION_ARB, 3,
		WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
#ifdef _DEBUG
		WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB,
#endif
		0
	};

	int pfattribs[] =
	{
		WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
		WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
		WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
		WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
		WGL_COLOR_BITS_ARB, 32,
		WGL_DEPTH_BITS_ARB, 24,
		WGL_STENCIL_BITS_ARB, 8,
		//WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
		0
	};

	if (wglewIsSupported ("WGL_ARB_create_context") == 1)
	{
		unsigned int formatcount;

		if (!wglChoosePixelFormatARB(target.GetHDC(), pfattribs, nullptr, 1, (int *)&PixelFormat, &formatcount))
		{
			std::cout << "Failed to find a matching pixel format" << std::endl;
			DisplayWindowsError();
		}

		if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
		{
			DisplayWindowsError();
			std::cout << "Failed to set pixelformat" << std::endl;
		}


		hRC = wglCreateContextAttribsARB(Target.GetHDC(), nullptr, contextattribs);
		if (!hRC)
		{
			DisplayWindowsError();
			std::cout << "Failed to create context." << std::endl;
		}
		wglMakeCurrent(nullptr, nullptr);
		DisplayWindowsError();
		wglDeleteContext(temp);
		DisplayWindowsError();
		GLErrorCheck();
		MakeCurrent ();
		
	}
	else
	{
		cout << "Failed to create context again..." << endl;
	}

#ifdef _DEBUG
	glEnable(GL_DEBUG_OUTPUT);
	glDebugMessageCallback(dbgcallback, nullptr);
#endif


	char *shadeversion = (char *)glGetString (GL_SHADING_LANGUAGE_VERSION);
	//GLErrorCheck;
	char *version = (char *)glGetString(GL_VERSION);
	//GLErrorCheck;
	std::cout << "Version: " << version << std::endl << "Shading Version: " << shadeversion << std::endl;

	glViewport (0,0,Target.GetWidth (), Target.GetHeight ());
	GLErrorCheck ();


	SetClearColor (Color(0,0,0,0));
	SetClearDepth(1000.0f);
	//EnableDepthBuffering ();
	//DisableDepthTest ();
	NormalBlending ();

	

	glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST); //Doesn't get Abstracted
	GLErrorCheck();



	//glLoadIdentity ();
}
 
Outputs:
Windows Error:0: The operation completed successfully.
Windows Error:0: The operation completed successfully.
Windows Error:3221692565: unknown c0072095
Failed to create context.
 
Maybe my drivers need updating or something...


Setting the major version to 1 and the minor version to 0 in the context attribute list gives you the latest/highest version supported by the driver, at least for a compatibility profile context.
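In attribute-list form that's roughly the sketch below (per the WGL_ARB_create_context spec the default request is version 1.0, and the implementation may return the highest version that is backwards compatible with it; hdc stands in for whatever device context you're using):

// Sketch: request 1.0 and let the driver return its newest
// backwards-compatible version (typically a compatibility profile).
int versionprobe[] =
{
	WGL_CONTEXT_MAJOR_VERSION_ARB, 1,
	WGL_CONTEXT_MINOR_VERSION_ARB, 0,
	0
};
HGLRC rc = wglCreateContextAttribsARB(hdc, nullptr, versionprobe);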


Quote: "After some reading I was told not to create a forward-compatible context, and that I should use wglChoosePixelFormatARB to get the appropriate pixel format. Thinking this was the issue, I tried using that function; it didn't help."


Told by whom? The only thing to be aware of when creating a compatibility context is deprecation, and even if the driver supports a core context, the deprecated functionality is still present in the driver, just not accessible from a core profile context. Feel free to use whatever context suits your taste. In general it's always good to stay away from deprecated functionality in whatever software library you are using, and OpenGL shouldn't be any different in that regard.


The wiki says "A forward compatible context must fully remove deprecated features in the version that it returns; you should never actually use this." It's actually bolded on the wiki, so I took it at face value. Perhaps the wiki is overzealous. It seems like it would be a good idea not to include deprecated features; however, I suppose if somewhere down the line features I'm using became deprecated and I asked for the latest context without deprecated features, I would suddenly break my program. That seems like a far-out case though.

Interestingly enough, if I set major to 1 and minor to 0, I get a 3.3 context. I tried updating the drivers; same results.


Quote: "The wiki says 'A forward compatible context must fully remove deprecated features in the version that it returns; you should never actually use this.' It's actually bolded on the wiki, so I took it at face value. Perhaps the wiki is overzealous. It seems like it would be a good idea not to include deprecated features; however, I suppose if somewhere down the line features I'm using became deprecated and I asked for the latest context without deprecated features, I would suddenly break my program. That seems like a far-out case though."

They are just paraphrasing NVIDIA's guidelines there. You would expect a Core context to be more efficient, given that all deprecated functionality can be removed, but in some cases a single driver implements both Core and non-Core contexts, and in the Core case it may have to add a bunch of extra error checking to make sure you don't call non-Core functionality.

I generally recommend that you use a Core context for development, and if you feel it necessary, switch to a non-Core context for release.
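Something like this, roughly (a sketch; the profile bits are defined by WGL_ARB_create_context_profile, and only one bit may be set in the mask):

// Sketch: strict core profile while developing, compatibility for release.
int profileattribs[] =
{
	WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
	WGL_CONTEXT_MINOR_VERSION_ARB, 3,
#ifdef _DEBUG
	WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
	WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB,
#else
	WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
#endif
	0
};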
