
OpenGL Knowing my REAL OpenGL version - RESOLVED


So I have an NVidia GeForce 525M. I am trying to use the NVidia NSight debugger, as suggested by a user in one of my recent threads. Most of NSight's capabilities require OpenGL 4.2.

 

I looked up info on my graphics card: http://en.wikipedia.org/wiki/GeForce_500_Series#GeForce_500M_.285xxM.29_series

It looks like 4.2 is supported.

 

I also made use of GlView: http://www.realtech-vr.com/glview/ to check for 4.2 support.

 

While running NSight, however, it complains that I have OpenGL 3. All my shaders use GLSL version 420 and they all work just fine. I have also downloaded the latest drivers for my card.

 

I am using GLEW 1.9.0, which should give me OpenGL 4.3 support...

 

Any idea why NSight would say I have OpenGL 3?

 

(edit) 

 

I made use of these calls:

 

GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);

 

Both return 4, i.e. an OpenGL 4.4 context.

 

So my card supports 4.2, my version of GLEW supports 4.3, the OpenGL queries say I have 4.4, and NSight tells me I have 3.0... hahaha

Edited by mikev


Thanks, Spiro!

 

That's almost definitely the problem... working on setting the OpenGL context's version attributes...

GLint attribs[] =
{
	// Here we ask for OpenGL 4.2
	WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
	WGL_CONTEXT_MINOR_VERSION_ARB, 2,
	// Uncomment this for forward compatibility mode
	//WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
	// Uncomment this for Compatibility profile
	//WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
	// We are using Core profile here
	WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
	0
};

if (!(mhRC = wglCreateContextAttribsARB(mhDC, 0, attribs)))

This is throwing an access violation exception for me though.. trying to figure that out at the moment...


Alright, I'm at my wits' end with this thing.

 

So, my understanding is that to create a GL Context with a specific version, I need to use wglCreateContextAttribsARB

 

This is a little messy because it's an OpenGL extension, so OpenGL has to be initialized before you can use it - meaning you need to create a dummy GL context just so you can initialize your desired context. Weird, but okay.

 

So I create a context as I was before, then call glewInit(), then delete my context, and create a new one with wglCreateContextAttribsARB. I pass in the following attributes:

GLint attribs[] =
{
// Here we ask for OpenGL 4.2
WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
WGL_CONTEXT_MINOR_VERSION_ARB, 2,
WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
0
};

This particular combination of attributes successfully creates a new context, but NSight still complains that the OpenGL version is 3.0.0. In addition, if I change the WGL_CONTEXT_PROFILE_MASK_ARB value to WGL_CONTEXT_CORE_PROFILE_BIT_ARB, my code throws no exceptions but my screen goes totally blank - presumably because a core profile drops the legacy fixed-function calls my old rendering code still uses.

 

The context returned by wglCreateContextAttribsARB is non-zero though, so I believe it did create a valid context.

 

Another peculiar observation: if I run in full-screen mode, using Windows' ChangeDisplaySettings() function, NSight tells me the OpenGL version is 1.1...

 

In either case, it looks like my version number attributes are being ignored. :(

 

Edited by mikev


Here is my full context creation code (a bit sloppy, I know):

BOOL GLContext::CreateGLWindow(char* title, int width, int height, int bits, bool fullscreenflag)
{
	
	GLuint		PixelFormat;			// Holds The Results After Searching For A Match
	WNDCLASS	wc;						// Windows Class Structure
	DWORD		dwExStyle;				// Window Extended Style
	DWORD		dwStyle;				// Window Style
	RECT		WindowRect;				// Grabs Rectangle Upper Left / Lower Right Values
	WindowRect.left=(long)0;			// Set Left Value To 0
	WindowRect.right=(long)width;		// Set Right Value To Requested Width
	WindowRect.top=(long)0;				// Set Top Value To 0
	WindowRect.bottom=(long)height;		// Set Bottom Value To Requested Height

	fullscreen=fullscreenflag;			// Set The Global Fullscreen Flag

	hInstance			= GetModuleHandle(NULL);				// Grab An Instance For Our Window
	wc.style			= CS_HREDRAW | CS_VREDRAW | CS_OWNDC;	// Redraw On Size, And Own DC For Window.
	wc.lpfnWndProc		= (WNDPROC) NeHeWndProc;					// WndProc Handles Messages
	wc.cbClsExtra		= 0;									// No Extra Window Data
	wc.cbWndExtra		= 0;									// No Extra Window Data
	wc.hInstance		= hInstance;							// Set The Instance
	wc.hIcon			= LoadIcon(NULL, IDI_WINLOGO);			// Load The Default Icon
	wc.hCursor			= LoadCursor(NULL, IDC_ARROW);			// Load The Arrow Pointer
	wc.hbrBackground	= NULL;									// No Background Required For GL
	wc.lpszMenuName		= NULL;									// We Don't Want A Menu
	wc.lpszClassName	= "OpenGL";								// Set The Class Name


	if (!RegisterClass(&wc))									// Attempt To Register The Window Class
	{
		MessageBox(NULL,"Failed To Register The Window Class.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;											// Return FALSE
	}

	if (fullscreen)												// Attempt Fullscreen Mode?
	{
		DEVMODE dmScreenSettings;								// Device Mode
		memset(&dmScreenSettings,0,sizeof(dmScreenSettings));	// Makes Sure Memory's Cleared
		dmScreenSettings.dmSize=sizeof(dmScreenSettings);		// Size Of The Devmode Structure
		dmScreenSettings.dmPelsWidth	= width;				// Selected Screen Width
		dmScreenSettings.dmPelsHeight	= height;				// Selected Screen Height
		dmScreenSettings.dmBitsPerPel	= bits;					// Selected Bits Per Pixel
		dmScreenSettings.dmFields=DM_BITSPERPEL|DM_PELSWIDTH|DM_PELSHEIGHT;

		// Try To Set Selected Mode And Get Results.  NOTE: CDS_FULLSCREEN Gets Rid Of Start Bar.
		if (ChangeDisplaySettings(&dmScreenSettings,CDS_FULLSCREEN)!=DISP_CHANGE_SUCCESSFUL)
		{
			// If The Mode Fails, Offer Two Options.  Quit Or Use Windowed Mode.
			if (MessageBox(NULL,"The Requested Fullscreen Mode Is Not Supported By\nYour Video Card. Use Windowed Mode Instead?","NeHe GL",MB_YESNO|MB_ICONEXCLAMATION)==IDYES)
			{
				fullscreen=FALSE;		// Windowed Mode Selected.  Fullscreen = FALSE
			}
			else
			{
				// Pop Up A Message Box Letting User Know The Program Is Closing.
				MessageBox(NULL,"Program Will Now Close.","ERROR",MB_OK|MB_ICONSTOP);
				return FALSE;									// Return FALSE
			}
		}
	}

	if (fullscreen)												// Are We Still In Fullscreen Mode?
	{
		dwExStyle=WS_EX_APPWINDOW;								// Window Extended Style
		dwStyle=WS_POPUP;										// Windows Style
		ShowCursor(FALSE);										// Hide Mouse Pointer
	}
	else
	{
		dwExStyle=WS_EX_APPWINDOW | WS_EX_WINDOWEDGE;			// Window Extended Style
		dwStyle=WS_OVERLAPPEDWINDOW;							// Windows Style
	}


	AdjustWindowRectEx(&WindowRect, dwStyle, FALSE, dwExStyle);		// Adjust Window To True Requested Size

	// Create The Window
	if (!(mhWnd=CreateWindowEx(	dwExStyle,							// Extended Style For The Window
								"OpenGL",							// Class Name
								title,								// Window Title
								dwStyle |							// Defined Window Style
								WS_CLIPSIBLINGS |					// Required Window Style
								WS_CLIPCHILDREN,					// Required Window Style
								0, 0,								// Window Position
								WindowRect.right-WindowRect.left,	// Calculate Window Width
								WindowRect.bottom-WindowRect.top,	// Calculate Window Height
								NULL,								// No Parent Window
								NULL,								// No Menu
								hInstance,							// Instance
								NULL)))								// Dont Pass Anything To WM_CREATE
	{
		KillGLWindow();								// Reset The Display
		MessageBox(NULL,"Window Creation Error.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;								// Return FALSE
	}


	static	PIXELFORMATDESCRIPTOR pfd=				// pfd Tells Windows How We Want Things To Be
	{
		sizeof(PIXELFORMATDESCRIPTOR),				// Size Of This Pixel Format Descriptor
		1,											// Version Number
		PFD_DRAW_TO_WINDOW |						// Format Must Support Window
		PFD_SUPPORT_OPENGL |						// Format Must Support OpenGL
		PFD_DOUBLEBUFFER,							// Must Support Double Buffering
		PFD_TYPE_RGBA,								// Request An RGBA Format
		bits,										// Select Our Color Depth
		0, 0, 0, 0, 0, 0,							// Color Bits Ignored
		0,											// No Alpha Buffer
		0,											// Shift Bit Ignored
		0,											// No Accumulation Buffer
		0, 0, 0, 0,									// Accumulation Bits Ignored
		16,											// 16Bit Z-Buffer (Depth Buffer)  
		8,											// 8 bit Stencil Buffer
		0,											// No Auxiliary Buffer
		PFD_MAIN_PLANE,								// Main Drawing Layer
		0,											// Reserved
		0, 0, 0										// Layer Masks Ignored
	};
	
	if (!(mhDC=GetDC(mhWnd)))							// Did We Get A Device Context?
	{
		KillGLWindow();								// Reset The Display
		MessageBox(NULL,"Can't Create A GL Device Context.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;								// Return FALSE
	}

	if (!(PixelFormat=ChoosePixelFormat(mhDC,&pfd)))	// Did Windows Find A Matching Pixel Format?
	{
		KillGLWindow();								// Reset The Display
		MessageBox(NULL,"Can't Find A Suitable PixelFormat.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;								// Return FALSE
	}

	if(!SetPixelFormat(mhDC,PixelFormat,&pfd))		// Are We Able To Set The Pixel Format?
	{
		KillGLWindow();								// Reset The Display
		MessageBox(NULL,"Can't Set The PixelFormat.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;								// Return FALSE
	}

        // create the dummy context so we can initialize GLEW
	if (!(mhRC=wglCreateContext(mhDC)))				// Are We Able To Get A Rendering Context?
	{
		KillGLWindow();								// Reset The Display
		MessageBox(NULL,"Can't Create A GL Rendering Context.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;								// Return FALSE
	}

	if(!wglMakeCurrent(mhDC,mhRC))					// Try To Activate The Rendering Context
	{
		KillGLWindow();								// Reset The Display
		MessageBox(NULL,"Can't Activate The GL Rendering Context.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;								// Return FALSE
	}

	// NOTE: with a core-profile context, GLEW 1.9 may also need
	// glewExperimental = GL_TRUE before glewInit() to load core entry points.
	GLenum lError = glewInit();
	if (lError != GLEW_OK)
	{
		MessageBox(NULL,"Can't initialize GLEW.","ERROR",MB_OK|MB_ICONEXCLAMATION);
		return FALSE;
	}
        // delete the dummy context (make it not current first, to be safe)
	wglMakeCurrent(NULL, NULL);
	wglDeleteContext(mhRC);

	const int attribList[] =
	{
		WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
		WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
		WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
		WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
		WGL_COLOR_BITS_ARB, 32,
		WGL_DEPTH_BITS_ARB, 24,
		WGL_STENCIL_BITS_ARB, 8,
		0,        //End
	};

	int pixelFormat;
	UINT numFormats;

	wglChoosePixelFormatARB(mhDC, attribList, NULL, 1, &pixelFormat, &numFormats);
	// NOTE: pixelFormat is never applied - SetPixelFormat was already called above,
	// and Windows allows setting a window's pixel format only once.

	GLint attribs[] =
	{
		// Here we ask for OpenGL 4.2
		WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
		WGL_CONTEXT_MINOR_VERSION_ARB, 2,
		// Uncomment this for forward compatibility mode
		//WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
		// Uncomment this for Compatibility profile
		WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
		// We are using Core profile here
		//WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
		0
	};

        // create the desired GL Context
	HGLRC CompHRC = wglCreateContextAttribsARB(mhDC, 0, attribs);
	if (CompHRC && wglMakeCurrent(mhDC, CompHRC))
		mhRC = CompHRC;

        // initialize models, etc.
	init();


	ShowWindow(mhWnd,SW_SHOW);						// Show The Window
	SetForegroundWindow(mhWnd);						// Slightly Higher Priority
	SetFocus(mhWnd);									// Sets Keyboard Focus To The Window
	resize(width, height);					// Set Up Our Perspective GL Screen

	return TRUE;									// Success
}

Edited by mikev


Can't you like, use SDL or something? Instead of doing the context initialization on your own I mean...

 

BTW, that card should support up to OpenGL 4.5.

Edited by TheChubu


Check if wglCreateContextAttribsARB is NULL before using it.

Also, this:

if (CompHRC && wglMakeCurrent(mhDC, CompHRC))
        mhRC = CompHRC;

should be an if-else block:

if (CompHRC && wglMakeCurrent(mhDC, CompHRC))
{
	mhRC = CompHRC;
}
else
{
	// Make current failed, so quit
}
Edited by Andrew Kabakwu


Rolling your own context creation is not going to work for everyone. It's the first place I would look for "bugs" or workarounds. Using SDL is good advice.

EDIT: Also, seeing hDC in 2014, when does it end! You could support three OSes simply by using the STL and SDL2. Not because you want to support all of them, but you never know.

Edited by Kaptein


 

 

> Also, seeing hDC in 2014, when does it end!

 

hahaha, yeah I knew some people were going to be horrified when I posted that. It's some ancient code that originated from the old-school NeHe tutorials.

 

I'm going to give SDL a shot and see if it fixes my problems!


 

> I'm going to give SDL a shot and see if it fixes my problems!

 

 

I'm horrified by the suggestions to use a library/wrapper for OpenGL. :(

It is not easy, but it is always better to understand what's happening under the hood than to be helplessly dependent on others.

Despite its imperfections, OpenGL is still the best 3D graphics API for me, since I can do whatever I want with nothing more than the latest drivers installed.

Of course, I also need a development environment (read: Visual Studio). Nobody wants to code in Notepad and compile in a command prompt.

 

Let's get back to your problem. Before going any further, revise your pixel format. It is not correct. The consequence is that HW acceleration is turned off and you fall back to OpenGL 1.1.

The following code snippet shows how to create a valid GL context:

PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;

nPixelFormat = ChoosePixelFormat(hDC, &pfd);
if (nPixelFormat == 0)
{
    strcat_s(m_sErrorLog, LOGSIZE, "ChoosePixelFormat failed.\n");
    return false;
}

BOOL bResult = SetPixelFormat(hDC, nPixelFormat, &pfd);
if (!bResult)
{
    DWORD error = GetLastError();
    strcat_s(m_sErrorLog, LOGSIZE, "SetPixelFormat failed.\n");
    return false;
}

HGLRC tempContext = wglCreateContext(hDC);
wglMakeCurrent(hDC, tempContext);

int attribs[] =
{
    WGL_CONTEXT_MAJOR_VERSION_ARB, major,
    WGL_CONTEXT_MINOR_VERSION_ARB, minor,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB, // I suggest using a debug context in order to know what's really happening and easily catch bugs
    WGL_CONTEXT_PROFILE_MASK_ARB, nProfile, // nProfile = WGL_CONTEXT_CORE_PROFILE_BIT_ARB or WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB
    0
};

PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = NULL;
wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
if (wglCreateContextAttribsARB != NULL)
{
    context = wglCreateContextAttribsARB(hDC, 0, attribs);
}
wglMakeCurrent(NULL, NULL);
wglDeleteContext(tempContext);
wglMakeCurrent(hDC, context); // activate the new context


"Father, forgive them, for they do not know what they are doing." sad.png 

 

That is bad advice, aks9. There's nothing to learn when it comes to context creation, other than what a nightmare it can be if you have older (or buggy) drivers. 

 

 

This is a typical agnostic claim. Everything is a source of knowledge. Rendering context creation is the first thing one should learn when starting with computer graphics.

But OK, I don't have the time or the will to argue about that.

 

Could you post a link, an example, or whatever to illustrate "the nightmare"? I've been creating GL contexts myself for about 18 years and never had a problem. Problems could arise if you create a GL 3.0+ context and hope everyone supports it. Well, that is not a problem of the drivers. Older drivers cannot assume what might happen in the future.

 

If the drivers are buggy, there is no workaround for the problem! 

 

> I just look at the whole thing as risky, since if you take that code with you to other projects, one day one of the people trying the game/program out will simply not be able to run it because their driver requires a workaround.

 

I really don't understand this. What kind of workaround? The way a GL context is created is defined by the specification. Why risky?

 

 

 

> You can do the same exact things with SDL or any other library, except you will have less grief during the process. Most of the time, anyways.

 

 

I want to have control in my own hands, so no intermediary stuff is welcome. It is harder at the start, but the feeling of freedom is priceless.

 

 

 

This link is totally out of context. The guy is frustrated by something, but gives no arguments for his claims.

As for platform-specific APIs and porting OpenGL, there was an initiative to unify them: Khronos started development of EGL, but it has not been adopted for desktop OpenGL yet.

 

 

 

> EDIT: I downvoted you aks9, but I can't undo it. :( Your post is helpful, so I'm sorry.

 

Don't be sorry. That was your opinion and you have the right to express it through (down)voting. Points really mean nothing to me.

Forums should be a way to share knowledge and opinions. Some of them are true, some not. I hope the right advice still prevails, for the benefit of users.


Actually, for me it was because I was doing the same thing as you. But it didn't work on my brother's school-provided laptop. It had an older Intel GPU, and nothing I did worked. In the end I just gave up and used an existing library.


Thanks so much for your suggestions, Aks9 and Kaptein!

 

Before seeing your suggestions on the pixel format, Aks9, I gave SDL a quick spin. I used the following example code:

 

https://github.com/meandmark/SDLOpenGLIntro/tree/sdl2

 

Inside GameApp.cpp I modified the context creation to use a particular version:

void GameApp::InitializeSDL(Uint32 width, Uint32 height, Uint32 flags)
{
    int error = SDL_Init(SDL_INIT_EVERYTHING); // 0 on success

    // Turn on double buffering.
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    // Set the context version number (must be done before SDL_CreateWindow).
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    // Create the window (flags must include SDL_WINDOW_OPENGL).
    mainWindow = SDL_CreateWindow("SDL2 OpenGL Example", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, width, height, flags);
    mainGLContext = SDL_GL_CreateContext(mainWindow);
}
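
To sanity-check what SDL actually created, I can query it right after SDL_GL_CreateContext (a minimal sketch - SDL_GL_GetAttribute and glGetString are standard SDL2/GL calls):

    int major = 0, minor = 0;
    SDL_GL_GetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, &major);
    SDL_GL_GetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, &minor);
    // prints both what SDL negotiated and what the driver reports
    printf("SDL context: %d.%d, driver: %s\n", major, minor, (const char*)glGetString(GL_VERSION));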

I even tried older versions (like 2.0) and NSight still insists my OpenGL version is 3.0.

 

I would try your suggestions regarding PixelFormat, Aks9, but if SDL is failing to set the version it seems to me something else might be going on.

 

I wonder if I should call shenanigans on NSight, or on my graphics hardware.. really this is so frustrating!

Edited by mikev


I answered a similar question a while ago: http://www.gamedev.net/topic/661317-help-with-using-wgl/?view=findpost&p=5182765

 

It shows all the necessary steps to properly create a context:

 

create a dummy window
select a pixelformat
create a GL rendering context
call glewInit()
destroy the GL rendering context
destroy the window

then create a new window
choose an ARB pixel format
select the pixel format
create an ARB rendering context

 

You should then be good to go - a condensed sketch of the whole sequence follows below.
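
Something like this ties the steps together (a minimal sketch - CreatePlainWindow() is a hypothetical helper that registers a class and returns a bare HWND; error checks omitted; assumes <windows.h>, <GL/glew.h> and <GL/wglew.h> are included):

HGLRC CreateARBContext(HWND* outWnd)
{
	// phase 1: dummy window + dummy context, only to load the WGL extensions
	HWND dummyWnd = CreatePlainWindow();
	HDC  dummyDC  = GetDC(dummyWnd);

	PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
		PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
		PFD_TYPE_RGBA, 32 };
	SetPixelFormat(dummyDC, ChoosePixelFormat(dummyDC, &pfd), &pfd);

	HGLRC dummyRC = wglCreateContext(dummyDC);
	wglMakeCurrent(dummyDC, dummyRC);
	glewInit();                               // loads wglChoosePixelFormatARB & co.

	wglMakeCurrent(NULL, NULL);
	wglDeleteContext(dummyRC);
	ReleaseDC(dummyWnd, dummyDC);
	DestroyWindow(dummyWnd);

	// phase 2: real window, ARB pixel format, ARB context
	HWND realWnd = CreatePlainWindow();
	HDC  realDC  = GetDC(realWnd);

	const int fmtAttribs[] = {
		WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,  WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
		WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,  WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
		WGL_COLOR_BITS_ARB, 32, WGL_DEPTH_BITS_ARB, 24, WGL_STENCIL_BITS_ARB, 8,
		0 };
	int fmt = 0; UINT count = 0;
	wglChoosePixelFormatARB(realDC, fmtAttribs, NULL, 1, &fmt, &count);
	SetPixelFormat(realDC, fmt, &pfd);        // the first and only SetPixelFormat on this window

	const int ctxAttribs[] = {
		WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
		WGL_CONTEXT_MINOR_VERSION_ARB, 2,
		WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
		0 };
	HGLRC realRC = wglCreateContextAttribsARB(realDC, 0, ctxAttribs);
	wglMakeCurrent(realDC, realRC);

	*outWnd = realWnd;
	return realRC;
}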

 

However, it could be that NSight has a bug in it...


 

> I even tried older versions (like 2.0) and NSight still insists my OpenGL version is 3.0.

 

 

What does glGetString(GL_VERSION) say?

That should return the highest GL version supported by the driver.
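
For example (a minimal sketch - any current context will do, and glGetString has been around since GL 1.0):

const char* version  = (const char*)glGetString(GL_VERSION);
const char* renderer = (const char*)glGetString(GL_RENDERER); // also tells you WHICH GPU created the context - handy on dual-GPU machines
printf("GL_VERSION = %s (%s)\n", version, renderer);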

 

The specification is clear:

> The attribute names WGL_CONTEXT_MAJOR_VERSION_ARB and WGL_CONTEXT_MINOR_VERSION_ARB request an OpenGL context supporting the specified version of the API. If successful, the context returned must be backwards compatible with the context requested.

 

 

So, if you request a GL 2.0 context, you could legitimately get a GL 4.4 compatibility profile, since it is backward compatible with 2.0.

 

P.S. My browser, or the engine that powers this site, or both in combination, are "lucid". Everything I typed was in the same font and size, but the outcome is ridiculous.

Edited by Aks9

> create a dummy window
> select a pixelformat
> create a GL rendering context
> call glewInit()
> destroy the GL rendering context
> destroy the window
>
> then create a new window
> choose an ARB pixel format
> select the pixel format
> create an ARB rendering context

I am not even sure this is 100% legitimate, although I guess that it will probably work (or, possibly, it will "work").

 

There is no good reason why creating a context should require an already existing one (there's no good reason for a couple of things related to OpenGL contexts, though). But by the letter of the rulebook, you can only legitimately call GL and WGL functions if you have a valid context, and any function pointers that you have obtained are valid only for that context. They therefore become invalid once you delete the context (although WGL guarantees, as a "non-standard feature", that function pointers for contexts with the same pixel format are all the same and interchangeable).

 

Which means that the sequence delete dummy context - delete dummy window - create new window - call WGL function to create new context is, at least in theory, bad mojo. I don't know if it matters in practice, but my context creation code (which seems to work fine) only deletes the dummy stuff after creating the real one, and only initializes all function pointers on that one; all it does with the dummy context is pull the function pointer for wglCreateContextAttribsARB. In theory, destroying a context could unload the shared library your function pointers refer to (I highly doubt that, but who can tell), or it could do some other undefined or implementation-defined stuff, like give you a 3.x compatibility context or such when you create another context (or a black screen).
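
In sketch form (hedged - dummyDC, realDC and ctxAttribs stand in for the obvious setup shown in earlier posts), the ordering I mean is:

HGLRC dummyRC = wglCreateContext(dummyDC);
wglMakeCurrent(dummyDC, dummyRC);

// the ONLY thing we take from the dummy context:
PFNWGLCREATECONTEXTATTRIBSARBPROC pCreateContextAttribs =
	(PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

HGLRC realRC = pCreateContextAttribs(realDC, 0, ctxAttribs); // real context created first...
wglMakeCurrent(realDC, realRC);                              // ...made current...
wglDeleteContext(dummyRC);                                   // ...and only then is the dummy deleted
// (all remaining GL function pointers are loaded against realRC afterwards)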

 

That said, context creation is ugly stuff; avoid it if you have any possibility of doing so. I didn't want an external dependency like SDL back then, and shared contexts also looked like an attractive thing (what a mistake!), so all in all this looked like a valid reason to write my own. Father, they do not know what they are doing, indeed.

Don't do that. Just use a library that works.

Edited by samoth


Thanks for all the suggestions and analysis, guys.

 

> What does glGetString(GL_VERSION) say?

 

 

I used this information: https://www.opengl.org/wiki/Get_Context_Info

 

My headers don't seem to include the above function, so I use this:

 

GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);

 

 

Both of them return 4 (although I haven't tried this with the SDL context initialization; I'll give that a shot). So it looks like it is creating an OpenGL 4.4 context, but I don't fully trust anything at the moment, with all these contradictory results.

 

I think I need to start more thoroughly questioning NSight and where it is reading its version information. Not sure what else it could be...

 

Could it possibly be my OpenGL headers that are somehow screwing things up?

Edited by mikev

> They shouldn't affect NSight. But are you using the latest headers anyway? You need glext.h & wglext.h https://www.opengl.org/registry/

 

 

 

I am using GLEW version 1.9.0, and including gl.h. Not exactly sure what GL version that gl.h corresponds to; I never had any issues with it before now.

Edited by mikev


 

 

You're absolutely correct. I used to create a dummy window and take responsibility for fetching the WGL functions myself, then create the second window and call the GLEW functions.

 

I need to rethink this - is calling GLEW safe on two separate contexts? I suspect so with 'normal' back buffers...

Edited by mark ds


So I finally got pissed enough to contact NVidia directly. Their response:

 

> Hi thegeneralsolution,
>
> Seems you are using Optimus system, the OGL 3.3 requirement is for Nsight in Visual Studio, but not for your sample [your sample is under 4.4 as the return value from glGetIntegerv]. That because Nsight in Visual Studio will use OGL 3.3 to render some texture, geometry, etc. Due to your Optimus system, that choose Intel GPU as Visual Studio's render device, which may only support OGL 3.0 in your machine.
>
> Please try to disable Intel GPU in BIOS, or force Visual Studio to use NV GPU as render device. That will solve your issue.
>
> Thanks
> An

 

 

It's true my system has an integrated chip alongside an NVidia chip, so this is a very interesting and promising lead. I hate when they say "That will solve your issue" though. So presumptuous!
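
For the application side, at least, NVidia documents an exported global called NvOptimusEnablement that asks the driver to put the process on the discrete GPU. A minimal sketch (note this controls which GPU my app gets, not which one Visual Studio itself uses):

// From NVidia's "Optimus Rendering Policies" guide: exporting this symbol
// from the .exe makes Optimus drivers prefer the NVidia GPU for the process.
// It goes once in any translation unit of the executable.
extern "C" {
	__declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}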

 

I will love them forever if this fixes my problem though... 

Edited by mikev


That's because every problem with nvidia on linux is optimus, and optimus alone. :)

 

At least according to google:

"optimus linux problem" --> About 4,020,000 results
