A question about GLEW Windows setup

24 comments, last by NicoG 12 years, 8 months ago
Brother Bob, I guess I misunderstood you.

capricorn, you say that it is implementation specific. I am speaking in terms of what I have experienced.
Sure, keep multiple sets of pointers because according to MSDN, you should.

As for GLEW, IIRC, when I set up the multisampled pixel format on my window and called glewInit again, it farted. Since glewInit did not like being called twice, I commented out the call for my multisampled window.
There is the multiple-context GLEW (GLEW MX), but I didn't bother with it.
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, FALSE, inverse_matrix);

[quote]
capricorn, you say that it is implementation specific. I am speaking in terms of what I have experienced.
Sure, keep multiple sets of pointers because according to MSDN, you should.
[/quote]

I didn't mean to offend, and I apologize if that is the case. I as well have not encountered any problems with contexts in the past, but only recently I realized that it's more luck than anything.

What I meant doesn't only concern MSDN's wglGetProcAddress documentation, or Microsoft at all. Here's an excerpt from the WGL_ARB_create_context spec (exactly the same holds for GLX_ARB_create_context):


[quote]
All OpenGL 3.2 implementations are required to implement the core profile, but implementation of the compatibility profile is optional.

...

The legacy context creation routines can only return OpenGL 3.1 contexts if the GL_ARB_compatibility extension is supported, and can only return OpenGL 3.2 or greater contexts implementing the compatibility profile. This ensures compatibility for existing applications. However, 3.0-aware applications are encouraged to use wglCreateContextAttribsARB instead of the legacy routines.
[/quote]
This implies that drivers do not even have to expose modern GL functionality when you create a context via wglCreateContext (or glXCreateContext). Yes, it seems that currently both major vendors create a compatibility profile of the maximum supported GL version for legacy contexts, so there seems to be nothing to worry about. Furthermore, nVidia clearly states that it will continue to do so in the future. I've seen no similar claims from AMD.

Here is what happens if I simulate the situation where there is no compatibility profile, so that wglCreateContext creates a 2.1 context. I manually create two contexts, 2.1 and 4.1, and try to fetch the glDrawArraysInstanced function from both. Here's the code (written in D2):



auto dc = GetDC(hwnd);

int[] attribs21 =
[
    WGL_CONTEXT_MAJOR_VERSION_ARB, 2,
    WGL_CONTEXT_MINOR_VERSION_ARB, 1,
    0,
];

int[] attribs41 =
[
    WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
    WGL_CONTEXT_MINOR_VERSION_ARB, 1,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0,
];

auto rc21 = wglCreateContextAttribsARB(dc, null, attribs21.ptr);
auto rc41 = wglCreateContextAttribsARB(dc, null, attribs41.ptr);

enum testFunc = "glDrawArraysInstanced";

// Try to fetch glDrawArraysInstanced from the 2.1 context
wglMakeCurrent(dc, rc21);
writeln("GL", to!string(cast(char*)glGetString(GL_VERSION)), ", ", testFunc, " address == ", wglGetProcAddress(testFunc));

// Try to fetch glDrawArraysInstanced from the 4.1 context
wglMakeCurrent(dc, rc41);
writeln("GL", to!string(cast(char*)glGetString(GL_VERSION)), ", ", testFunc, " address == ", wglGetProcAddress(testFunc));


Here are the results (GeForce GTS 450 with 275.33 driver):


[quote]
GL2.1.2, glDrawArraysInstanced address == 0
GL4.1.0, glDrawArraysInstanced address == 69A31840
[/quote]

As you can see, should the driver create a 2.1 context instead of a compatibility profile of a modern one, blindly loading all GL functions at legacy context creation stops working. I suspect similar behavior is possible for functions removed from modern versions, though on my setup I get the same address for, say, glVertexPointer; it simply behaves as it should on 3.1+ (i.e. generates GL_INVALID_OPERATION). The reason I think so is that technically, keeping the same function pointer in this case must come with an overhead (e.g. a version check), albeit a currently negligible one. The more functions get removed, the more noticeable it will become for programs that still use them (i.e. today's software).

Again, at the moment I believe both AMD and nVidia create a compatibility profile for legacy contexts. How that will hold up in the future is entirely up to them. Personally, I think one is better off counting on the spec rather than on individual company claims, because a company can't override the specification for, say, "marketing reasons" and stay compliant. If this sounds like a mere matter of preference, then I prefer not to have a headache in a year or two from unexpected bugs on new hardware/drivers.
WBW, capricorn
No, you didn't offend me.
My point is that there is one single driver and it will always return the same pointer no matter what hw-accelerated pixel format you choose.

I'm not sure why you would create a GL 2.1 context and call wglGetProcAddress("glDrawArraysInstanced"). That doesn't make any sense. It would be like calling wglGetProcAddress("EatCrap") and expecting to get a valid pointer.

[quote]
No, you didn't offend me.
[/quote]

I'm glad to hear it. English is not my native language, and I can't say I'm good at getting the tone of my messages right, so I was afraid I was ruining the discussion.


[quote]
My point is that there is 1 single driver and it will always return the same pointer no matter what hw accelerated pixel format you chose.
[/quote]
The issue I'm talking about is not related to pixel formats at all. But the driver is a whole different story. You say there is one single driver. But in fact, there is one current driver version. What will the new driver version do in half a year? The driver for a next OS version? What will the driver from the same manufacturer, but made for a different platform, do? How can you be sure it'll behave exactly as it does now, considering that specification allows it to behave differently?


[quote]
I'm not sure why you would create a GL 2.1 context and call wglGetProcAddress("glDrawArraysInstanced"). That doesn't make any sense. It would be like calling wglGetProcAddress("EatCrap") and expecting to get a valid pointer.
[/quote]
Yeah, I can see I wasn't clear enough. I'll try to elaborate. The common practice for using modern GL is:

1) create a legacy context via wglCreateContext
2) load all GL function pointers (including "non-legacy" ones, i.e. 3.0+ functions)
3) create modern GL context
4) use modern GL functionality

According to the specs, there is a flaw in this approach: drivers don't have to provide modern GL functionality for legacy contexts. They may provide it, and they currently do, but it is not a requirement. In other words, wglCreateContext (or glXCreateContext, or whatever it is on Mac) may return either a 3.1+ compatibility profile context or a 3.0 (or even earlier) one.

With my test case I tried to demonstrate what would happen if the driver didn't provide a compatibility profile: step (2) from the list above wouldn't work as it should, because not all GL function pointers would be loaded. I took version 2.1 merely as a common pre-GL3 version; I might as well have taken 3.0 and some 3.3 function instead.

Today's drivers do implement the compatibility profile, which means they can return modern contexts from legacy context creation calls and thus allow step (2) to work. Maybe they'll continue doing so in the future, maybe they won't. nVidia currently claims it will; AMD, I don't know; other vendors, I don't know. Sticking with the specs (and loading functions according to GL version) would make the developer a bit (in my case, a lot) happier if a driver vendor drops the compatibility profile at some point.
WBW, capricorn

[quote]
The issue I'm talking about is not related to pixel formats at all. But the driver is a whole different story. You say there is one single driver. But in fact, there is one current driver version. What will the new driver version do in half a year? The driver for a next OS version? What will the driver from the same manufacturer, but made for a different platform, do? How can you be sure it'll behave exactly as it does now, considering that specification allows it to behave differently?
[/quote]


That's fine. Code your program to handle the situation where you get a different set of pointers for different pixel formats; that is what everyone is supposed to be doing anyway. I think you got confused when I said that there is one driver and, for hw-accelerated pixel formats, you get the same pointer. I was only speaking in terms of my own observations.

[quote]
According to the specs, there is a flaw in this approach. The drivers don't have to provide modern GL functionality for legacy contexts
[/quote]

That's not a flaw. The spec is always right.
This is how I create my context for OpenGL 3:



NLContextWin32::NLContextWin32(NLWindowHandle parent, NLOpenGLSettings settings)
    : NLIPlatformContext(parent, settings)
{
    int pf = 0;
    PIXELFORMATDESCRIPTOR pfd = {0};
    OSVERSIONINFO osvi = {0};
    osvi.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);

    // Obtain the HDC for this window.
    if (!(m_hdc = GetDC((HWND)parent)))
    {
        NLError("[NLContextWin32] GetDC() failed.");
        throw NLException("GetDC() failed.", true);
    }

    // Obtain the Windows version.
    if (!GetVersionEx(&osvi))
    {
        NLError("[NLContextWin32] GetVersionEx() failed.");
        throw NLException("[NLContextWin32] GetVersionEx() failed.");
    }

    // Describe the pixel format we want.
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = settings.BPP;
    pfd.cDepthBits = settings.BPP;
    pfd.iLayerType = PFD_MAIN_PLANE;

    // When running under Windows Vista (version 6.0) or later, support desktop
    // composition. This doesn't really apply when running in full-screen mode.
    // Note: this flag must be set before the pixel format is chosen and set,
    // or it has no effect.
    if (osvi.dwMajorVersion >= 6)
        pfd.dwFlags |= PFD_SUPPORT_COMPOSITION;

    // Get a pixel format based on our settings.
    pf = ChoosePixelFormat(m_hdc, &pfd);

    // Set the pixel format.
    if (!SetPixelFormat(m_hdc, pf, &pfd))
    {
        NLError("[NLContextWin32] SetPixelFormat() failed.");
        throw NLException("[NLContextWin32] SetPixelFormat() failed.");
    }

    // Verify that this OpenGL implementation supports the extension we need.
    std::string extensions = wglGetExtensionsStringARB(m_hdc);
    if (extensions.find("WGL_ARB_create_context") == std::string::npos)
    {
        NLError("[NLContextWin32] Required extension WGL_ARB_create_context is not supported.");
        throw NLException("[NLContextWin32] Required extension WGL_ARB_create_context is not supported.");
    }

    // Only context attributes belong here. Pixel-format attributes such as
    // WGL_DRAW_TO_WINDOW_ARB or WGL_COLOR_BITS_ARB are for
    // wglChoosePixelFormatARB and would make wglCreateContextAttribsARB fail.
    int attribList[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, settings.MAJOR,
        WGL_CONTEXT_MINOR_VERSION_ARB, settings.MINOR,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };

    // First try creating an OpenGL context with the requested version.
    if (!(m_hglrc = wglCreateContextAttribsARB(m_hdc, 0, attribList)))
    {
        // Fall back to a minor version of 0 (e.g. an OpenGL 3.0 context).
        attribList[3] = 0;
        if (!(m_hglrc = wglCreateContextAttribsARB(m_hdc, 0, attribList)))
        {
            NLError("[NLContextWin32] wglCreateContextAttribsARB() failed for OpenGL 3 context.");
            throw NLException("[NLContextWin32] wglCreateContextAttribsARB() failed for OpenGL 3 context.", true);
        }
    }

    if (!wglMakeCurrent(m_hdc, m_hglrc))
    {
        NLError("[NLContextWin32] wglMakeCurrent() failed for OpenGL 3 context.");
        throw NLException("[NLContextWin32] wglMakeCurrent() failed for OpenGL 3 context.");
    }

    // Load wglSwapIntervalEXT and apply the vsync setting.
    typedef BOOL (APIENTRY * PFNWGLSWAPINTERVALEXTPROC)(int);
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        reinterpret_cast<PFNWGLSWAPINTERVALEXTPROC>(wglGetProcAddress("wglSwapIntervalEXT"));

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(settings.VSYNC ? 1 : 0);
    else
        NLWarning("[NLContextWin32] Cannot load wglSwapIntervalEXT");
}


A word of advice:
Don't use GLEW for OpenGL 3.x core contexts. It causes GL errors to appear, because its initialization queries extensions in a way that was removed from core profiles.
Use gl3w instead:
https://github.com/skaslev/gl3w/wiki
If you say "pls", because it is shorter than "please", I will say "no", because it is shorter than "yes"
http://nightlight2d.de/

This topic is closed to new replies.
