OpenGL Windows OpenGL and choosing from multiple video cards

I've run into a small issue on one machine with two video cards. The secondary card is used mainly to run web solutions alongside the high-end stuff running on the main NVidia card. For some reason, no matter how I initialize the OpenGL window, it uses the low-end card and its drivers. I assume this is because that card is also OpenGL compatible and was installed before the NVidia card, so its drivers show up first in the registry.

I know this is mainly a user/OS-side driver/config issue, but I'd love to create a user-friendly way around it: simply let users choose the display driver from the available ones if the default doesn't suit their needs.

So, is there a way to force an OpenGL Windows application to choose a different video card/driver than the one Windows defaults to?

I didn't manage to find any articles or topics about this, but feel free to point me to one if it has already been discussed elsewhere.

There are vendor- and platform-specific extensions that can do this for you. They're usually only available in workstation drivers, and unfortunately not in the drivers for desktop versions of the cards; for example [url=""]WGL_NV_gpu_affinity[/url]. I'm not sure whether that actually works for multiple cards from different vendors, though.
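A minimal sketch of how that affinity extension is used, assuming a Quadro-class driver that exposes it. The entry-point names and types come from the WGL_NV_gpu_affinity spec; `CreateContextOnGpu` is a hypothetical helper, and it assumes an ordinary GL context is already current so that `wglGetProcAddress` can resolve the functions:

```c
#include <windows.h>
#include <GL/gl.h>

/* Handle and entry-point types from the WGL_NV_gpu_affinity spec. */
DECLARE_HANDLE(HGPUNV);
typedef BOOL (WINAPI *PFNWGLENUMGPUSNVPROC)(UINT iGpuIndex, HGPUNV *phGpu);
typedef HDC  (WINAPI *PFNWGLCREATEAFFINITYDCNVPROC)(const HGPUNV *phGpuList);

HGLRC CreateContextOnGpu(UINT gpuIndex)
{
    PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return NULL;                       /* extension not supported */

    HGPUNV gpu;
    if (!wglEnumGpusNV(gpuIndex, &gpu))
        return NULL;                       /* no GPU at that index */

    HGPUNV gpuList[2] = { gpu, NULL };     /* NULL-terminated GPU list */
    HDC affinityDC = wglCreateAffinityDCNV(gpuList);

    /* Plain pixel format setup on the affinity DC. */
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(affinityDC, ChoosePixelFormat(affinityDC, &pfd), &pfd);

    /* Contexts created on an affinity DC render on the chosen GPU. */
    return wglCreateContext(affinityDC);
}
```

Note that an affinity context cannot render to a window directly; the usual pattern is to render offscreen on the chosen GPU and blit the result.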

Normally it uses (always?) the adapter that you have set as the primary monitor in the display control panel. As far as I know there's no reliable way to override that. It's rather annoying, though: you can select one monitor as primary, create an OpenGL context (which will use that monitor's card), then go into the control panel and switch which monitor is primary while that context is running, and afterward create one more OpenGL context, and you end up with one context on each card.

I've actually tested that on my system (Windows 7 64-bit with one AMD and one NVidia card) and it works, but it's a pain and not really a viable solution, though I guess you could change the primary monitor programmatically if you just want it for yourself. Since it's that simple, there's _probably_ a way to hack it without changing the primary monitor too, but I don't know how.
Also, it's definitely not guaranteed to work on other configurations. For example, I don't know what happens if the same driver controls both cards; in my case two completely different drivers controlled the two cards.
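Changing the primary monitor programmatically, as suggested above, could be sketched with `ChangeDisplaySettingsEx`. This is an untested sketch: the primary monitor must sit at desktop position (0,0), so every display is repositioned relative to the new primary, the changes are queued with `CDS_NORESET`, and then applied in one go. Device names like `\\.\DISPLAY2` come from `EnumDisplayDevices`:

```c
#include <windows.h>
#include <wchar.h>

BOOL MakePrimary(const WCHAR *deviceName)   /* e.g. L"\\\\.\\DISPLAY2" */
{
    DEVMODEW dm = { 0 };
    dm.dmSize = sizeof(dm);
    if (!EnumDisplaySettingsW(deviceName, ENUM_CURRENT_SETTINGS, &dm))
        return FALSE;

    /* Offset by which every monitor must shift so the target lands at (0,0). */
    LONG dx = dm.dmPosition.x, dy = dm.dmPosition.y;

    DISPLAY_DEVICEW dd = { sizeof(dd) };
    for (DWORD i = 0; EnumDisplayDevicesW(NULL, i, &dd, 0); ++i) {
        DEVMODEW mode = { 0 };
        mode.dmSize = sizeof(mode);
        if (!EnumDisplaySettingsW(dd.DeviceName, ENUM_CURRENT_SETTINGS, &mode))
            continue;
        mode.dmPosition.x -= dx;
        mode.dmPosition.y -= dy;
        mode.dmFields = DM_POSITION;
        DWORD flags = CDS_UPDATEREGISTRY | CDS_NORESET;  /* queue, don't apply */
        if (wcscmp(dd.DeviceName, deviceName) == 0)
            flags |= CDS_SET_PRIMARY;
        ChangeDisplaySettingsExW(dd.DeviceName, &mode, NULL, flags, NULL);
    }
    /* Apply all queued changes at once. */
    return ChangeDisplaySettingsExW(NULL, NULL, NULL, 0, NULL)
           == DISP_CHANGE_SUCCESSFUL;
}
```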

Aren't you supposed to do this by calling EnumDisplayDevices to get the list of video cards and then using CreateDC to create HDCs on those devices? Then you can use wglChoosePixelFormat (which takes the HDC) to get a render context of the right type on the right device?
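For reference, a minimal sketch of that enumeration step (`CreateDCForAdapter` is a hypothetical helper name; as the next reply notes, a context created on such a DC may still end up on the primary adapter):

```c
#include <windows.h>
#include <stdio.h>

HDC CreateDCForAdapter(DWORD adapterIndex)
{
    DISPLAY_DEVICEW dd = { sizeof(dd) };
    if (!EnumDisplayDevicesW(NULL, adapterIndex, &dd, 0))
        return NULL;  /* no adapter at this index */

    /* DeviceName is e.g. "\\.\DISPLAY2", DeviceString the adapter's name. */
    wprintf(L"Adapter %lu: %s (%s)\n",
            adapterIndex, dd.DeviceString, dd.DeviceName);

    /* A DC bound to that specific display device. */
    return CreateDCW(dd.DeviceName, dd.DeviceName, NULL, NULL);
}
```

The resulting HDC would then go through the usual pixel-format selection and `wglCreateContext` path.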

[quote name='Katie' timestamp='1316243985' post='4862701']
Aren't you supposed to do this by calling EnumDisplayDevices to get the list of video cards and then using CreateDC to create HDCs on those devices? Then you can use wglChoosePixelFormat (which takes the HDC) to get a render context of the right type on the right device?
[/quote]
That doesn't work; it still creates the OpenGL context for the primary adapter on every system I've tried it on (some drivers won't even allow a GL context to be created on such a DC). I've read that if you have two separate ATI cards in the same machine it works by simply creating the window on the correct monitor, but I've never had two ATI cards, so I haven't tried it. I have tried with two NVidia cards and with one NVidia + one ATI in almost every way I could think of, and it never worked.

I think that is because opengl32.dll sends calls to the real driver, which is written by the vendor. So if your primary monitor is on the NVidia card, it is the NVidia GL driver that executes the commands, and that driver certainly isn't made for your secondary card if it's from another vendor.

If both cards are from the same vendor (AMD or NVidia), then it should work; they have some "special" coding in their drivers to handle multi-card situations.

If they are different, then I think the second card just gets the default Microsoft software implementation (opengl32.dll). Am I correct?

Let me bump this thread, because it looks very similar to the issue that I'm struggling with at the moment (see [url=""]this gamedev[/url] thread).

In the context of switchable graphics, I end up with the problem that opengl32.dll uses the wrong device when an accelerated window is created with CreateWindow. Although the display adapter (in my problematic case a Mobile Intel 4 Express Chipset) seems to be available with support for OpenGL 2.1 (I've checked this with [url=""]OpenGL Extension Viewer 4.0[/url]), the GL_RENDERER used for my application is Microsoft's generic GDI renderer.
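A quick way to verify which implementation actually backs a context is to query the renderer strings once the context is current; "GDI Generic" indicates Microsoft's unaccelerated software fallback rather than the vendor driver:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Call with an OpenGL context current on the thread. */
void PrintRendererInfo(void)
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}
```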

First idea, of course: driver problem. On closer inspection, though, it looks like the system simply chooses the wrong candidate of the two (Intel vs. GDI). An appropriate solution would therefore be to choose the display device explicitly instead of using the default one. Well, as can be read in this thread, that is not so easy...

I stumbled across [url=""]this[/url] thread, which describes a (hacky) way to choose the device manually. Unfortunately it did not work for me, and I'm not really convinced that this solution would be robust even if it [i]did[/i] work, but it's the best I've found so far.

Another post on [url=""]stackoverflow[/url] sums up ideas also expressed in this thread.

It seems a stable solution for this issue must exist: the [url=""]OpenGL Extension Viewer 4.0[/url] I mentioned above is able to switch between different display adapters, show each one's OpenGL capabilities, and(!) run rendering tests. I wonder how that is possible!

Any ideas?
