
Choosing specific GPU for OpenGL context?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

#1 Ender1618   Members   -  Reputation: 242

Posted 23 October 2013 - 03:06 PM

I have an issue with my application (Win7 64-bit, OpenGL 4.0) picking the wrong GPU on some people's machines for OpenGL acceleration, e.g. the embedded Intel HD 3000 instead of the Nvidia or ATI GPU. The HD 3000 does not support OpenGL 4.0 (AFAIK), which is my minimum requirement, so the app fails to run.

 

BTW, my app is intended to be cross platform (but for right now Windows 7 is most important, then Linux, then Mac). 

 

I am currently creating my OpenGL 4.x context with the aid of SDL 1.2 (started this code base a while back) and glew. With SDL 1.2 there is no way to enumerate the available devices (GPUs) and select one. I remember back in my DX days, device enumeration and selection was supported.

 

Does anyone know whether any of the other cross-platform OpenGL context-creation libraries, such as SDL 2.0, SFML, or GLFW, support device enumeration and device-specific GL context creation (with GLEW support)?

 

My only workaround right now is forcing the app to use the Nvidia card in the Nvidia (or ATI) control panel and turning off Intel Optimus at the BIOS level, neither of which (I think) can be automated. This is a lot to ask of a user, and is a horrid kludge.

 

Thanks for any guidance.




#2 Vilem Otte   Crossbones+   -  Reputation: 1393


Posted 23 October 2013 - 05:57 PM

Well, as far as I remember there are extensions for this: http://www.opengl.org/registry/specs/NV/gpu_affinity.txt and http://www.opengl.org/registry/specs/AMD/wgl_gpu_association.txt for WGL; these also have GLX equivalents if you're on a Unix-like operating system (for working with multi-GPU systems, like CrossFire'd GPUs). Not sure whether this helps for your case, though.


Edited by Vilem Otte, 23 October 2013 - 06:03 PM.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com


#3 Erik Rufelt   Crossbones+   -  Reputation: 3370


Posted 23 October 2013 - 10:40 PM

With optimus there is usually only a single GPU as far as application software can see, and the driver chooses what hardware to use depending on control panel settings or whether the computer is plugged in or whatever it wants.



#4 Promit   Moderators   -  Reputation: 6631


Posted 23 October 2013 - 11:01 PM

With optimus there is usually only a single GPU as far as application software can see, and the driver chooses what hardware to use depending on control panel settings or whether the computer is plugged in or whatever it wants.

This. Optimus won't show separate GPUs. You may be able to do what you want through the NVAPI but no promises.



#5 swiftcoder   Senior Moderators   -  Reputation: 9860


Posted 24 October 2013 - 09:28 AM

This. Optimus won't show separate GPUs.

However, if you specify a mode that the integrated part doesn't support, Optimus should auto-select the discrete GPU.

It might not always work - that's why the NVidia control panel lets you force one or the other.

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#6 Ender1618   Members   -  Reputation: 242


Posted 24 October 2013 - 11:43 AM

Other than the Optimus GPU-switching issue (say, for example, we just shut it off in the BIOS), do any of the cross-platform OpenGL context-creation libraries out there, such as SDL2, SFML, or GLFW, support device enumeration and device-specific GL context creation (with GLEW support, since I would like to use OpenGL 4.0 minimum, or 4.3 if available)?

 

From their online documentation it isn't obvious to me whether they support this (I haven't gone into much depth).

 

Or is it just fundamentally beyond their control?
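For reference: to my knowledge none of SDL2, SFML, or GLFW enumerates GPUs on Windows; they create the context on whatever device the OS and driver hand them. What SDL2 does let you do is demand a minimum context version up front, so an unsuitable integrated GPU fails fast instead of crashing later (and, per the suggestion above, may prompt Optimus to pick the discrete GPU). A sketch under that assumption, with GLEW initialization elided:

```cpp
// Request a core OpenGL 4.0 context via SDL2 and fail cleanly if the
// active GPU cannot provide it. Window title/size are demo placeholders.
#include <SDL2/SDL.h>
#include <cstdio>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init: %s\n", SDL_GetError());
        return 1;
    }
    // Ask for 4.0 core before creating the window/context.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window* win = SDL_CreateWindow("GL 4.0", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480,
                                       SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = win ? SDL_GL_CreateContext(win) : nullptr;
    if (!ctx) {
        // On a GPU capped below 4.0 (e.g. HD 3000) you land here: report
        // the error, or retry with a lower version / reduced feature set.
        fprintf(stderr, "No GL 4.0 context: %s\n", SDL_GetError());
    }
    // ... glewInit() and normal startup would follow here ...
    SDL_Quit();
    return ctx ? 0 : 1;
}
```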



#7 Dark Helmet   Members   -  Reputation: 173


Posted 24 October 2013 - 05:09 PM

Too bad we're talking Windows here, not Linux. There you just set up one GPU per screen, create an X connection on the appropriate screen, and create a GL context on that connection (on Nvidia at least). Pretty simple.

#8 samoth   Crossbones+   -  Reputation: 4684


Posted 24 October 2013 - 05:18 PM

 

With optimus there is usually only a single GPU as far as application software can see, and the driver chooses what hardware to use depending on control panel settings or whether the computer is plugged in or whatever it wants.

This. Optimus won't show separate GPUs. You may be able to do what you want through the NVAPI but no promises.

 

 

But, but... isn't that always the case anyway, even without optimus?

 

The only OpenGL context you can create, to my knowledge, is one that belongs to a device context. The device context, in turn, belongs to the GPU that Windows uses to render the desktop (or, with dual identical GPUs, to one or the other of them). No?





