Mantear

OpenGL Supported features detection



Greetings! I've begun adding more flexibility to my program so it can run on systems with varying degrees of OpenGL support. For example, I can now set a parameter to change the mode in which my objects are drawn (immediate mode, display list, vertex array, or vertex buffer object). I can also obtain the current system information via glGetString, so I know which OpenGL version is supported (1.4 in the case of my laptop), and I have a list of supported extensions.

However, I still don't have enough information to know exactly what is and isn't supported. I'm not nearly fluent enough in OpenGL to know which features entered the core at which revision. So if I'm testing on a 1.4 system where a feature is supported via an extension, I don't know whether that feature became core in 2.0 and therefore won't show up in the extensions list there. The same goes the other way: I don't have a good grasp of what is in the core 1.4 feature set that is only available via extensions in earlier versions, say, 1.1.

I believe there is a GLUT function that will return whether or not a given feature is supported, but I'd like to build a system that performs such checks myself, in order to become more familiar with which features are available in OpenGL and when they became available. So, finally, my main question: is there a master list somewhere of all the extensions, which OpenGL versions they apply to, and if/when they became part of the core? Thanks!
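As a starting point, the core-version half of the check can be done without any extension machinery, since the spec guarantees glGetString(GL_VERSION) begins with "&lt;major&gt;.&lt;minor&gt;". This is a minimal sketch; the helper names (parse_gl_version, gl_version_at_least) are my own, and the string would come from glGetString in a real program:

```c
#include <stdio.h>

/* Parse a version string as returned by glGetString(GL_VERSION),
 * e.g. "1.4.0 NVIDIA 96.43.01", into major/minor numbers.
 * Returns 1 on success, 0 if the string doesn't start with "major.minor". */
static int parse_gl_version(const char *version, int *major, int *minor)
{
    if (!version || sscanf(version, "%d.%d", major, minor) != 2)
        return 0;
    return 1;
}

/* Returns nonzero if the reported version is at least req_major.req_minor. */
static int gl_version_at_least(const char *version, int req_major, int req_minor)
{
    int major = 0, minor = 0;
    if (!parse_gl_version(version, &major, &minor))
        return 0;
    return major > req_major || (major == req_major && minor >= req_minor);
}
```

With this, "is feature X core here?" reduces to comparing against the version in which X was promoted, which is exactly the information the master list you're asking about would supply.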

Look into using GLEW or GLee, perhaps? I'm not sure it would address all of your needs, but it might be a good start.

As for which OpenGL version an extension was officially promoted into the core, you can always find that information on the extension's spec page.

Another program you might want to try is called "OpenGL Extensions Viewer." I'm not sure I like it too much, but it does list all of the OpenGL core functions and extensions, tells you which ones are supported on your machine, and gives you other, possibly useful info. Look into trying it if you want.

I currently use GLee to gain access to all the extensions available on the current machine. My goal is to select the optimal rendering path available, even when I move to a different machine. VBOs may be available on my development machine, but if I move the program to an older machine and only try VBOs without detecting the available features first, the program may not work; I need to be able to fall back to an older drawing method.
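The fallback chain described above can be sketched as a simple priority list. This is an illustrative skeleton only; the struct, enum, and function names are mine, and the capability flags would be filled in at startup from the version/extension checks (e.g. via GLee):

```c
/* Hypothetical capability flags, filled in once at startup from the
 * GL version string and the extension list. */
struct gl_caps {
    int has_vbo;           /* GL 1.5+, or GL_ARB_vertex_buffer_object */
    int has_vertex_array;  /* GL 1.1+ */
    int has_display_list;  /* available in classic GL contexts */
};

enum render_path {
    PATH_VBO,
    PATH_VERTEX_ARRAY,
    PATH_DISPLAY_LIST,
    PATH_IMMEDIATE
};

/* Pick the best rendering path the machine supports,
 * falling back in order of preference. */
static enum render_path choose_path(const struct gl_caps *caps)
{
    if (caps->has_vbo)          return PATH_VBO;
    if (caps->has_vertex_array) return PATH_VERTEX_ARRAY;
    if (caps->has_display_list) return PATH_DISPLAY_LIST;
    return PATH_IMMEDIATE;
}
```

Doing the selection once, up front, keeps the per-frame draw code free of capability checks.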

Is the extension's spec page you mentioned on the opengl.org website? I'll look for it. I'll also check out the Extensions Viewer. Thanks!

Thanks for the help so far.

I'm still a bit confused as to how the naming conventions work. Let me see if I understand.

When a vendor (NVIDIA, ATI, etc.) creates a new extension, they might release it as "GL_NV_point_sprite". If the extension becomes widely accepted and is expected to become part of the next core release, it then becomes "GL_ARB_point_sprite". Once the next release comes out, the feature is part of the core and no longer needs the ARB extension, but the extension is kept (at least for a few versions) to aid compatibility. Is any of this correct?

How then does an extension get the "GL_EXT_*" naming convention? Any explanation of this would be greatly appreciated.

Quote:
Original post by Mantear
How then does an extension get the "GL_EXT_*" naming convention? Any explanation of this would be greatly appreciated.

My understanding is that EXT_* is for multi-vendor extensions, i.e. ones supported by more than one vendor. An extension finally becomes ARB_* after it is approved by the OpenGL Architecture Review Board (ARB).

Quote:
Original post by Mantear
I'm still a bit confused as to how the naming conventions work.

There's a wonderful contribution from Myopic Rhino, "Moving Beyond OpenGL 1.1 for Windows," whose second page discusses extensions in some detail.

He says it far more succinctly and authoritatively than I can, so I'll just refer you there. [smile]
-jouley

Hmmm, this could get tricky. As extensions get promoted, the previous extension seems to become unsupported. If there's a "GL_NV_<some_extension>" that ATI picks up, it becomes "GL_EXT_<some_extension>", and the previous one goes away. However, the EXT version might have a different implementation or syntax. So using new extensions can be like trying to hit a moving target.

Right now my versioning scheme is as follows. If there's a feature I want to use, say VBOs, I first check whether the major/minor OpenGL version, obtained via glGetString(GL_VERSION), includes it as part of the core (1.5 or above for VBOs). If I'm running on an older system, say 1.4, I search for "GL_ARB_vertex_buffer_object" in the buffer returned by glGetString(GL_EXTENSIONS). If that string is not found, I have to conclude VBOs aren't usable and fall back to a different rendering path.

It's possible there was once a "GL_<EXT/NV/ATI>_vertex_buffer_object" that some system out there supports, which could let my program use VBOs, but which is no longer well supported (since "GL_ARB_vertex_buffer_object" overtook it, and the ARB extension will itself eventually go away now that the feature is part of the core). So unless I want to get deep into legacy video card support, I'll have to admit defeat and not use VBOs on some older cards that could have supported them in some fashion.
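One small pitfall in the extension-string search above: a naive strstr() can false-positive, because one extension name may be a prefix of another (e.g. searching for "GL_ARB_vertex_buffer" would match inside "GL_ARB_vertex_buffer_object"). A minimal sketch of a whole-token check, assuming the space-separated list format of glGetString(GL_EXTENSIONS) (the function name is mine):

```c
#include <string.h>

/* Check for an extension name in the space-separated list returned by
 * glGetString(GL_EXTENSIONS). Only matches whole tokens, so a name that
 * is a prefix of another extension's name won't match by accident. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;

    while ((p = strstr(p, name)) != NULL) {
        int starts_token = (p == extensions) || (p[-1] == ' ');
        int ends_token   = (p[len] == '\0') || (p[len] == ' ');
        if (starts_token && ends_token)
            return 1;
        p += len;  /* partial match; keep scanning the rest of the list */
    }
    return 0;
}
```

For the ARB-vs-vendor-variant problem you describe, this could be called once per candidate name ("GL_ARB_...", then "GL_EXT_...", etc.), though as you note the older variants may differ in syntax, not just name.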

Please let me know if I have a misunderstanding of any of this, or if there's a better way to determine what OpenGL features I have available to use. Thanks!

I have an NVIDIA 7-series card, and I have a problem with The Chronicles of Riddick:
it's asking me for OpenGL 1.3+ to run.
Any idea how to solve that problem?
