jawadx

Checking if an extension is hardware accelerated?


Recommended Posts

Hey all, I just found out that my poor old GeForce 2 card doesn't support hardware accelerated vertex shaders and actually runs them through software emulation in the NVIDIA drivers, which is a real shame. However, after some major searching, I couldn't find anything that would tell me how to check whether an extension runs in software emulation or is hardware accelerated, in a manner similar to the DirectX caps checks. So I'll put the question here: is it possible in GL to check if an extension runs in software emulation, and if so, how do you do it? Cheers, Jawad
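For context, the closest check OpenGL itself offers is whether an extension is advertised at all, which says nothing about acceleration. A minimal sketch in C, matching whole tokens since a plain strstr() can hit inside a longer extension name:

#include <string.h>
#include <GL/gl.h>

/* Returns 1 if `name` appears as a whole token in the extension string.
 * Note this only tells you the extension exists, not how it is implemented. */
int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);

    while (ext) {
        const char *hit = strstr(ext, name);
        if (!hit)
            return 0;
        /* match whole tokens only, so "GL_ARB_vertex_program" does not
         * match inside "GL_ARB_vertex_program2" */
        if ((hit == ext || hit[-1] == ' ') &&
            (hit[len] == ' ' || hit[len] == '\0'))
            return 1;
        ext = hit + len;
    }
    return 0;
}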

OrangyTang

OpenGL deliberately doesn't let you have this information. All you can find out is whether the extension is supported or not.

This is actually a good thing, as features can often be emulated but still perform at full speed (in fact, on some chips emulation can be faster than doing it in hardware, particularly on the integrated Intel chips, where everything uses the same memory).

Bottom line is you only care whether it's "fast enough". Which means either running some quick benchmarks when your app starts up (which I always think is overkill) or just having an option somewhere that your users can toggle to improve performance.
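A rough sketch of that startup-benchmark idea, assuming a hypothetical draw_test_scene() callback standing in for your own rendering code (gettimeofday() is POSIX; Windows code would use QueryPerformanceCounter instead):

#include <sys/time.h>
#include <GL/gl.h>

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* Average seconds per frame over `frames` runs of `draw`. */
double time_test_frames(void (*draw)(void), int frames)
{
    int i;
    double start;

    glFinish();                 /* drain any pending GL work first */
    start = now_seconds();
    for (i = 0; i < frames; ++i)
        draw();
    glFinish();                 /* wait until the GPU has really finished */
    return (now_seconds() - start) / frames;
}

/* usage, e.g. enable the shader path only if it sustains ~30 fps:
 *   use_shaders = time_test_frames(draw_test_scene, 30) < 1.0 / 30.0;  */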

jawadx

Oh well, that's a shame. Since I know that GeForce 2s in particular don't have hardware accelerated vertex shaders, I guess I can adjust my app according to the vendor information.
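Something like this GL_RENDERER check is what I have in mind; a sketch only, since the exact substrings are guesses on my part and string matching like this is inherently fragile:

#include <string.h>
#include <GL/gl.h>

/* Assumes NVIDIA's renderer strings contain "GeForce2" / "GeForce4 MX"
 * for the boards that emulate vertex shaders in software. */
int vertex_shaders_likely_emulated(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    if (!renderer)
        return 1;   /* no GL context yet; assume the worst */
    return strstr(renderer, "GeForce2")    != NULL ||
           strstr(renderer, "GeForce4 MX") != NULL;
}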

Thanks for the help though.

zedzeek

On NVIDIA cards, if the extension is in the extension string then it's done in hardware.

Though OrangyTang is correct: speed is what counts, not whether it's done in hardware. For example, I've tried some game demos on my 2.0 GHz Athlon with Mesa and they run at ~20 fps; with multi-core CPUs in the future this is only going to improve.

deavik

Quote:
Original post by zedzeek
On NVIDIA cards, if the extension is in the extension string then it's done in hardware.

Actually, it isn't that simple. The prime example would be the one the OP put forward: ARB_vertex_shader is part of the extension string of my GF4 MX (and, I am sure, his GF2), but as he correctly says, vertex shaders are not run on the GPU. Having said that, I have always found the emulated ones to be very fast for all my purposes.

There are other things as well: VBO is said to be supported on my (PCI) card, but it never runs faster than standard vertex arrays. (It's not my code; I asked people on this very forum to test some time back [smile].) There's something funky there as well, but again it runs at an acceptable frame rate, so I guess you can use it.

Similarly, for OpenGL core features you have to check a) the OpenGL version and b) the extension string for the corresponding extension before you start using them.
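A minimal sketch of that version check, parsing the "major.minor" prefix that GL_VERSION is guaranteed to start with:

#include <stdio.h>
#include <GL/gl.h>

int gl_version_at_least(int want_major, int want_minor)
{
    int major = 0, minor = 0;
    const char *ver = (const char *)glGetString(GL_VERSION);

    if (!ver || sscanf(ver, "%d.%d", &major, &minor) != 2)
        return 0;
    return major > want_major ||
           (major == want_major && minor >= want_minor);
}

/* e.g. use the core buffer object API only on GL 1.5+, otherwise fall
 * back to the ARB_vertex_buffer_object entry points */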

zedzeek

Are you sure? I just checked the NVIDIA GL specs and they say NV1x supports it (in hardware). Others, e.g. ARB_occlusion_query, whilst supported, are done in hardware.

jawadx

Quote:
Original post by mattst88
What would you do differently if you could tell which extensions are done in software?

Well, for my particular application I'm rendering a 256x256 terrain. Going through the fixed-function pipeline I get around 20-60 fps. However, using a simple vertex shader, which just performs the vertex transform and outputs colour and texture coordinates, I get 3-6 fps, which isn't good. I would still like to use vertex shaders where they are hardware accelerated, hence why I'd like to check.
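For reference, a sketch of the kind of minimal vertex shader I mean, in the GLSL of the day embedded as a C string: just the fixed-function transform plus pass-through colour and texture coordinates (a hypothetical reconstruction, not my exact code):

/* fixed-function transform + pass-through colour and tex coords */
static const char *terrain_vs =
    "void main()\n"
    "{\n"
    "    gl_Position    = ftransform();\n"
    "    gl_FrontColor  = gl_Color;\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "}\n";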

Quote:
Original post by zedzeek
On NVIDIA cards, if the extension is in the extension string then it's done in hardware. Though OrangyTang is correct: speed is what counts, not whether it's done in hardware. For example, I've tried some game demos on my 2.0 GHz Athlon with Mesa and they run at ~20 fps; with multi-core CPUs in the future this is only going to improve.

GeForce FAQ

This is how I know it's software emulated on the GeForce 2; check Question 26.

phantom

Quote:
Original post by deavik
There are other things as well: VBO is said to be supported on my (PCI) card, but it never runs faster than standard vertex arrays. (It's not my code; I asked people on this very forum to test some time back [smile].) There's something funky there as well, but again it runs at an acceptable frame rate, so I guess you can use it.

Well, it's not really funky; all it means is that the driver is keeping the data in system RAM and streaming it as it would for a vertex array. Perfectly acceptable if the card can't deal with the data in VRAM [smile]
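For illustration, a sketch of ordinary ARB_vertex_buffer_object use: the usage parameter is only a hint, and a driver that can't source vertices from VRAM is free to keep the buffer in system memory, exactly as described above. (On Windows the ARB entry points have to be fetched with wglGetProcAddress first.)

#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_vertex_buffer_object declarations */

/* Create and fill a VBO. GL_STATIC_DRAW_ARB is a hint, not a promise:
 * the driver decides where the buffer actually lives. */
GLuint upload_vertices(const void *verts, GLsizeiptrARB bytes)
{
    GLuint vbo = 0;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, verts, GL_STATIC_DRAW_ARB);
    return vbo;
}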

deavik

Quote:
Original post by phantom
Well, it's not really funky; all it means is that the driver is keeping the data in system RAM and streaming it as it would for a vertex array. Perfectly acceptable if the card can't deal with the data in VRAM [smile]

I have no idea, but I wonder if that is indeed correct; that would mean the 64 MB of VRAM on my card is only used for textures [crying]. It's the same "no performance boost" with PBO as well, so most probably you're right and buffer objects are being created in system memory.
