
Checking if an extension is hardware accelerated?


Hey all, I just found out that my poor old GeForce 2 card doesn't support hardware-accelerated vertex shaders and actually runs them through software emulation in the NVIDIA drivers, which is a real shame. After some major searching, I couldn't find anything that would tell me how to check whether an extension runs in software emulation or is hardware accelerated, in a similar manner to the DirectX caps checks.

So I'll put the question here: is it possible in GL to check if an extension runs in software emulation, and if so, how do you do it?

Cheers,
Jawad
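
For context, the standard way to test for an extension in (pre-3.0) OpenGL is to search the GL_EXTENSIONS string. As the replies below point out, this only tells you the extension is exposed, not whether it is hardware accelerated. A minimal sketch in C (the extension name in the usage comment is just an example):

#include <GL/gl.h>
#include <string.h>

/* Returns 1 if 'name' appears as a complete token in the GL_EXTENSIONS
   string (requires a current GL context). Note that this only tells you
   the driver exposes the extension, not whether it is hardware accelerated. */
int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);

    while (ext && (ext = strstr(ext, name)) != NULL) {
        /* Make sure we matched a whole name, not a prefix of a longer one. */
        if (ext[len] == ' ' || ext[len] == '\0')
            return 1;
        ext += len;
    }
    return 0;
}

/* Usage example: if (has_extension("GL_ARB_vertex_program")) { ... } */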

OpenGL deliberately doesn't give you this information. All you can query is whether the extension is supported or not.

This is actually a good thing, as features can often be emulated but still perform at full speed (in fact, on some chips emulation can be faster than doing it in hardware - particularly on the integrated Intel chips, where everything uses the same memory).

Bottom line is you only care whether it's "fast enough". Which means either running some quick benchmarks when your app starts up (which I always think is overkill) or just having an option somewhere that your users can toggle to improve performance.
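
As an illustration of the "quick benchmark at startup" idea (a sketch only; the frame count and the two render callbacks are hypothetical placeholders):

#include <GL/gl.h>
#include <time.h>

/* Hypothetical render callbacks provided elsewhere in the application. */
void draw_terrain_fixed_function(void);
void draw_terrain_with_shader(void);

/* Wall-clock time in seconds (C11). */
static double now_seconds(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (double)ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Draw the scene 'frames' times with the given path and return the elapsed
   time. glFinish() makes the GPU complete all queued work, so the measurement
   isn't just the cost of issuing the commands. */
static double time_path(void (*draw)(void), int frames)
{
    double start = now_seconds();
    for (int i = 0; i < frames; ++i)
        draw();
    glFinish();
    return now_seconds() - start;
}

/* Returns 1 if the shader path should be the default on this machine. */
int benchmark_prefers_shaders(void)
{
    const int frames = 50; /* arbitrary benchmark length */
    return time_path(draw_terrain_with_shader, frames) <=
           time_path(draw_terrain_fixed_function, frames);
}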

Oh well, that's a shame. I guess since I know that GeForce2s in particular don't have hardware-accelerated vertex shaders, I can adjust my app according to the vendor information.

Thanks for the help though.
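
A sketch of that kind of vendor/renderer sniffing (the substrings checked are examples only; renderer strings vary between drivers, so the result should be treated as a default rather than a hard rule):

#include <GL/gl.h>
#include <string.h>

/* Crude default: turn the vertex-shader path off when the GL_RENDERER string
   looks like a GeForce2-class chip. Requires a current GL context. */
int default_to_vertex_shaders(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    if (renderer && (strstr(renderer, "GeForce2") ||
                     strstr(renderer, "GeForce 2")))
        return 0; /* vertex shaders are emulated in software on these chips */
    return 1;
}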

What would you do differently if you could tell which extensions are done in software?

On NVIDIA cards, if the extension is in the extension string then it's done in hardware.
Though OrangyTang is correct: speed is what counts, not whether it's done in hardware. For example, I've tried out some game demos on my 2.0 GHz Athlon with Mesa and they run at ~20 fps, and with multi-core CPUs in the future this is only going to improve.

Quote:
Original post by zedzeek
On NVIDIA cards, if the extension is in the extension string then it's done in hardware.

Actually, it isn't that simple. The prime example would be the one the OP put forward - ARB_vertex_shader is part of the extension string of my GF4MX (and I am sure his GF2 too), but as he correctly says, vertex shaders are not run on the GPU. Having said that, I have always found the emulated ones to be very fast for all my purposes.

There are other things as well: VBO is said to be supported on my (PCI) card, but it never runs faster than standard vertex arrays. (It's not my code; I asked people on this very forum to test some time back [smile].) There's something funky there as well - but again it runs at an acceptable frame rate, so I guess you can use it.

Similarly, for OpenGL core features, you have to check a) the OpenGL version and b) the extension string for the corresponding extension before you start using them.
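
A sketch of that kind of combined check (parsing GL_VERSION and falling back to the extension string; the version numbers and extensions below are just an example, and has_extension() is the helper sketched earlier in the thread):

#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

int has_extension(const char *name); /* sketched earlier in the thread */

/* Returns 1 if the reported OpenGL version is at least major.minor.
   GL_VERSION begins with "<major>.<minor>" followed by vendor text. */
int gl_version_at_least(int major, int minor)
{
    int have_major = 0, have_minor = 0;
    const char *ver = (const char *)glGetString(GL_VERSION);
    if (!ver || sscanf(ver, "%d.%d", &have_major, &have_minor) != 2)
        return 0;
    return (have_major > major) ||
           (have_major == major && have_minor >= minor);
}

/* Example: GLSL shaders are core from GL 2.0, or available via the
   ARB extensions on older drivers. */
int can_use_glsl(void)
{
    return gl_version_at_least(2, 0) ||
           (has_extension("GL_ARB_shader_objects") &&
            has_extension("GL_ARB_vertex_shader"));
}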

Are you sure? I just checked the NVIDIA GL specs and it says NV1x supports it (in hardware).
Others, e.g. ARB_occlusion_query etc., where supported, are done in hardware.

Quote:
Original post by mattst88
What would you do differently if you could tell which extensions are done in software?

Well, for my particular application I'm rendering a 256x256 terrain. Going through the fixed-function pipeline I get around 20-60 fps. However, using a simple vertex shader, which just performs the vertex transform and outputs the colour and texture coordinates, I get 3-6 fps, which isn't good. I would still like to use vertex shaders when they are hardware accelerated, hence why I'd like to check.

Quote:
Original post by zedzeek
On NVIDIA cards, if the extension is in the extension string then it's done in hardware.
Though OrangyTang is correct: speed is what counts, not whether it's done in hardware. For example, I've tried out some game demos on my 2.0 GHz Athlon with Mesa and they run at ~20 fps, and with multi-core CPUs in the future this is only going to improve.

GeForce FAQ

This is how I know it's software emulated on the GeForce 2. Check Question 26.
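
For reference, a pass-through vertex shader of the kind described above would look something like this in GLSL (a sketch only; the OP's actual shader is not shown in the thread):

/* A minimal GLSL vertex shader: transform the vertex and pass the colour
   and first set of texture coordinates through unchanged. */
static const char *minimal_vertex_shader =
    "void main()\n"
    "{\n"
    "    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    gl_FrontColor  = gl_Color;\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "}\n";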

Quote:
Original post by deavik
There are other things as well: VBO is said to be supported on my (PCI) card, but it never runs faster than standard vertex arrays. (It's not my code; I asked people on this very forum to test some time back [smile].) There's something funky there as well - but again it runs at an acceptable frame rate, so I guess you can use it.

Well, it's not really funky; all it means is that the driver is keeping the data in system RAM and streaming it as it would for a regular vertex array (VA). Perfectly acceptable if the card can't deal with the data in VRAM [smile]
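
For anyone following along, this is the usual VBO setup; the usage flag is only a hint, and the driver is free to keep the buffer in system memory exactly as described above (the GLEW header is an assumption here - you could equally load the ARB_vertex_buffer_object entry points yourself):

#include <GL/glew.h>

/* Upload vertex data into a buffer object. GL_STATIC_DRAW is only a usage
   hint: the driver may still place the buffer in system memory and stream it
   like a plain vertex array, which is why a VBO is not guaranteed to be
   faster than ordinary vertex arrays. */
GLuint create_vbo(const void *data, GLsizeiptr size_in_bytes)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size_in_bytes, data, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}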

Quote:
Original post by phantom
Well, it's not really funky; all it means is that the driver is keeping the data in system RAM and streaming it as it would for a regular vertex array (VA). Perfectly acceptable if the card can't deal with the data in VRAM [smile]

I have no idea, but I wonder if that is indeed correct - that would mean the 64 MB of VRAM on my card is only used for textures [crying]. It's the same "no performance boost" with PBO as well, so most probably you're right and the buffer objects are created in system memory.

Quote:
Original post by jawadx
Quote:
Original post by mattst88
What would you do differently if you could tell which extensions are done in software?

Well, for my particular application I'm rendering a 256x256 terrain. Going through the fixed-function pipeline I get around 20-60 fps. However, using a simple vertex shader, which just performs the vertex transform and outputs the colour and texture coordinates, I get 3-6 fps, which isn't good. I would still like to use vertex shaders when they are hardware accelerated, hence why I'd like to check.

You seem to have missed the point. Whether a feature is hardware accelerated or not is moot; what matters is whether it's fast enough.

For your particular card, your particular drivers and your particular CPU, the emulated version does appear to be slower. But someone with a faster CPU or different drivers might find that it's actually faster - particularly someone using an integrated Intel chip, where the emulation of non-hardware features is pretty damn good.

The best you can do is run a quick benchmark at game startup to determine the optimal path for that particular hardware configuration. Since that is something of a pain, I tend to go for sensible defaults (i.e. if it's listed as an extension then use it) with an option to change it in the config menu. If you're really paranoid you could scan the GL_RENDERER string and try to pick defaults based on that. And again I must emphasise: pick defaults rather than a fixed setup, because your method will not be accurate and there will be cases where the user knows better than your dumb algorithm.
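
Pulling the suggestions in this thread together, the "sensible default plus user override" approach might look like this (a sketch; the config field and the default_to_vertex_shaders() helper sketched after an earlier post are both hypothetical):

#include <GL/gl.h>

/* Hypothetical config record, loaded from the game's settings file.
   -1 means "not set by the user"; 0/1 are explicit user choices. */
struct config {
    int use_vertex_shaders;   /* -1 = auto, 0 = force off, 1 = force on */
};

int default_to_vertex_shaders(void); /* renderer-string heuristic, sketched earlier */

/* The user's explicit choice always wins; the renderer-based guess is only
   the default when nothing has been configured. */
int should_use_vertex_shaders(const struct config *cfg)
{
    if (cfg->use_vertex_shaders != -1)
        return cfg->use_vertex_shaders;
    return default_to_vertex_shaders();
}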

Quote:
Original post by jawadx
GeForce FAQ
This is how I know it's software emulated on the GeForce 2. Check Question 26.

I stand corrected - this means their nvopenglspec.pdf is wrong then (or conversely it's right and the FAQ is wrong :) )
