Video Card caps

This topic is 3594 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

You can obtain information on the device's capabilities via Direct3D; for D3D9, use the IDirect3D9::GetDeviceCaps() API.

To obtain the amount of VRAM available you can also use DirectDraw, though the code for that is a little trickier.

Best Regards,
Porthos

You can get all sorts of information with glGetInteger and glGetString, but you can't get VRAM. For VRAM, there are OS-specific ways, like looking in the registry or using DirectDraw.

Quote:
Original post by theMachinimator
Dang... we're using Java and we need to support PC, Mac and Linux with no special-case code.
I think that's a bit of a pipe dream [grin]

On Windows you can use Windows Management Instrumentation (WMI).

On Linux/Mac OS you might have to do some lspci magic - I'm no Linux expert, btw.
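As a sketch of the lspci route: you'd shell out to `lspci` and look for the VGA controller line. The parsing below runs on a hard-coded sample string (the class name, sample output, and exact line format are illustrative assumptions; real output varies by distro and card):

```java
// Hypothetical sketch: find the graphics adapter in `lspci` output.
// A real app would capture the output of Runtime.getRuntime().exec("lspci").
public class LspciSketch {
    // Returns the description after "VGA compatible controller:", or null if absent.
    public static String findVgaController(String lspciOutput) {
        for (String line : lspciOutput.split("\n")) {
            int idx = line.indexOf("VGA compatible controller:");
            if (idx >= 0) {
                return line.substring(idx + "VGA compatible controller:".length()).trim();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String sample =
            "00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ (rev 02)\n"
          + "00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family\n";
        System.out.println(findVgaController(sample));
    }
}
```

Note this only gets you the adapter name, not the VRAM size; you'd still need a lookup table or further digging per platform.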

The most accurate way (which is also probably the weakest in terms of coding metrics such as maintainability and scalability) is to maintain a database of device caps, or query an online one somewhere - though don't quote me on that "solution".

Quote:
Original post by theMachinimator
Dang... we're using Java and we need to support PC, Mac and Linux with no special-case code.

Why do you think you need to know the vram size? OpenGL deliberately doesn't expose this information because you're not supposed to know (or care) where textures actually live and what your actual server/vram size is. I suspect you don't actually need to know, you just think you do. [grin]

On the other hand, if you really really want to know then you can mess around with the glAreTexturesResident function.

The thing about memory is that cards nowadays can use system RAM as video memory, so the point isn't as valid as it used to be. What you should be concerned with is: does it run fast enough?

I'm going to suggest that you don't rely on glAreTexturesResident, because on some drivers it just returns 1 no matter what, even if you create a gig of textures.

Quote:
Original post by OrangyTang
Quote:
Original post by theMachinimator
Dang... we're using Java and we need to support PC, Mac and Linux with no special-case code.

Why do you think you need to know the vram size? OpenGL deliberately doesn't expose this information because you're not supposed to know (or care) where textures actually live and what your actual server/vram size is. I suspect you don't actually need to know, you just think you do. [grin]

On the other hand, if you really really want to know then you can mess around with the glAreTexturesResident function.


Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate. Users are free to load as many assets as they wish, so we definitely need to keep track of how much we're using.

I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contentions when they're accessed.
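For the "keep track of how much we're using" side, a rough bookkeeping sketch is easy. The class and method names here are made up, and the byte estimate (width x height x 4 bytes for RGBA8, plus about a third more for a full mipmap chain) is only an approximation - drivers may pad, align, or compress behind your back:

```java
// Rough, driver-agnostic bookkeeping of the texture memory we've requested.
// Assumes 4 bytes/texel (RGBA8); the *4/3 approximates a full mipmap chain.
public class TextureBudget {
    private long usedBytes = 0;

    public static long estimateTextureBytes(int width, int height, boolean mipmapped) {
        long base = (long) width * height * 4;
        return mipmapped ? base * 4 / 3 : base;
    }

    public void onAllocate(int w, int h, boolean mips) { usedBytes += estimateTextureBytes(w, h, mips); }
    public void onFree(int w, int h, boolean mips)     { usedBytes -= estimateTextureBytes(w, h, mips); }
    public long getUsedBytes() { return usedBytes; }
}
```

This tells you what you've asked for, not what's left - which, as noted below, is the genuinely hard part.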

Quote:
Original post by theMachinimator
Quote:
Original post by OrangyTang
Quote:
Original post by theMachinimator
Dang... we're using Java and we need to support PC, Mac and Linux with no special-case code.

Why do you think you need to know the vram size? OpenGL deliberately doesn't expose this information because you're not supposed to know (or care) where textures actually live and what your actual server/vram size is. I suspect you don't actually need to know, you just think you do. [grin]

On the other hand, if you really really want to know then you can mess around with the glAreTexturesResident function.


Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate. Users are free to load as many assets as they wish, so we definitely need to keep a track on how much we're using.

I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contentions when they're accessed.

So how will you deal with chips like the Intel ones, which don't have dedicated video memory and so will have entirely different behavior? I'd suggest that what you really want is a user-configurable "texture cache size" so users can tweak it for their own setup, possibly selecting a reasonable default by looking at the GL_VENDOR and GL_RENDERER strings and taking a guess at a suitable starting value.
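That vendor-string heuristic could look something like the following. The class name, the substrings matched, and the default sizes are all invented for illustration - you'd tune them against real GL_RENDERER strings from your user base:

```java
// Hypothetical default texture-cache sizes (in MB) guessed from the
// GL_RENDERER string. Values are illustrative, not recommendations.
public class CacheDefaults {
    public static int defaultCacheMb(String renderer) {
        String r = renderer.toLowerCase();
        if (r.contains("intel"))   return 32;   // integrated, shared memory: be conservative
        if (r.contains("geforce") || r.contains("radeon")) return 128; // discrete card
        return 64;                              // unknown: middle of the road
    }
}
```

Whatever the guess, the user-facing setting stays the source of truth; the heuristic only seeds it.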

Quote:
Original post by theMachinimator
I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contentions when they're accessed.


That's not your problem, it's the driver manufacturer's problem. They're usually better equipped to handle it than someone working in user mode, too.

Quote:
I'd suggest that really what you want is a user-configurable "texture cache size" so they can tweak it for their own setup. Possibly selecting a reasonable default by looking at the GL_VENDOR and GL_RENDERER strings and taking a guess at a suitable starting value.

Yes, this is the best method; a lot of games have this, e.g. Doom 3/Crysis:
low -> medium -> high -> ultra
texture sizes 256x256 -> 512 -> 1024 -> 2048
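That preset scheme is trivial to sketch; the tiers and sizes below are exactly the ones mentioned above (the class and enum names are made up):

```java
// Map a user-selected quality tier to a maximum texture dimension,
// following the low/medium/high/ultra scheme used by games like Doom 3.
public class TextureQuality {
    public enum Tier { LOW, MEDIUM, HIGH, ULTRA }

    public static int maxTextureSize(Tier t) {
        switch (t) {
            case LOW:    return 256;
            case MEDIUM: return 512;
            case HIGH:   return 1024;
            default:     return 2048; // ULTRA
        }
    }
}
```

At load time you'd downsample any asset larger than `maxTextureSize` for the active tier.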

Quote:
Original post by theMachinimator
Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate.
I just feel the need to point out that AUX buffers are seldom HW-supported. There's no problem if you took this as an example, but if you really mean to use them... be warned!
Quote:
Original post by theMachinimator
Users are free to load as many assets as they wish, so we definitely need to keep a track on how much we're using.
This is trivial. What is not trivial is keeping track of the remaining resources. I want to point out that new[] and malloc() don't give you this info; they simply return NULL when you run out. Why do you need a more involved approach?
Quote:
Original post by theMachinimator
I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contentions when they're accessed.
Not really. I've seen drivers hang completely, others scramble all the colors; some just refused to texImage. A few managed this well enough for it not to be a huge problem.

The bottom line is that people building assets should know how much RAM they have and act appropriately. What kind of 'user' are you referring to?

Quote:
Original post by Krohm
Quote:
Original post by theMachinimator
Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate.
I just feel the need to point out that AUX buffers are seldom HW-supported. There's no problem if you took this as an example, but if you really mean to use them... be warned!

Aux buffers have been hardware supported since the GF6xxx (and equivalent ATI cards). Support is quite widespread now.

Quote:
What is not is to keep track of the remaining resources. I want to recall that new[] or malloc() don't give you this info. They'll simply ret NULL when running out. Why you need a more involved approach?

While malloc() does return NULL when out of memory, new/new[] throw an exception (unless you explicitly use the nothrow version).

Quote:
Original post by Krohm
I just feel the need to point out that AUX buffers are seldom HW-supported. There's no problem if you took this as an example, but if you really mean to use them... be warned!


Well we can detect the presence of FBOs. We have a basic fixed-function GL path in our code for the situations where our cleverer code can't operate. It's not at all clear to me how many off-screen render target textures I can allocate though. Many graphics techniques pretty much require one on their own. Plus we're implementing deferred shaders so we need 4 screen-size textures. I'd like to know if I can safely allocate such beasts ahead of time.
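For a back-of-envelope feel of what those four screen-size targets cost, a small estimator helps. This sketch assumes RGBA8 (4 bytes per texel) with no mipmaps - real deferred-shading G-buffers often use wider formats, so treat the number as a floor:

```java
// Estimate bytes for N full-screen RGBA8 render targets (no mipmaps).
// Deferred G-buffers frequently use fatter formats; this is a lower bound.
public class GBufferCost {
    public static long gBufferBytes(int width, int height, int targets) {
        return (long) width * height * 4 * targets;
    }

    public static void main(String[] args) {
        // Four 1280x1024 targets:
        System.out.println(gBufferBytes(1280, 1024, 4) / (1024 * 1024) + " MB");
    }
}
```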

Quote:

This is trivial. What is not is to keep track of the remaining resources.


Quite!

Quote:

I want to recall that new[] or malloc() don't give you this info. They'll simply ret NULL when running out. Why you need a more involved approach?


Running till you can't run any more doesn't seem like a good strategy. We want to know what sort of facilities we can offer the user with their machine from the moment the code boots up.

Quote:

What kind of 'user' are you referring to?


Our product - Moviestorm - is a tool for creating machinima, virtual movies on your PC. Our users can be anyone from a teenager to a pro in the movie business. We have terabytes of assets and users are free to load as many as they like into a set. We offer effects like depth-of-field and shadows which require copious resources, and as I said, deferred rendering is there awaiting switch-on. Not all users will have access to all functionality, and we want to be able to work out what we can offer when the launcher app kicks in.

Quote:
Original post by OrangyTang
Aux buffers have been hardware supported since the GF6xxx and later (and equivilent ATi cards). Support is quite widespread now.
I apologize. Thank you for correcting me; I must admit it's been quite a while since I last tested them extensively.
Quote:
Original post by OrangyTang
While malloc() does returns NULL when out of memory, new/new[] throw an exception (unless you explicitly use the no_throw version).
The point is that they simply bail out one way or the other; how they do it is basically irrelevant in this context.
Quote:
theMachinimator
Well we can detect the presence of FBOs. We have a basic fixed-function GL path in our code for the situations where our cleverer code can't operate. It's not at all clear to me how many off-screen render target textures I can allocate though. Many graphics techniques pretty much require one on their own.
That's better. AUX buffers have nothing to do with FBOs. There's no theoretical limit on FBO allocation, although only a few render targets can be used at once, depending on MRT functionality.
Quote:
Original post by theMachinimator
Running till you can't run any more doesn't seem like a good strategy. We want to know what sort of facilities we can offer the user with their machine from the moment the code boots up.

Our product - Moviestorm - is a tool for creating machinima, virtual movies on your PC. Our users can be anyone from a teenager to a pro in the movie business. We have terabytes of assets and users are free to load as many as they like into a set. We offer effects like depth-of-field and shadows which require copious resources, and as I said, deferred rendering is there awaiting switch-on. Not all users will have access to all functionality, and we want to be able to work out what we can offer when the launcher app kicks in.
Unluckily, this isn't the industry standard, and for a reason. Even the less technical artist will think about this. In GL there's no real way to know. You know the tale of target hardware and the issues in managing wide audiences... you'll rarely manage to combine the most flexible architecture with the most performant and feature-rich one.

Unfortunately, I faced a similar problem about two years ago and had to go with D3D that time. Its measurements are not 100% accurate, but you can definitely trust them to a good degree.


