Video Card caps

16 comments, last by Krohm 16 years, 2 months ago
Quote:Original post by OrangyTang
Quote:Original post by theMachinimator
Dang... we're using Java and we need to support PC, Mac and Linux with no special-case code.

Why do you think you need to know the vram size? OpenGL deliberately doesn't expose this information because you're not supposed to know (or care) where textures actually live and what your actual server/vram size is. I suspect you don't actually need to know, you just think you do. [grin]

On the other hand, if you really really want to know then you can mess around with the glAreTexturesResident function.


Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate. Users are free to load as many assets as they wish, so we definitely need to keep track of how much we're using.

I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contention when they're accessed.
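For what it's worth, a minimal sketch of the glAreTexturesResident route suggested above (C++ against a desktop GL 1.1+ context; textureIds is assumed to hold texture names that have already been created and uploaded, and many drivers simply report everything as resident, so treat the answer as a hint):

#include <GL/gl.h>
#include <cstddef>
#include <vector>

// Count how many of our texture objects the driver currently reports as
// resident. glAreTexturesResident returns GL_TRUE if *all* of them are
// resident; otherwise it fills in the per-texture array.
std::size_t CountResidentTextures(const std::vector<GLuint>& textureIds)
{
    if (textureIds.empty())
        return 0;

    std::vector<GLboolean> residences(textureIds.size(), GL_FALSE);
    GLboolean allResident = glAreTexturesResident(
        (GLsizei)textureIds.size(), &textureIds[0], &residences[0]);

    if (allResident)
        return textureIds.size();

    std::size_t count = 0;
    for (std::size_t i = 0; i < residences.size(); ++i)
        if (residences[i])
            ++count;
    return count;
}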
Quote:Original post by theMachinimator
Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate. Users are free to load as many assets as they wish, so we definitely need to keep track of how much we're using.

I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contention when they're accessed.

So how will you deal with chips like the Intel ones, which don't have dedicated video memory and so will have entirely different behavior? I'd suggest that what you really want is a user-configurable "texture cache size" so users can tweak it for their own setup, possibly selecting a reasonable default by looking at the GL_VENDOR and GL_RENDERER strings and taking a guess at a suitable starting value.
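Something along those lines, as a rough sketch (C++, needs a current GL context; the string matches and byte counts are made-up placeholders rather than tested heuristics, and a user-configurable override should always win):

#include <GL/gl.h>
#include <cstddef>
#include <cstring>

// Guess a starting texture-cache budget from the GL_VENDOR / GL_RENDERER
// strings. Purely a default; the user-configurable setting overrides it.
std::size_t GuessDefaultTextureCacheBytes()
{
    const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));

    std::size_t bytes = 64u * 1024u * 1024u;            // conservative default for unknown hardware

    if (renderer && std::strstr(renderer, "Intel"))
        bytes = 32u * 1024u * 1024u;                     // shared-memory chip: assume less
    else if (vendor && (std::strstr(vendor, "NVIDIA") || std::strstr(vendor, "ATI")))
        bytes = 128u * 1024u * 1024u;                    // discrete card: assume more

    return bytes;
}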
Quote:Original post by theMachinimator
I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contention when they're accessed.


That's not your problem, it's the driver manufacturer's problem. They're usually better equipped to handle it than someone working in user mode, too.
Quote:I'd suggest that what you really want is a user-configurable "texture cache size" so users can tweak it for their own setup, possibly selecting a reasonable default by looking at the GL_VENDOR and GL_RENDERER strings and taking a guess at a suitable starting value.

Yes, this is the best method; a lot of games do this, e.g. Doom 3 and Crysis:
low -> medium -> high -> ultra
with texture sizes 256x256 -> 512 -> 1024 -> 2048
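One cheap way to wire up that kind of detail setting, as a sketch (C++; GL_TEXTURE_BASE_LEVEL is GL 1.2, so on Windows it needs glext.h or a loader such as GLEW). Note this only makes GL sample from the smaller mips; a loader that wants to actually save memory should skip uploading the top levels instead:

#include <GL/gl.h>

// Detail level = number of top mip levels to skip. For a 2048x2048
// texture, DETAIL_LOW makes the effective top level 256x256.
enum TextureDetail { DETAIL_ULTRA = 0, DETAIL_HIGH = 1, DETAIL_MEDIUM = 2, DETAIL_LOW = 3 };

void ApplyTextureDetail(GLuint texture, TextureDetail detail)
{
    glBindTexture(GL_TEXTURE_2D, texture);   // assumes a mipmapped 2D texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, (GLint)detail);
}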
Quote:Original post by theMachinimator
Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate.
I just feel the need to point out that AUX buffers are seldom hardware-supported. That's fine if you only meant this as an example, but if you really mean to use them... be warned!
Quote:Original post by theMachinimator
Users are free to load as many assets as they wish, so we definitely need to keep track of how much we're using.
This is trivial. What is not trivial is keeping track of the remaining resources. I want to point out that new[] or malloc() don't give you this info; they simply return NULL when you run out. Why do you need a more involved approach?
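On the GL side, the closest analogue to malloc() returning NULL is checking glGetError() for GL_OUT_OF_MEMORY after creating the resource. A rough C++ sketch, with the caveat that some drivers defer the real allocation until first use, so even this is no guarantee:

#include <GL/gl.h>

// Try to allocate an RGBA8 texture of the given size and report whether the
// driver raised GL_OUT_OF_MEMORY while doing so.
bool TryCreateTexture(GLuint* outTex, GLsizei width, GLsizei height)
{
    while (glGetError() != GL_NO_ERROR) {}   // clear any stale errors first

    glGenTextures(1, outTex);
    glBindTexture(GL_TEXTURE_2D, *outTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // NULL data: allocate storage only

    if (glGetError() == GL_OUT_OF_MEMORY)
    {
        glDeleteTextures(1, outTex);
        *outTex = 0;
        return false;
    }
    return true;
}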
Quote:Original post by theMachinimator
I care where textures live because the reality is that (IIRC) the ones that are system-RAM resident start causing bus contention when they're accessed.
Not really. I've seen drivers hang completely, others scrap all the colors. Some just refused to texImage. A few managed this well enough for it not to be a huge problem.

The bottom line is that people building assets should know how much RAM they have and act appropriately. What kind of 'user' are you referring to?

Previously "Krohm"

Quote:Original post by Krohm
Quote:Original post by theMachinimator
Our app needs to manage resources. For instance, we'd like to know how big and how many auxiliary render buffers / textures we can allocate.
I just feel the need to point out that AUX buffers are seldom hardware-supported. That's fine if you only meant this as an example, but if you really mean to use them... be warned!

Aux buffers have been hardware supported since the GF6xxx and later (and equivalent ATI cards). Support is quite widespread now.
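For anyone who wants to check at runtime, the number of aux buffers in the current pixel format can be queried directly; zero is still a common answer simply because the pixel format you were handed doesn't include any, regardless of what the hardware could do. A tiny C++ sketch:

#include <GL/gl.h>

// How many AUX buffers does the current context's pixel format expose?
int AuxBufferCount()
{
    GLint count = 0;
    glGetIntegerv(GL_AUX_BUFFERS, &count);
    return (int)count;
}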

Quote:What is not trivial is keeping track of the remaining resources. I want to point out that new[] or malloc() don't give you this info; they simply return NULL when you run out. Why do you need a more involved approach?

While malloc() does return NULL when out of memory, new/new[] throw an exception (unless you explicitly use the std::nothrow version).
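For reference, the non-throwing form looks like this (C++ sketch):

#include <new>
#include <cstddef>

// new[] with std::nothrow behaves like malloc(): it returns a null
// pointer on failure instead of throwing std::bad_alloc.
float* AllocateOrNull(std::size_t count)
{
    return new (std::nothrow) float[count];
}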
Quote:Original post by Krohm
I just feel the need to point out that AUX buffers are seldom hardware-supported. That's fine if you only meant this as an example, but if you really mean to use them... be warned!


Well, we can detect the presence of FBOs. We have a basic fixed-function GL path in our code for the situations where our cleverer code can't operate. It's not at all clear to me how many off-screen render target textures I can allocate, though. Many graphics techniques pretty much require one of their own. Plus we're implementing deferred shading, so we need four screen-sized textures. I'd like to know ahead of time whether I can safely allocate such beasts.
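One way to get at least a partial answer is a throw-away probe at start-up: build an FBO with four screen-sized RGBA8 color textures and see whether the driver accepts it. A C++ sketch, assuming EXT_framebuffer_object is available and GLEW has been initialised, and that GL_MAX_COLOR_ATTACHMENTS_EXT has already been confirmed to be at least 4; even a "complete" result is a hint rather than a promise, since drivers may defer the real allocation:

#include <GL/glew.h>

bool ProbeDeferredTargets(GLsizei width, GLsizei height)
{
    while (glGetError() != GL_NO_ERROR) {}   // clear stale errors

    GLuint fbo = 0, tex[4] = { 0, 0, 0, 0 };
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glGenTextures(4, tex);

    // Create and attach four screen-sized RGBA8 color textures.
    for (int i = 0; i < 4; ++i)
    {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                                  (GLenum)(GL_COLOR_ATTACHMENT0_EXT + i),
                                  GL_TEXTURE_2D, tex[i], 0);
    }

    // Drain the error queue, looking for GL_OUT_OF_MEMORY from the uploads.
    bool outOfMemory = false;
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        if (err == GL_OUT_OF_MEMORY)
            outOfMemory = true;

    bool ok = !outOfMemory &&
              glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT;

    // Throw the probe away again; the real targets get created later.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDeleteFramebuffersEXT(1, &fbo);
    glDeleteTextures(4, tex);
    return ok;
}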

Quote:
This is trivial. What is not is to keep track of the remaining resources.


Quite!

Quote:
I want to point out that new[] or malloc() don't give you this info; they simply return NULL when you run out. Why do you need a more involved approach?


Running till you can't run any more doesn't seem like a good strategy. We want to know what sort of facilities we can offer the user on their machine from the moment the code boots up.

Quote:
What kind of 'user' are you referring to?


Our product - Moviestorm - is a tool for creating machinima, virtual movies on your PC. Our users can be anyone from a teenager to a pro in the movie business. We have terabytes of assets and users are free to load as many as they like into a set. We offer effects like depth-of-field and shadows which require copious resources, and as I said, deferred rendering is there awaiting switch-on. Not all users will have access to all functionality, and we want to be able to work out what we can offer when the launcher app kicks in.

Quote:Original post by OrangyTang
Aux buffers have been hardware supported since the GF6xxx and later (and equivalent ATI cards). Support is quite widespread now.
I apologize. Thank you for correcting me; I must admit it's been quite a while since I've done extensive testing on them.
Quote:Original post by OrangyTang
While malloc() does return NULL when out of memory, new/new[] throw an exception (unless you explicitly use the std::nothrow version).
The point is that they simply bail out one way or the other; how they do it is basically irrelevant in this context.
Quote:Original post by theMachinimator
Well, we can detect the presence of FBOs. We have a basic fixed-function GL path in our code for the situations where our cleverer code can't operate. It's not at all clear to me how many off-screen render target textures I can allocate, though. Many graphics techniques pretty much require one of their own.
That's better, then. AUX buffers have nothing to do with FBOs. There's no theoretical limit on FBO allocation, although only a few color attachments can be used at once, depending on MRT functionality.
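The two limits worth querying before committing to a four-target G-buffer, as a C++ sketch (via GLEW; GL_MAX_COLOR_ATTACHMENTS_EXT comes with EXT_framebuffer_object, GL_MAX_DRAW_BUFFERS with GL 2.0 / ARB_draw_buffers):

#include <GL/glew.h>

// How many color attachments an FBO can have, and how many render
// targets can be written to simultaneously (the MRT limit).
void QueryMrtLimits(GLint* maxColorAttachments, GLint* maxDrawBuffers)
{
    *maxColorAttachments = 0;
    *maxDrawBuffers = 0;
    glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS_EXT, maxColorAttachments);
    glGetIntegerv(GL_MAX_DRAW_BUFFERS, maxDrawBuffers);
}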
Quote:Original post by theMachinimator
Running till you can't run any more doesn't seem like a good strategy. We want to know what sort of facilities we can offer the user on their machine from the moment the code boots up.

Our product - Moviestorm - is a tool for creating machinima, virtual movies on your PC. Our users can be anyone from a teenager to a pro in the movie business. We have terabytes of assets and users are free to load as many as they like into a set. We offer effects like depth-of-field and shadows which require copious resources, and as I said, deferred rendering is there awaiting switch-on. Not all users will have access to all functionality, and we want to be able to work out what we can offer when the launcher app kicks in.
Unfortunately, this isn't the industry standard, and for a reason. Even the least technical artist will think about this. In GL there's no real way to know. You know the story about target hardware and the issues of managing wide audiences... you'll hardly combine the most flexible architecture with the most performant and feature-rich one.

Unfortunately, I faced a similar problem about two years ago and had to go with D3D that time. Its measurements are not 100% accurate, but you can definitely trust them to a good degree.
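For comparison, the D3D9 query being referred to here is presumably IDirect3DDevice9::GetAvailableTextureMem(), which returns a rough estimate (rounded to the nearest MB) of the texture memory available to the device:

#include <d3d9.h>

// Rough texture-memory budget as D3D9 reports it. The figure can include
// AGP/shared memory, so treat it as an estimate, not a hard cap.
UINT ApproxAvailableTextureMemBytes(IDirect3DDevice9* device)
{
    return device->GetAvailableTextureMem();
}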

Previously "Krohm"

This topic is closed to new replies.
