madmax46

OpenGL texture memory management question


Recommended Posts

I posted this thread in the beginners forum (http://www.gamedev.net/community/forums/topic.asp?topic_id=581198) and was wondering whether anyone on the OpenGL forum might know. Here is my question, copied straight from the other forum:

I was wondering what will happen if I continuously load textures to the video card without disposing of unused ones. Will the video card start swapping the old memory out and be able to swap back in when the old textures are referenced again or do I need to write code to do memory management myself? Is there a performance boost in doing it either way?

Thanks,

Max

Hello Max,

Texture loads are a driver decision. What does that mean? OpenGL only specifies behavior, not how the driver manages memory storage for texture objects. So, to answer your question: whenever you load a texture, the driver decides where it will be stored. That might be VRAM, CPU RAM, or even disk; it is up to the driver, and it can happen with your very first texture (i.e. the driver might decide to use RAM for your very first texture for its own reasons).

What does the driver usually do? That really depends on how it is programmed, but mainly it will try to upload the data as soon as its synchronization allows and its storage analysis confirms there is enough space. If you do not release texture memory, it might be forced to use CPU RAM to fulfill your request, or even slower memory. So it is always good practice to unload textures you are not using.

Again (just to make sure): uploading a texture does not mean it will go directly to VRAM. It means it will most likely go there, if the driver decides that is its best option, but nothing guarantees you that it will.

Hopefully it helps,
Cheers.

To piggyback on your thread:

I wonder how OpenGL deals with a single-channel texture.
If I have a texture that's nothing more than a grayscale image, and I create it as GL_ALPHA, will it create an RGBA texture anyway and just fill the A?

Reading the OpenGL specs, that's the impression I get. But then again, it seems like a huge waste on the part of OpenGL to burn that much memory.

For single-channel textures, I've always created them using GL_LUMINANCE, GL_LUMINANCE8, and GL_UNSIGNED_BYTE. I would think the memory is always managed correctly in that case.

Max

According to the OpenGL docs:

http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml

GL_LUMINANCE
Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).

I thought that before, but I think it is driver dependent, because in my fragment shader the only components being set are textureCoord.r and .a: .r has my value and .a has 1, while .g and .b are both empty. I'm not sure what is happening there.
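If the replication really is driver-dependent, one way to sidestep it is to broadcast the single channel yourself in the shader instead of trusting the luminance-to-RGBA expansion. A minimal GLSL 1.x style sketch (the `tex` and `uv` names are placeholders, not from the original post):

```glsl
uniform sampler2D tex;  /* bound to the single-channel texture */
varying vec2 uv;

void main()
{
    /* Read the channel the data actually landed in (.r here) and
     * replicate it explicitly, so the result no longer depends on
     * how the driver expanded the luminance format. */
    float l = texture2D(tex, uv).r;
    gl_FragColor = vec4(l, l, l, 1.0);
}
```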

Max
