OpenGL texture memory management question

Started by
4 comments, last by madmax46 13 years, 7 months ago
I posted this thread on the beginners forum (http://www.gamedev.net/community/forums/topic.asp?topic_id=581198) and I was wondering if maybe anyone on the OpenGL forum might know. Here is my question, copied straight from the other forum:

I was wondering what will happen if I continuously load textures to the video card without disposing of unused ones. Will the video card start swapping the old memory out and be able to swap back in when the old textures are referenced again or do I need to write code to do memory management myself? Is there a performance boost in doing it either way?

Thanks,

Max
Hello Max,

Texture placement is a driver decision. What does that mean? OpenGL only specifies behavior, not how the driver manages memory storage for texture objects. So, to answer your question: whenever you load a texture, the driver decides where it will be stored. That might be VRAM, system RAM, or even disk, and it is entirely up to the driver. That can happen even with your very first texture (the driver might decide to keep it in system RAM for its own reasons).

What does the driver usually do? That really depends on how it is programmed, but generally it will try to upload the data as soon as its synchronization allows and its storage analysis confirms there is enough space. If you do not release texture memory, it may be forced to fall back to system RAM, or even slower memory, to fulfill your request. So it is always good practice to unload textures you are no longer using.

Again (just to make sure): uploading a texture does not mean it will go directly to VRAM. It means it will most likely end up there, if the driver decides that is its best option, but nothing guarantees it.
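If you do decide to cap texture residency yourself rather than leave it all to the driver, one common pattern is a small least-recently-used cache of texture names. Here is a minimal sketch of that idea; the cache size, the `cache_use` helper, and the texture IDs are all hypothetical, and in a real renderer the eviction step would call `glDeleteTextures` on the evicted name, as noted in the comment:

```c
/* Hypothetical fixed-size texture cache: when full, the least recently
 * used entry is evicted to make room. In a real renderer, the eviction
 * point is where you would call glDeleteTextures(1, &evicted). */
#define CACHE_CAPACITY 3

typedef struct {
    unsigned int texture_id;  /* the GL texture name being tracked */
    unsigned long last_used;  /* logical timestamp of last access */
    int occupied;
} CacheSlot;

static CacheSlot cache[CACHE_CAPACITY];
static unsigned long clock_tick = 0;

/* Mark a texture as used, inserting it (and evicting the LRU entry)
 * if it is not resident. Returns the evicted ID, or 0 if none. */
unsigned int cache_use(unsigned int texture_id) {
    clock_tick++;
    int free_slot = -1;
    for (int i = 0; i < CACHE_CAPACITY; i++) {
        if (cache[i].occupied && cache[i].texture_id == texture_id) {
            cache[i].last_used = clock_tick;   /* already resident */
            return 0;
        }
        if (!cache[i].occupied) free_slot = i;
    }
    unsigned int evicted = 0;
    int slot = free_slot;
    if (slot < 0) {                            /* cache full: evict LRU */
        slot = 0;
        for (int i = 1; i < CACHE_CAPACITY; i++)
            if (cache[i].last_used < cache[slot].last_used) slot = i;
        evicted = cache[slot].texture_id;
        /* glDeleteTextures(1, &evicted); would go here */
    }
    cache[slot].texture_id = texture_id;
    cache[slot].last_used = clock_tick;
    cache[slot].occupied = 1;
    return evicted;
}
```

With a capacity of 3, using textures 10, 20, 30, touching 10 again, and then using 40 would evict texture 20, since it is the one touched least recently.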

Hopefully it helps,
Cheers.
To piggyback on your thread:

I wonder how OpenGL deals with a single channel texture.
If I have a texture that's nothing more than a grayscale image, and I create it as GL_ALPHA, will OpenGL create an RGBA texture anyway and just fill in the A?

Reading the OpenGL specs, that's the impression I get. But then again, it seems like a very big waste on OpenGL's part to throw away that much memory.
For single-channel textures, I've always created them using GL_LUMINANCE, GL_LUMINANCE8, and GL_UNSIGNED_BYTE. I would think the memory would always be managed correctly in that case.

Max
According to the OpenGL docs:

http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml

GL_LUMINANCE
Each element is a single luminance value.
The GL converts it to floating point,
then assembles it into an RGBA element by replicating the luminance value
three times for red, green, and blue and attaching 1 for alpha.
Each component is then multiplied by the signed scale factor GL_c_SCALE,
added to the signed bias GL_c_BIAS,
and clamped to the range [0,1]
(see glPixelTransfer).
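That quoted conversion is straightforward to write out. Here is a sketch of the math only (not the actual driver code): a luminance value is replicated into R, G, and B, alpha is attached as 1, and each component is scaled, biased, and clamped to [0,1]. Note that GL_c_SCALE defaults to 1 and GL_c_BIAS to 0, so by default the values pass through unchanged:

```c
/* Sketch of the pixel-transfer math quoted above. */
static float clamp01(float v) {
    return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
}

void expand_luminance(float lum, float scale, float bias, float rgba[4]) {
    rgba[0] = rgba[1] = rgba[2] = lum; /* replicate L into R, G, B */
    rgba[3] = 1.0f;                    /* attach 1 for alpha */
    for (int i = 0; i < 4; i++)        /* scale, bias, clamp each one */
        rgba[i] = clamp01(rgba[i] * scale + bias);
}
```

With the default scale of 1 and bias of 0, a luminance of 0.5 comes out as (0.5, 0.5, 0.5, 1.0).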
I thought that before, but I think it is driver dependent, because in my fragment shader the only channels that come through are .r and .a: .r has my value and .a has 1, while .g and .b are both empty. I'm not sure what is happening there.

Max

