glTexImage3D uses double the video memory


Hi all, I'm having a problem with glTexImage3D on a GeForce 6800 Ultra (PCIe x16, 512 MB). Every time I load a 3D texture and calculate how much video memory it should use, it seems to be using twice as much as it should. Here's my code for loading a 3D texture:
    glBindTexture(GL_TEXTURE_3D, texture3D);

    /* GL_CLAMP_TO_EDGE is usually preferable for volume data, but the
       wrap mode shouldn't affect memory use either way. */
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* no mipmaps requested */

    glTexImage3D(GL_TEXTURE_3D,
                 0,                 /* mip level */
                 GL_RGBA8,          /* internal format */
                 xiSize,            /* width */
                 yiSize,            /* height */
                 ziSize,            /* depth */
                 0,                 /* border */
                 GL_RGBA,           /* source data format */
                 GL_UNSIGNED_BYTE,  /* source data type */
                 vgh_data);         /* the voxel data */
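
For reference, this is how I'm estimating the expected footprint, plus an error check right after the upload (a small sketch; the printf is just for illustration):

    /* GL_RGBA8 is 4 bytes per voxel, so expected size = w * h * d * 4. */
    size_t expected = (size_t)xiSize * (size_t)yiSize * (size_t)ziSize * 4;
    printf("expected texture size: %lu bytes\n", (unsigned long)expected);

    /* Any GL error here would mean the upload itself went wrong. */
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        printf("glTexImage3D failed: 0x%04x\n", err);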

Any suggestions?

How big is your texture? What's the bpp? How much memory do you think it should use, and how much is it actually using?

Brain21

Hi Brain,

The size of the texture really varies. Theoretically, I should be able to load datasets of around ~500 MB, but I've run into a limit at ~250 MB, because a ~250 MB dataset ends up using ~500 MB of video memory.

Most of the textures are based on 12-bit CT scans, but I'm loading a variety of data formats (mostly interpreted as raw). It could be one byte per voxel or four bytes per voxel.

The basic problem is this: when I load a 256x256x128 texture as GL_RGBA8, it should use 32 MB on the card. Instead it uses 64 MB. I'm wondering if I'm doing something stupid in the code above that's causing the problem.
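
Just to spell out the arithmetic (GL_RGBA8 is 4 bytes per voxel):

    /* 256 * 256 * 128 voxels * 4 bytes = 33,554,432 bytes = 32 MB */
    size_t expected = (size_t)256 * 256 * 128 * 4;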

Thanks

I cannot reproduce this; it works fine for me. What driver version are you running? Can you provide a reproducible test case? There is nothing wrong with the code you posted, but obviously something is going wrong if your observations are correct.
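
A proxy-texture query is a handy piece for a minimal test case; it asks the driver whether it can honour a given size/format without allocating anything. A sketch (untested):

    /* Probe support for a 256x256x128 GL_RGBA8 volume. If the driver
       reports width 0, it cannot honour this combination as requested.
       (This won't reveal memory use, but it rules out outright rejection.) */
    GLint proxyWidth = 0;
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA8, 256, 256, 128, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH,
                             &proxyWidth);
    if (proxyWidth == 0)
        printf("driver rejects this 3D texture size/format\n");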

Offhand I'd guess that it's not able to cope with a non-cubic 3D texture.

I'm doing somewhat related work on a 7800 GTX (256 MB), but I haven't tried to check the actual limitations, since I only need one 256x256x256 texture at a time. Next time I get back to that work I'll double-check what NV reports for memory use, but that could be a couple of weeks, so don't hold your breath. My advice right now is to use a 256 cube instead of a half-cube and see whether your memory consumption still doubles; if it doesn't, it's fairly obvious what the problem is.
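
Something like this would do as a quick A/B test (a sketch; note that drivers may defer the actual VRAM commit until a texture is first used, so you may need to render with each one before reading your memory monitor):

    GLuint tex[2];
    glGenTextures(2, tex);

    /* Half-cube: 256x256x128, allocate only (NULL data is legal). */
    glBindTexture(GL_TEXTURE_3D, tex[0]);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 256, 256, 128, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFinish();
    /* ...note what the memory monitor reports here... */

    /* Full cube: 256x256x256. */
    glBindTexture(GL_TEXTURE_3D, tex[1]);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 256, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFinish();
    /* If the half-cube cost the same as the full cube, padding is the culprit. */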

Perhaps a bit of a stretch, but is it possible that [xyz]iSize aren't being filled correctly? Copy-and-paste errors of this sort are all too common:

xiSize = tex.GetX();
yiSize = tex.GetY();
ziSize = tex.GetY();   // <-- should be GetZ(); the classic copy-paste slip

Well guys, I've taken your suggestions and have come up with the same results.

I've checked to make sure that I am passing the correct dimensions to the call, and I am.

I've tried using a cubic texture, and I get the same doubled memory usage.

Sigh...

I must be doing something wrong, or the nvidia driver is doing something wrong.

Only one thought occurs, and it may be a stupid one: if I'm using a double-buffered OpenGL context, could it be double-buffering the texture?

Guest Anonymous Poster
Quote:
Original post by Renaissanz
Well guys, I've taken your suggestions and have come up with the same results.
I've checked to make sure that I am passing the correct dimensions to the call, and I am.
I've tried using a cubic texture, and I get the same doubled memory usage.
Sigh...
I must be doing something wrong, or the nvidia driver is doing something wrong.
Only one thought occurs, and it may be a stupid one: if I'm using a double-buffered OpenGL context, could it be double-buffering the texture?


If you load a texture, the system may generate mipmaps for it. Every mipmap level is half the size of the previous in each dimension. In the 2D case this adds up to a total of about 133% of the original texture; for a 3D texture each level is 1/8 the size of the previous, so the total is only about 114%. Neither explains a doubling.

For texture memory allocation, rounding every dimension up to the smallest power of two that covers the largest dimension is quite possible, so a 64x256 or a 128x256 texture could occupy the same memory as a 256x256 one. If that applies to 3D textures, your 256x256x128 volume would be padded to 256x256x256, which is exactly a factor of two. Merging two datasets together can be a workaround (two 256x256x128 volumes can be stored as one 256x256x256), as illustrated below.
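
A tiny illustration of that theory (hypothetical padding, not confirmed driver behaviour):

    /* If the driver padded every dimension up to the largest one, a
       256x256x128 RGBA8 volume would cost as much as a full 256 cube. */
    unsigned w = 256, h = 256, d = 128;
    unsigned side = w;
    if (h > side) side = h;
    if (d > side) side = d;
    size_t exact  = (size_t)w * h * d * 4;          /* 33,554,432 bytes = 32 MB */
    size_t padded = (size_t)side * side * side * 4; /* 67,108,864 bytes = 64 MB */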

Viktor

Mipmapping, whilst not necessary, is usually beneficial with regard to speed, but it only requires an extra ~33% memory in the 2D case (and roughly ~14% for 3D), not twice as much.

Well, I don't really want to use mipmapping at this point. I'd be happy just understanding why my texture memory usage is twice what it should be.

Now I'm even in the NVIDIA Developer Program, and they haven't given me any input as to what might be the problem!!! AARRGH!

Guest Anonymous Poster
You should also check the driver settings. There is an option that forces the use of mipmaps (though I don't know whether that setting applies to 3D textures as well). The driver settings have caused me problems more than once.

But even then, mipmaps should only take about 33% more memory, not 100%.
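
To rule out a forced mipmap chain from code, clamping the level range should work (GL 1.2 or later; a sketch):

    /* Restrict the texture to its base level, so the driver has no
       reason to allocate (or auto-generate) a mipmap chain. */
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 0);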

Share on other sites