load a texture, draw it, then read back data - becomes corrupted


This should be a no-brainer, but I can't get it to work. All I do is create a texture, render it, then read back the data and look at it. I'm expecting the data to come back exactly the same as when I rendered it, but it doesn't - it's all corrupted. First, create the texture:

GLuint myTex;
glGenTextures(1, &myTex);
GLuint size = itexSizeW * itexSizeH * 4;              // number of float components
float *textureData = (float*)malloc(size * sizeof(float));
for (GLuint i = 0; i < size; i++)
	textureData[i] = 0.1f;  // initialise every channel to 0.1f

glBindTexture(iTextureType,myTex);  //can also have rectangle target
glTexParameteri(iTextureType, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(iTextureType, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(iTextureType, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(iTextureType, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(iTextureType, 0, GL_RGBA8, itexSizeW, itexSizeH, 0, GL_RGBA, GL_FLOAT, textureData);

Now set the viewport etc. and render the texture. The viewport is meant to be the same size as the texture, since this is the forerunner of a shader I'm writing.
glViewport(0, 0, itexSizeW, itexSizeH);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, itexSizeW, 0.0, itexSizeH);

// make quad filled to hit every pixel/texel
// (for a RECT target the texcoords would run 0..itexSizeW/H rather than 0..1)
glEnable(iTextureType);
glBindTexture(iTextureType, myTex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f);
    glVertex2f((float)itexSizeW, 0.0f);
    glTexCoord2f(1.0f, 1.0f);
    glVertex2f((float)itexSizeW, (float)itexSizeH);
    glTexCoord2f(0.0f, 1.0f);
    glVertex2f(0.0f, (float)itexSizeH);
glEnd();

Now copy it back from the framebuffer and read it:
glBindTexture(iTextureType, myTex);
glCopyTexSubImage2D(iTextureType, 0, 0, 0, 0, 0, itexSizeW, itexSizeH);  // framebuffer -> texture
glGetTexImage(iTextureType, 0, GL_RGBA, GL_FLOAT, textureData);          // texture -> client memory

float data;
for (unsigned int i = 0; i < itexSizeW; i++)
	data = textureData[i];  // corrupted here - expected 0.1f back


I am fairly new to OpenGL, but I just ran across this problem yesterday and think I know what is wrong. The problem is with your statement:

glTexImage2D(iTextureType, 0, GL_RGBA8, itexSizeW, itexSizeH, 0, GL_RGBA, GL_FLOAT, textureData);

You have the internal format as GL_RGBA8, which stores all of the channels as unsigned bytes. Even though you are passing in floats, the driver converts them to unsigned bytes (and converts them back when you read the texture out, so the values you get back won't exactly match what you wrote). One thing I am still unsure of is exactly how it converts the floats. If you only need 8-bit precision per channel, I would do the conversion in CPU memory before writing it to the texture, so you can be sure it is converted correctly.
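As a rough illustration of that suggestion, here is a minimal sketch of the CPU-side conversion, reusing itexSizeW/itexSizeH/textureData from the post above. The clamp-to-[0,1]-and-scale mapping is an assumption on my part, not something confirmed in the thread:

// Hedged sketch: quantise the float data to 8 bits per channel ourselves,
// so the float -> byte mapping is under our control rather than the driver's.
GLuint count = itexSizeW * itexSizeH * 4;
GLubyte *byteData = (GLubyte*)malloc(count);
for (GLuint i = 0; i < count; i++)
{
    float f = textureData[i];
    if (f < 0.0f) f = 0.0f;   // clamp to [0, 1] first (assumed mapping)
    if (f > 1.0f) f = 1.0f;
    byteData[i] = (GLubyte)(f * 255.0f + 0.5f);  // scale and round
}
// upload as bytes so no conversion happens behind our back
glTexImage2D(iTextureType, 0, GL_RGBA8, itexSizeW, itexSizeH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, byteData);
free(byteData);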

I have been unable to find a 32-bit float internal format that works with GL_TEXTURE_2D. I had to move over to GL_TEXTURE_RECTANGLE_ARB with the internal format GL_FLOAT_RGBA32_NV; I get an invalid operation error if I try using GL_FLOAT_RGBA32_NV with GL_TEXTURE_2D. Unfortunately, GL_TEXTURE_RECTANGLE_ARB is causing me problems of its own, partly because it uses non-normalized coordinates.
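For reference, a rough sketch of that rectangle-texture route. GL_TEXTURE_RECTANGLE_ARB and GL_FLOAT_RGBA32_NV are the names from the reply above (the latter comes from an NVIDIA extension); the unnormalised 0..width / 0..height texture coordinates are the part that differs from GL_TEXTURE_2D. This is a sketch of the general approach, not a tested recipe:

// Hedged sketch: a 32-bit float rectangle texture, as described above.
// Rectangle targets address texels directly, so texture coordinates
// run 0..itexSizeW / 0..itexSizeH instead of 0..1.
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, myTex);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV,
             itexSizeW, itexSizeH, 0, GL_RGBA, GL_FLOAT, textureData);

// when drawing the quad, use unnormalised coordinates:
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(0.0f, 0.0f);
    glTexCoord2f((float)itexSizeW, 0.0f);
    glVertex2f((float)itexSizeW, 0.0f);
    glTexCoord2f((float)itexSizeW, (float)itexSizeH);
    glVertex2f((float)itexSizeW, (float)itexSizeH);
    glTexCoord2f(0.0f, (float)itexSizeH);
    glVertex2f(0.0f, (float)itexSizeH);
glEnd();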

Hope that helps.

[Edited by - Kraig on May 23, 2007 12:45:20 PM]
