ade-the-heat

load a texture, draw it, then read back data - becomes corrupted


This situation is a no-brainer but I can't get it to work. All I do is create a texture, render it, then read back the data and look at it. I'm expecting the data to come back exactly the same as when I rendered it - but it's not; it's all corrupted.

Create the texture:
int itexSizeW = 8;
int itexSizeH = 8;
GLenum iTextureType = GL_TEXTURE_2D;
glEnable(iTextureType);

GLuint myTex;
glGenTextures(1, &myTex);
GLuint numFloats = itexSizeW * itexSizeH * 4;   // 4 channels per texel
GLuint size = numFloats * sizeof(float);        // size in bytes
float *textureData = (float*)malloc(size);
for (GLuint i = 0; i < numFloats; i++)
{
	textureData[i] = 0.1f;  //initialise every channel to 0.1f
}

glBindTexture(iTextureType,myTex);  //can also have rectangle target
glTexParameteri(iTextureType, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(iTextureType, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(iTextureType, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(iTextureType, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexImage2D(iTextureType, 0, GL_RGBA8, itexSizeW, itexSizeH, 0, GL_RGBA, GL_FLOAT, textureData);

Now set the viewport etc. and render the texture. The viewport is meant to be the same size as the texture, as this will be the forerunner of a shader I'm writing.
glViewport(0, 0, itexSizeW, itexSizeH);
glMatrixMode(GL_PROJECTION);    
glLoadIdentity();               
gluOrtho2D(0.0, itexSizeW, 0.0, itexSizeH);
     
glMatrixMode(GL_MODELVIEW);     
glLoadIdentity(); 

// make quad filled to hit every pixel/texel
glPolygonMode(GL_FRONT,GL_FILL);
// and render quad for RECT
glBindTexture(iTextureType, myTex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); 
    glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); 
    glVertex2f((float)itexSizeW, 0.0f);
    glTexCoord2f(1.0f, 1.0f);
    glVertex2f((float)itexSizeW, (float)itexSizeH);
    glTexCoord2f(0.0f, 1.0f); 
    glVertex2f(0.0f, (float)itexSizeH);
glEnd();

Now copy it back from the frame buffer and read it:
glBindTexture(iTextureType, myTex);
glCopyTexSubImage2D(iTextureType, 0, 0, 0, 0, 0, itexSizeW, itexSizeH);
glGetTexImage(iTextureType, 0, GL_RGBA, GL_FLOAT,  textureData );

float data;
for (GLuint i = 0; i < numFloats; i++)
{
	data = textureData[i];  //corrupted here
}


Hi,

I am fairly new to OpenGL, but I just ran across this problem yesterday and think I know what is wrong. The problem is with your statement:

glTexImage2D(iTextureType, 0, GL_RGBA8, itexSizeW, itexSizeH, 0, GL_RGBA, GL_FLOAT, textureData);

You have the internal format as GL_RGBA8, which stores all of the channels as unsigned bytes. Even though you are passing in floats, OpenGL converts them to unsigned bytes. One thing I am still unsure of is exactly how it converts the floats. If you only need 8-bit precision per channel, I would convert the data in CPU memory first, before uploading it to the texture, so you can be sure it is converted correctly.
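If it helps, here is a minimal sketch of doing that conversion up front, assuming the usual clamp-and-scale mapping to 0..255 (floatToByte and byteData are just names made up for the example). Note that even then 0.1f can only come back as the nearest 8-bit value (26/255, roughly 0.102), so an exact round trip really needs a float internal format.

// Hypothetical helper: clamp a normalized float channel and scale it
// to the byte value an 8-bit internal format can actually store.
unsigned char floatToByte(float f)
{
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return (unsigned char)(f * 255.0f + 0.5f);
}

// Convert on the CPU, then upload with GL_UNSIGNED_BYTE instead of GL_FLOAT.
unsigned char *byteData = (unsigned char*)malloc(itexSizeW * itexSizeH * 4);
for (int i = 0; i < itexSizeW * itexSizeH * 4; i++)
{
    byteData[i] = floatToByte(textureData[i]);
}
glTexImage2D(iTextureType, 0, GL_RGBA8, itexSizeW, itexSizeH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, byteData);
free(byteData);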

I have been unable to find a 32-bit float internal format that works with GL_TEXTURE_2D. I had to move over to GL_TEXTURE_RECTANGLE_ARB with the internal format GL_FLOAT_RGBA32_NV; I get an invalid operation error if I try using GL_FLOAT_RGBA32_NV with GL_TEXTURE_2D. Unfortunately, GL_TEXTURE_RECTANGLE_ARB is causing me problems of its own, partly because it uses non-normalized texture coordinates.
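For what it's worth, the allocation I ended up with looked roughly like the sketch below. It assumes the GL_ARB_texture_rectangle and GL_NV_float_buffer extensions are available (NVIDIA hardware in my case), so treat it as a sketch rather than a portable solution.

// Sketch: a 32-bit float RGBA texture via the rectangle target.
// Requires GL_ARB_texture_rectangle and GL_NV_float_buffer.
GLuint floatTex;
glGenTextures(1, &floatTex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, floatTex);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV,
             itexSizeW, itexSizeH, 0, GL_RGBA, GL_FLOAT, textureData);

// Remember: texture coordinates for the rectangle target are in texels,
// so the quad's upper-right corner would use (itexSizeW, itexSizeH)
// instead of (1, 1).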

Hope that helps.

[Edited by - Kraig on May 23, 2007 12:45:20 PM]

