Framebuffer object pixel resolution

Started by pilgrim9684 · 16 comments, last by pilgrim9684 15 years, 9 months ago
I have Kevin Harris's source code (ogl_frame_buffer_object.cpp) that I obtained from a web search. I am attempting to modify it to render a grayscale image (a checkerboard of gray squares) to a framebuffer object, in order to demonstrate that I can extract pixel resolutions greater than 8 bits (the PC video card's maximum color component resolution) by using a framebuffer object instead of the graphics card for rendering an image. However, my calls to glReadPixels still only have 8-bit resolution even though I am reading from the framebuffer.

I verified that the image data I am providing to my texture with glTexImage2D contains normalized floating-point data in a single-precision floating-point array, and the values in that array represent a step size of 1/16384 (i.e. 14-bit resolution). My texture looks right on the display. However, glReadPixels does not return the same 14-bit resolution that I provide for a given pixel. I make a call to glBindFramebufferEXT() with the arguments GL_FRAMEBUFFER_EXT and g_frameBuffer just before I call glReadPixels.

What is the key to preventing the graphics card's 8-bit color component limitation from affecting my glReadPixels calls?

thanks,
pilgrim
You need to create the FBO with the right precision and also read back at that same precision.
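For example (a minimal sketch, assuming your driver accepts the GL_RGBA16 internal format; width, height, x, and y are placeholders):

// Request 16 bits per color component for the FBO's color attachment
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, NULL);
// ...attach the texture to the FBO and render...
// Read back with a matching 16-bit type so nothing is truncated to 8 bits
GLushort pixels[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_SHORT, pixels);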
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, FALSE, inverse_matrix);
Thanks for the reply. It is good news that there is a solution. However, I have yet to find direction from any source, on the web or elsewhere, as to how to accomplish this. My guess is that I am missing a simple instruction. The only call I see in the code that seems to be setting up resolution is a call to glRenderbufferStorageEXT with the arguments GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, RENDERBUFFER_WIDTH, and RENDERBUFFER_HEIGHT. I admit that I don't understand why that instruction is used in this code, setting up a depth buffer in such detail, while there is no comparable setup of the color buffer, where I imagine the resolution issue is rooted. If you know of code on the web that demonstrates reading high-resolution data from a framebuffer object, or have some insight as to what specific instruction might be missing, that would help me a great deal.

pilgrim
GL_DEPTH_COMPONENT24 is for the depth buffer, and since it is common for GPUs to support 24-bit depth, he is using that so that the FBO creation succeeds.

I think you are interested in the color buffer precision. Typically, graphics cards support GL_RGBA8 but perhaps other formats are supported as well like GL_RGBA16.
Here is an example that creates a GL_RGBA8 FBO:

//RGBA8 2D texture, 24-bit depth renderbuffer, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
//Does the GPU support the current FBO configuration?
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    cout << "good";
    break;
default:
    HANDLE_THE_ERROR;
}
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glColor3f(1.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex2f(0.0, 0.0);
glVertex2f(256.0, 0.0);
glVertex2f(256.0, 256.0);
glEnd();
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels along the bottom row inside the triangle should be yellow;
//pixels above the diagonal should be black



You might even want to try a floating-point format like GL_RGBA32F_ARB (GL_ARB_texture_float).
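A sketch of the float variant (assuming GL_ARB_texture_float is supported; everything else stays as in the example above):

// 32-bit float color attachment instead of GL_RGBA8
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, 256, 256, 0,
             GL_RGBA, GL_FLOAT, NULL);
// ...attach and render as before, then read back as floats
// so the full precision survives the transfer
GLfloat * fpixels = new GLfloat[256*256*4];
glReadPixels(0, 0, 256, 256, GL_RGBA, GL_FLOAT, fpixels);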

http://www.opengl.org has the GL specification and
http://www.opengl.org/registry has extension info like on GL_ARB_texture_float
Thanks for the reply. I will invest some time reviewing the code you attached to see if I can glean some insight on how to address my particular problem. In the meantime, you mentioned the crux of my issue in your response: that typical graphics cards only handle 8-bit color. I am processing the image for a purpose other than display. I need 14-bit resolution, and the graphics card's component resolution is the weak point in using PC graphics cards for this effort. However, FBO technology promises to decouple us from the graphics card limitations, and yet my experience so far is that I cannot convince my FBO to provide 14-bit resolution when I call glReadPixels.

At first glance your code example seems to be written around 8-bit color components, just what I am trying to avoid. However, I admit to not having studied it thoroughly, so I will invest more time reviewing it. Thanks again for your time. If you or anyone can show me code that actually renders a 14- or 16-bit-resolution texture to an FBO and then reads back the same resolution after rendering, I would sure like to see that snippet of code. I have yet to find it on the web. Everything is byte-oriented or jumbled up with other concepts that make the code too complex to identify the key instructions affecting my resolution issue.
pilgrim
I reviewed the code provided by V-man. Thanks for the code, but it deals with 8-bit color depth, and I need 14-bit depth for luminance values when I read them back using glReadPixels. I am still in need of a solution for how to convince OpenGL to provide better than 8-bit resolution on my FBO, such that I can read back that resolution.
I guess I wasn't clear.
There is a guarantee that all graphics cards support GL_RGBA8. Notice that this is 32-bit aligned.
Some formats are extremely unlikely to be supported directly, like GL_RGB8. This would be padded to make GL_RGBA8, where A is always 255.

There is no support for a 14-bit-per-component format, since it doesn't align well, but GL_RGBA16 does exist.
Modern cards support render-to-texture with practically every format in the GL spec. nVidia has documentation on their site, and I guess so does AMD.
In glTexImage2D, you need to change the internal format to whatever you like: GL_RGBA16.
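A 14-bit step of 1/16384 is comfortably within what GL_RGBA16's normalized 16-bit components (step 1/65535) can represent, so something like this should preserve it (a sketch; sizes and coordinates are placeholders):

// 16-bit normalized color components
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, NULL);
// ...attach to the FBO and render, then:
GLushort pix[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_SHORT, pix);
// pix[0]/65535.0 recovers the normalized value without 8-bit quantization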

Quote:Original post by pilgrim9684
However, FBO technology promises to decouple us from the graphics card limitations, and yet my experience so far is that I cannot convince my FBO to provide 14-bit resolution when I call glReadPixels.


It doesn't decouple.
Sometimes a format is not supported and you get something other than GL_FRAMEBUFFER_COMPLETE_EXT.
Quote:Original post by pilgrim9684
Thanks for the reply. I will invest some time reviewing the code you attached to see if I can glean some insight on how to address my particular problem. In the meantime, you mentioned the crux of my issue in your response: that typical graphics cards only handle 8-bit color. I am processing the image for a purpose other than display. I need 14-bit resolution, and the graphics card's component resolution is the weak point in using PC graphics cards for this effort. However, FBO technology promises to decouple us from the graphics card limitations, and yet my experience so far is that I cannot convince my FBO to provide 14-bit resolution when I call glReadPixels.


You're wrong here: an FBO is just an abstraction of what the graphics card can do. Specifically, an FBO is an offscreen render target, but normally you can create FBOs with more precision than the displayable framebuffer supports.

You need to create a 16-bit floating-point (half) texture and attach it to the FBO to render to it. You can do that with:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

GL_RGBA16F_ARB is provided by the ARB_texture_float extension. You can change 16 to 32 to get a "normal" 32-bit float.

And remember that glReadPixels() also needs a type for the data, which can be GL_HALF_FLOAT_ARB or GL_FLOAT. (GL_HALF_FLOAT_ARB comes from the ARB_half_float_pixel extension.)
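Putting that together, a sketch (assuming both extensions are present; sizes and coordinates are placeholders):

// Half-float color attachment; the type argument has no effect here since data is NULL
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// ...attach to the FBO and render, then:
GLfloat value[4];
// GL_FLOAT readback expands the half floats to full floats;
// GL_HALF_FLOAT_ARB would return the raw 16-bit halves instead
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, value);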
My plea for sample source code still stands. The last two replies offer tidbits of technical information (thank you for that), Huntsman in particular providing specifics for the problem I am having. However, pixels that should have a value such as 1/16384 or 3/16384 (based on the texel value mapped to that pixel) return a value of zero from glReadPixels. It is not until the values cross the threshold of 1/256 that a nonzero value is returned.

The bottom line here is that glReadPixels still loses the resolution that was contained in the original texture. I am using Huntsman's glTexImage2D call. The example provided uses GL_RGBA; I am using floating-point luminance values, so I don't think that parameter is correct for what I am doing. Do you agree? Have any of you actually read back pixels from a buffer and verified that they retained a resolution of 14 bits or greater? You may have to produce a histogram from an area of pixels read from the buffer to be sure that the values span the dynamic range of the possible bit combinations and are not grouped into values that reveal a lower resolution.
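For example, something along these lines would reveal the effective resolution (a sketch; the 64x64 region and luminance readback mirror my own setup):

#include <set>
// Read a block of pixels as floats and count the distinct values;
// data quantized to 8 bits can never produce more than 256 of them
GLfloat buf[64*64];
glReadPixels(0, 0, 64, 64, GL_LUMINANCE, GL_FLOAT, buf);
std::set<GLfloat> distinct(buf, buf + 64*64);
// If distinct.size() never exceeds 256 across a full-range ramp, the
// readback path is quantizing to 8 bits somewhere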

Once again, a simple source example that has been proven to render and extract values of 14-bit resolution or greater would sure be appreciated. Pilgrim
The texture I am using contains a 16x16 checkerboard image, each square made up of 16x16 texels of the same value. The texels are contained in 64x64 IEEE single-precision floating-point grayscale values read from a file. I provide pieces of the code below.
void loadTexture( void )
{
    FILE * checkerFile;
    if( ! (checkerFile = fopen("test.bin", "rb") ) )
        exit(-1);
    fread( checkerImage, sizeof( GLfloat ), 64*64, checkerFile );
    fclose( checkerFile );

    glGenTextures( 1, &g_testTextureID );
    glBindTexture( GL_TEXTURE_2D, g_testTextureID );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, 64, 64, 0,
                  GL_LUMINANCE, GL_FLOAT, checkerImage[0] );
}

//code for creating a dynamic texture is below:
glGenTextures( 1, &g_dynamicTextureID );
glBindTexture( GL_TEXTURE_2D, g_dynamicTextureID );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA,
              RENDERBUFFER_WIDTH, RENDERBUFFER_HEIGHT,
              0, GL_LUMINANCE, GL_FLOAT, 0 );

//Code for extracting the pixel value is below:
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, g_depthRenderBuffer );
// From OpenGL Red Book, these may not be needed once working
glPixelTransferf( GL_RED_SCALE,   0.30 );
glPixelTransferf( GL_BLUE_SCALE,  0.59 );
glPixelTransferf( GL_GREEN_SCALE, 0.11 );
glReadPixels( pixelX, pixelY, 1, 1, GL_LUMINANCE, GL_FLOAT, &value );



[Edited by - pilgrim9684 on June 27, 2008 12:22:01 PM]

This topic is closed to new replies.
