Framebuffer object pixel resolution


I have Kevin Harris's source code (ogl_frame_buffer_object.cpp) that I found through a web search. I am attempting to modify it to render a grayscale image (a checkerboard of gray squares) to a framebuffer object, in order to demonstrate that I can extract pixel resolutions greater than 8 bits (the maximum color-component resolution of a PC video card's displayable output) by rendering to a framebuffer object instead. However, my calls to glReadPixels still return only 8-bit resolution even though I am reading from the framebuffer object.

I verified that the image data I supply to my texture with glTexImage2D is normalized floating-point data in a single-precision array, and the values in that array have a step size of 1/16384 (i.e. 14-bit resolution). My texture looks right on the display. However, glReadPixels does not return the same 14-bit resolution that I provide for a given pixel. I call glBindFramebufferEXT() with the arguments GL_FRAMEBUFFER_EXT and g_frameBuffer just before I call glReadPixels.

What is the key to preventing the 8-bit color-component limitation of the graphics display from affecting my glReadPixels calls? Thanks, pilgrim
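The readback I am describing boils down to something like the following sketch (the coordinates, region size and destination array here are just placeholders; g_frameBuffer is the FBO handle mentioned above):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_frameBuffer);

// Requesting GL_FLOAT here; asking for GL_UNSIGNED_BYTE would quantize
// every component to 8 bits regardless of how the FBO was set up.
int pixelX = 0, pixelY = 0;   // placeholder coordinates
GLfloat pixel[4];
glReadPixels(pixelX, pixelY, 1, 1, GL_RGBA, GL_FLOAT, pixel);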

Thanks for the reply. It is good news that there is a solution. However, I have yet to find direction from any source, on the web or elsewhere, on how to accomplish this. My guess is that I am missing a simple instruction. The only call I see in the code that seems to set up resolution is a call to glRenderbufferStorageEXT with the arguments GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, RENDERBUFFER_WIDTH and RENDERBUFFER_HEIGHT. I admit I don't understand why that instruction appears in this code, setting up a depth buffer in such detail, while there is no comparable setup of the color buffer, which is where I imagine the resolution issue is rooted. If you know of code on the web that demonstrates reading high-resolution data from a framebuffer object, or have some insight into what specific instruction might be missing, that would sure help me a great deal.

pilgrim

GL_DEPTH_COMPONENT24 is for the depth buffer; since it is common for GPUs to support 24-bit depth, he is using it so that the FBO creation succeeds.

I think you are interested in the color buffer precision. Typically, graphics cards support GL_RGBA8, but other formats, such as GL_RGBA16, may be supported as well.
Here is an example that sets up a GL_RGBA8 FBO:


//RGBA8 2D color texture, 24-bit depth renderbuffer, 256x256

glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);

//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
    case GL_FRAMEBUFFER_COMPLETE_EXT:
        cout << "good";
        break;
    default:
        HANDLE_THE_ERROR;
        break;
}

glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);

glColor3f(1.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex2f(0.0, 0.0);
glVertex2f(256.0, 0.0);
glVertex2f(256.0 , 256.0);
glEnd();

GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels along the bottom row inside the triangle should read back yellow: B=0, G=255, R=255
//pixel 4, the first pixel of the second row, lies outside the triangle and should be black




You might even want to try a floating-point format like GL_RGBA32F_ARB (GL_ARB_texture_float).
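For instance, the glTexImage2D call above could be changed to allocate a float color buffer, and the readback done in floats. This is only a sketch, assuming the driver accepts the format; the glCheckFramebufferStatusEXT check still applies:

//32-bit float per component color attachment (needs GL_ARB_texture_float)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, 256, 256, 0, GL_RGBA, GL_FLOAT, NULL);

//...attach to the FBO and render as before, then read back as floats:
GLfloat fpixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_RGBA, GL_FLOAT, fpixels);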

http://www.opengl.org has the GL specification and
http://www.opengl.org/registry has extension info like on GL_ARB_texture_float

Thanks for the reply. I will invest some time reviewing the code you attached to see if I can glean some insight into how to address my particular problem. In the meantime, you mentioned the crux of my issue in your response: typical graphics cards only handle 8-bit color. I am processing the image for a purpose other than display. I need 14-bit resolution, and the graphics card's component resolution is the weak point in using PC graphics cards for this effort. However, FBO technology promises to decouple us from the graphics card limitations and yet my experience so far is that I cannot convince my FBO to provide 14-bit resolution when I call glReadPixels.

At first glance your code example seems to be written around 8-bit color components, just what I am trying to avoid. However, I admit to not having studied it thoroughly, so I will invest more time reviewing it. Thanks again for your time. If you or anyone can show me code that actually renders a 14- or 16-bit resolution texture to an FBO and can then read back the same resolution after rendering the image, I would sure like to see that snippet of code. I have yet to find it on the web; everything is byte-oriented or jumbled up with other concepts that make the code too complex to identify the key instructions that affect my resolution issue.
pilgrim

I reviewed the code provided by V-man. Thanks for the code, but it deals with 8-bit color depth, and I need 14-bit depth for luminance values when I read them back using glReadPixels. I am still in need of a way to convince OpenGL to provide better than 8-bit resolution on my FBO so that I can read that resolution back.

I guess I wasn't clear.
Every graphics card is guaranteed to support GL_RGBA8. Notice that this is 32-bit aligned.
Some formats, such as GL_RGB8, are very likely not supported directly; they get padded out to GL_RGBA8, where A is always 255.

There is no 14-bit-per-component format, since it doesn't align well, but GL_RGBA16 does exist.
Modern cards support render-to-texture with practically every format in the GL spec. nVidia has documentation on their site, and I guess AMD does as well.
In glTexImage2D, you need to change the internal format to whatever you like, for example GL_RGBA16.
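As a rough sketch, reusing the sizes from the earlier example:

//16 normalized bits per component
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, 256, 256, 0, GL_RGBA, GL_UNSIGNED_SHORT, NULL);

//...attach to the FBO and render, then read the 16-bit values back directly:
GLushort pixels16[4*4*4];
glReadPixels(0, 0, 4, 4, GL_RGBA, GL_UNSIGNED_SHORT, pixels16);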

Quote:
Original post by pilgrim9684
However, FBO technology promises to decouple us from the graphics card limitations and yet my experience so far is that I cannot convince my FBO to provide 14-bit resolution when I call glReadPixels.


It doesn't decouple you from the hardware.
Sometimes a format is not supported, and you get back something other than GL_FRAMEBUFFER_COMPLETE_EXT.

Quote:
Original post by pilgrim9684
Thanks for the reply. I will invest some time reviewing the code you attached to see if I can glean some insight into how to address my particular problem. In the meantime, you mentioned the crux of my issue in your response: typical graphics cards only handle 8-bit color. I am processing the image for a purpose other than display. I need 14-bit resolution, and the graphics card's component resolution is the weak point in using PC graphics cards for this effort. However, FBO technology promises to decouple us from the graphics card limitations and yet my experience so far is that I cannot convince my FBO to provide 14-bit resolution when I call glReadPixels.


You're wrong there: an FBO is just an abstraction of what the graphics card can do. Specifically, an FBO is just an offscreen render target, but normally you can create FBOs with more precision than the displayable framebuffer supports.

You need to create a 16-bit floating-point (half) texture and attach it to the FBO as the render target. You can do it with:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

GL_RGBA16F_ARB is provided by the ARB_texture_float extension. You can change 16 to 32 to get a "normal" 32-bit float.

And remember that glReadPixels() also needs a type for the returned data, which can be GL_HALF_FLOAT_ARB (from the ARB_half_float_pixel extension) or GL_FLOAT.
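Putting the two together, a rough sketch (reusing color_tex and the 256x256 size from the earlier example; fb is assumed to be bound already):

//Half-float color attachment (GL_ARB_texture_float)
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

//...render, then read back as 32-bit floats (plain GL_FLOAT needs no extra extension):
GLfloat pixel[4];
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, pixel);

One caveat: a half float stores an 11-bit significand, so near 1.0 the representable steps are roughly 1/2048. If you need a uniform 1/16384 step across the whole [0, 1] range, GL_RGBA16 or GL_RGBA32F_ARB is the safer pick.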

My plea for sample source code still stands. The last two replies offer tidbits of technical information (thank you for that), Huntsman in particular providing specifics for the problem I am having. However, pixels that should have a value such as 1/16384 or 3/16384 (based on the texel value mapped to that pixel) return a value of zero from glReadPixels. It is not until the values cross the threshold of 1/256 that a nonzero value is returned.

The bottom line is that glReadPixels still loses the resolution that was contained in the original texture. I am using Huntsman's glTexImage2D call. The example provided uses GL_RGBA; I am using floating-point luminance values, so I don't think that parameter is correct for what I am doing. Do you agree? Have any of you actually read back pixels from a buffer and verified that they retained a resolution of 14 bits or greater? You may have to produce a histogram from an area of pixels read from the buffer to be sure that the values span the dynamic range of the possible bit combinations and are not grouped in a way that reveals a lower resolution.
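A simple version of that check might look like this (the block origin and size are placeholders, and it only examines the red channel):

//Needs <cstdio> and <cmath>. If everything has been squeezed through an
//8-bit format, every value read back sits on the 1/255 grid; genuine
//14-bit data such as 3/16384 will not.
GLfloat block[64*64*4];
glReadPixels(0, 0, 64, 64, GL_RGBA, GL_FLOAT, block);

int snapped = 0;
for (int i = 0; i < 64*64*4; i += 4)
{
    GLfloat scaled = block[i] * 255.0f;
    if (fabsf(scaled - floorf(scaled + 0.5f)) < 1e-4f)
        ++snapped;
}
printf("%d of %d red samples lie exactly on the 8-bit grid\n", snapped, 64*64);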

Once again, a simple source example that has been proven to render and extract values of 14-bit resolution or greater would sure be appreciated. Pilgrim

The texture I am using contains a 16x16 checkerboard image, each square made up of 16x16 texels of the same value. The texels are stored as 64x64 IEEE single-precision floating-point grayscale values read from a file. I provide pieces of the code below.

void loadTexture( void )
{
    FILE *checkerFile;
    if( !(checkerFile = fopen("test.bin", "rb")) ) exit(-1);
    fread( checkerImage, sizeof( GLfloat ), 64*64, checkerFile );
    fclose( checkerFile );

    glGenTextures( 1, &g_testTextureID );
    glBindTexture( GL_TEXTURE_2D, g_testTextureID );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, 64, 64, 0,
                  GL_LUMINANCE, GL_FLOAT, checkerImage[0] );
}

//code for creating a dynamic texture is below:

glGenTextures( 1, &g_dynamicTextureID );
glBindTexture( GL_TEXTURE_2D, g_dynamicTextureID );

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA,
RENDERBUFFER_WIDTH, RENDERBUFFER_HEIGHT,
0, GL_LUMINANCE, GL_FLOAT, 0 );


//Code for extracting the pixel value is below:

glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, g_depthRenderBuffer );

// From OpenGL Red Book, these may not be needed once working
glPixelTransferf(GL_RED_SCALE, 0.30);
glPixelTransferf(GL_BLUE_SCALE, 0.59);
glPixelTransferf(GL_GREEN_SCALE, 0.11);


glReadPixels(pixelX, pixelY, 1, 1, GL_LUMINANCE, GL_FLOAT, &value );




[Edited by - pilgrim9684 on June 27, 2008 12:22:01 PM]

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA,
RENDERBUFFER_WIDTH, RENDERBUFFER_HEIGHT,
0, GL_LUMINANCE, GL_FLOAT, 0 );


The 3rd parameter is the internal format, which determines the precision. You set it to GL_RGBA, which means you don't care about the precision and just want something with red, green, blue and alpha.
I suggest that you be more specific, for example GL_RGBA32F_ARB.
That means RGBA with each component stored as a 32-bit float.
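Applied to the call quoted above, only the third argument would change:

//GL_RGBA32F_ARB = four 32-bit float components (GL_ARB_texture_float)
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F_ARB,
              RENDERBUFFER_WIDTH, RENDERBUFFER_HEIGHT,
              0, GL_LUMINANCE, GL_FLOAT, 0 );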

V-man, that parameter value causes an access violation when I substitute the identifier you provided for the one I was using. The exception occurs when I attempt to read the pixel value.

What else must change to be compatible with this single change you identified?

I made this change also ...


glRenderbufferStorageEXT( GL_RENDERBUFFER_EXT, GL_RGBA32F_ARB, RENDERBUFFER_WIDTH, RENDERBUFFER_HEIGHT );




[Edited by - pilgrim9684 on June 27, 2008 1:06:12 PM]

You get an access violation if it writes to an invalid memory location. I can't say much more than that from the information you have provided.
I also don't understand why you create those two textures (g_testTextureID and g_dynamicTextureID).
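One thing worth double-checking (just a guess from what has been posted): the destination buffer passed to glReadPixels has to be big enough for the format and type you request. A GL_RGBA / GL_FLOAT readback writes 16 bytes per pixel; width and height below are placeholders for the region being read:

//width * height pixels, 4 components each, sizeof(GLfloat) bytes per component
GLfloat *data = new GLfloat[width * height * 4];
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, data);
// ...use the values...
delete [] data;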

I didn't sift all through this posting, but

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA,
RENDERBUFFER_WIDTH, RENDERBUFFER_HEIGHT,
0, GL_LUMINANCE, GL_FLOAT, 0 );

simply doesn't make sense: you choose an internal format of GL_RGBA and then describe the pixel data you are supplying as a single channel, GL_LUMINANCE.

so try this

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8,
RENDERBUFFER_WIDTH, RENDERBUFFER_HEIGHT,
0, GL_RGBA, GL_FLOAT, 0 );

and when you read the pixels back you just disregard the other channels if you want...

As of now I am confused about what you are trying to achieve. If you want more resolution, use what V-man suggested or try the floating-point texture formats that have already been suggested.

MARS_999's suggestion does affect the program: applying the change means the program no longer crashes. However, the resolution read back using glReadPixels is still no better than 1/256, even though I provided data with 1/16384 resolution in the original texture.

That is a driver bug, and you should report it to whichever company makes your graphics card. Make a small test case, and if you have an NVIDIA card, send it to devrel@nvidia.com.
Tell them your OS, graphics card, driver version, etc.

I was hoping for a magic incantation or something, thinking perhaps I was treading water that someone here had already been in and knew just how to get back to shore. But you are probably right: bitten by a driver bug again. Thanks for your time in reading and considering what I was fighting with.

