16-bit floating point format

Started by spek · 16 comments, last by ma_hty 15 years, 10 months ago
Hi, recently I needed to create a pre-calculated data buffer and store it in an OpenGL RGBA16F texture. I simply calculate my values, put them in an array of the Delphi "single" type (similar to a C++ 32-bit float), and upload it like this:

// CubeMap
glTexImage2D(  <cubeMap direction>, 0, GL_RGBA16F_ARB,// target, level, internal format
               width, height,        // dimensions
               0, GL_RGBA, GL_FLOAT, // border, format, atype
               pixelData             // data array (1D array)
              );

// 3D texture
glTexImage3D(  GL_TEXTURE_3D, 0, GL_RGBA16F_ARB,// target, level, internal format
               width, height, depth, // dimensions
               0, GL_RGBA, GL_FLOAT, // border, format, atype
               pixelData             // data array (1D array)
              );
The textures are filled with values, and seemed to be correct at first. However, a DirectX demo doing the same thing uses a special format for the 'pixelData' array: a "half" type, an unsigned short with the same layout as the 16-bit floating point format. Since I have a lot of problems with my data (only ~50% seems correct; maybe negative values are not allowed or something), I started wondering if this is necessary in OpenGL as well? Greetings, Rick
The driver does the conversion from 32-bit float to 16-bit float. If a value is out of range, I don't know what happens; perhaps the spec explains it (http://www.opengl.org/registry, look for GL_ARB_texture_float).

If you want to convert the data yourself and upload 16-bit floats, then yes, you would have to store them in unsigned shorts and use a library that handles the conversion. The OpenEXR library has some functions for this.
In addition to GL_ARB_texture_float, you also need GL_ARB_half_float_pixel
http://www.opengl.org/registry/specs/ARB/half_float_pixel.txt

so your function call becomes

glTexImage3D(  GL_TEXTURE_3D, 0, GL_RGBA16F_ARB,// target, level, internal format
               width, height, depth,            // dimensions
               0, GL_RGBA, GL_HALF_FLOAT_ARB,   // border, format, atype
               pixelData                        // data array (1D array)
              );
Sig: http://glhlib.sourceforge.net
An open source GLU replacement library, much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, FALSE, inverse_matrix);
Ok, so normally I shouldn't need that, since OpenGL can do the conversion by itself. That's good news and bad news: good that OpenGL saves me the work, bad because I was hoping my error had to do with the data format. Nevertheless, I'll give it a shot with the information you gave.

Thanks!
Rick
Perhaps you have an alignment problem? I tend to get bitten by those. Check out section 7, "Watch Your Pixel Store Alignment", on this page.
Quote:Original post by Lord Crc
Perhaps you have an alignment problem? I tend to get bitten by those. Check out section 7, "Watch Your Pixel Store Alignment", on this page.


By default, the alignment is 4 bytes. Since the data format used by Spek is GL_RGBA with GL_FLOAT, a row occupies width × 16 bytes, which is always a multiple of 4. I don't think alignment can be the problem here.
I didn't have time to check it yet, but thanks again for the tips.

It's hard to say whether the pixel data is wrong. I store spherical harmonics basis function values in a cube map (actually a 3D texture with 6 slices). If I visualize the values on a 3D mesh, I should get the funky polynomial / SH band shapes you often see in SH papers:
http://mathworld.wolfram.com/images/eps-gif/SphericalHarmonicsReIm_1000.gif

When I draw my texture on a cube (each face one slice), I can indeed see those kinds of shapes. I can't say whether they are exactly correct, or should be rotated/flipped or anything. There is one odd detail though: the polynomial shapes point into both the positive and the negative XYZ directions. I never visualized the negative-colored pixels, but I think they are present as well. And since the SH lighting only works for ~50%, it sometimes looks as if only the normals pointing at the positive parts of the cube map work properly.

I don't know... I should just try all the tips and implement the half format myself like the ATI demo did, just to be sure.

Greetings,
Rick
Spherical harmonics for lighting...

I worked on it a long time ago, but it still feels like a distant old friend. Grab some images and post them here; maybe I can help.
A distant old friend... not exactly how I would like to describe it :)

I can't upload pictures right now, but if you'd like to help, I posted a question about it a couple of days ago on the Graphics forum. It seems that all the pre-calculations are done properly, but something goes wrong in the generation of the SH coefficients, and/or in the final pass where the SH coefficients are used for ambient lighting. One possible problem could be the cube map I was asking about here.

Basically the lighting is just wrong. To simplify and exclude a lot of problems, I only render to one probe at a fixed point in my world. So if green light comes from the +Z direction, I would expect all the -Z normals to be green. But in my case ~50% of the lighting is wrong: polygons with +Z normals are green as well, for example. Or if pure white light comes from all directions, only half of the world is actually white; the other half is black.

I compiled parts of the ATI real-time global illumination demo to see if my pre-calculated SH data matches the ATI code, and it does. I can't figure out why it's not working, so maybe the data format of my texture is the problem. Anyway, check the thread on the Graphics forum (it's not on the first page anymore) if you'd like to read the complete story plus some code.

Thanks for the interest!
Rick
Quote:Original post by spek
... To simplify and exclude alot of problems, I only render to 1 probe at a fixed point in my world. ...


Is this referring to the code fragment you posted in your thread in the Graphics forum? If so, you are simplifying the algorithm to ambient lighting only; the whole scene will be (and should be) the same color.
Quote:
Is this referring to the code fragment you posted in your thread in the Graphics forum? If so, you are simplifying the algorithm to ambient lighting only; the whole scene will be (and should be) the same color.


Yep, it's used for indirect lighting. I render the scene with direct lighting to a cube map at a lot of fixed points in the world. This (tiny) cube map contains the bounced incoming light from all directions at that point in space.


To make it easier, I have faked the indirect color so far by just using white pixels for every direction (a completely white cube map == white light coming from all directions). However, only 50% of the scene is actually white; the opposite normals are black (with a smooth transition between black and white, depending on the normal). Light information is missing, or calculated wrong, for ~50% of the normals.

Here's the ATI demo (video, source + paper) that I'm talking about:

http://ati.amd.com/developer/SDK/AMD_SDK_Samples_May2007/D3D_10/Global_Illumination.zip

Thanks for helping,
Rick

This topic is closed to new replies.
