rubenhak

Floating Point Texture. How to?



Hi Everybody,

I just want to learn how to manipulate textures in GLSL shaders and how to get full control of their components.
For example, I want to create a 32-bit-per-pixel texture and:
  1. store a 32-bit float value per pixel
  2. store a 24-bit float + an 8-bit byte value per pixel
  3. store a 16-bit float + two 8-bit byte values per pixel


I'm not sure how to specify the texture format when creating a texture, nor how to write a value from the shader.
I assume that reading the value back would be similar to writing it.

Any help is appreciated.

Thanks,
Ruben

You create two textures. One contains your original data, which you read in your shader; you output the results to the second texture, which is your render target (render to texture).

According to http://www.opengl.org/wiki/Textures, the third parameter of glTexImage2D is the internal format. According to the GL 3.0 specification, you can create a floating-point format (GL_RGBA32F).

Regarding "store 24-bit float + 8-bit byte value per pixel": there is no such thing as a 24-bit float.

16-bit floats exist; check out the GL 3.0 specification.
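
For reference, here's a minimal sketch of that setup in desktop GL 3.0+ (width and height are placeholders; error checking omitted):

GLuint tex, fbo;

/* Create the texture; the 3rd parameter is the internal format:
   GL_RGBA32F stores a 32-bit float per channel. */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Attach it to an FBO so the shader can render into it. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);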

GL_RGBA32F would be an enormous texture, which would be overkill. I understand that there is no native 24-bit float format, but I want to check with you guys what the best way would be to encode a float into 24 bits: basically, something with better precision than 16 bits but less than 32 bits :)

Let's say I choose an RGBA8 format; what would be the best way to encode floats into the .rgb channels? Can I use it just like a binary data placeholder?
I'm just afraid that the GPU will start clamping and manipulating those .r .g .b channels internally and end up corrupting my data.


Yes, that's exactly what I'm trying to achieve. How do they encode numbers into those channels?

I think what he's getting at is that they split the G-buffer into RGBA targets where the RGB portion is depth and A is the stencil, and then in the next target RG = normal x and BA = normal y, as here: http://www.guerrilla...2_rsx_dev07.pdf
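
If it helps, a rough sketch of that kind of packing in a fragment shader might look like this (my own names, not Guerrilla's actual code; assumes the depth has been normalized to [0,1)):

varying float vDepth;  // depth, normalized to [0,1)
varying float vId;     // 8-bit id, normalized to [0,1]

// Spread successive 8-bit slices of d across three channels,
// then strip the bits that were carried into the next channel.
vec3 packDepth24(float d)
{
    vec3 enc = fract(d * vec3(1.0, 255.0, 65025.0));
    enc.xy -= enc.yz / 255.0;
    return enc;
}

void main()
{
    gl_FragColor = vec4(packDepth24(vDepth), vId);
}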

For RGBA8, if I wanted to write a float value to it, the trick was to multiply:

out.x = value;
out.y = value * 256.0;        // 2^8
out.z = value * 65536.0;      // 2^16
out.w = value * 16777216.0;   // 2^24

I think.
And it only worked for values from 0.0 to 1.0.
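
If I remember right, the complete version of the trick uses 255-based factors and fract() so every channel stays inside [0,1); something like this (a sketch, function names mine):

// Pack a value in [0,1) into four 8-bit channels, and back.
vec4 packFloat(float v)
{
    vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc.xyz -= enc.yzw / 255.0;  // strip bits duplicated in the next channel
    return enc;
}

float unpackFloat(vec4 enc)
{
    // The weighted sum telescopes back to the original value.
    return dot(enc, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}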

This looks a bit clumsy, and I don't have much confidence that the reverse calculation recovers the original float value exactly.
Is there a better, bit-wise way of doing a similar thing, one that also keeps the GPU from altering the values set to the .x .y .z .w components?


Not sure what version of GLSL you are targeting, but 4.10 introduces a number of appropriate packing and unpacking functions. 4.20 goes a bit further by introducing packing and unpacking of 16-bit half floats to 32-bit uints (packHalf2x16 and unpackHalf2x16).

By using GLSL integer bit manipulation functions you can then shuffle the data around as needed.
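
For example, your case 3 (a 16-bit half float plus two bytes in 32 bits) could look roughly like this in GLSL 4.20 (function names are mine; assumes byteA and byteB are in [0, 255]):

uint packHalfPlusBytes(float value, uint byteA, uint byteB)
{
    uint h = packHalf2x16(vec2(value, 0.0));   // half of 'value' lands in the low 16 bits
    return h | (byteA << 16) | (byteB << 24);  // the two bytes fill the high 16 bits
}

void unpackHalfPlusBytes(uint bits, out float value, out uint byteA, out uint byteB)
{
    value = unpackHalf2x16(bits).x;         // decodes the low 16 bits
    byteA = bitfieldExtract(bits, 16, 8);   // bit-manipulation built-ins (GLSL 4.00+)
    byteB = bitfieldExtract(bits, 24, 8);
}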

This functionality looks interesting. Is it also supported on OpenGL ES 2.0? How about bit-wise operations?




None of these are supported on ES 2.0: its GLSL has neither the packing built-ins nor bit-wise integer operations, so you'd have to fall back on the arithmetic packing shown earlier.
