How to customize the output to an RGBA8 texture from a GLSL shader

This topic is 2046 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hi everybody,

I know the title looks weird, but this is the best way I could describe it in one sentence.

So, I'm trying to implement a deferred shading algorithm, with a few restrictions. I want to use standard RGBA8 textures during my passes. However, I want full control of the output data in those RGBA pixels. For example, in one case I would like to use the .R and .B components to store a 2-byte integer; in another case .R, .G, .B as a 3-byte float, with the .A component storing some bit flags, etc.

Please help me get full control of the content of the output pixel. How do I do that?

Any help is appreciated!

Thank you.
Ruben

[quote name='rubenhak' timestamp='1336168912' post='4937488']
Please help me to get full control of the content of output pixel. How to do that?
[/quote]
Typically you pack/unpack your data into your output variables; pseudo-code example:

[CODE]
vec2 pack16Int( int value )
{
    vec2 result;
    result.y = floor(float(value) / 256.0); // high byte
    result.x = float(value) - 256.0 * result.y; // low byte
    return result;
}

int unpack16Int( vec2 bytes )
{
    return int(bytes.x + bytes.y * 256.0);
}
[/CODE]
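The arithmetic in this pseudo-code can be sanity-checked outside the shader. Here is a quick Python model of the same pack/unpack (the function names mirror the GLSL above; this only verifies the math, not actual GPU behavior):

```python
import math

def pack16_int(value):
    """Split a 16-bit integer into (low byte, high byte), as in the GLSL above."""
    hi = math.floor(value / 256)
    lo = value - 256 * hi
    return lo, hi

def unpack16_int(lo, hi):
    """Reassemble the integer from its two bytes."""
    return int(lo + hi * 256)

# round-trip every 16-bit value
assert all(unpack16_int(*pack16_int(v)) == v for v in range(65536))
```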

Ashaman73, thanks for the response!

I'm just a little bit confused. What will happen if I write the .x and .y components of the packed integer to gl_FragColor.x and gl_FragColor.y? My understanding of gl_FragColor is that it is a four-component floating-point vector, meant to store "color" information, holding values from 0.0 to 1.0.
But I want its components to store just one byte each, and I do not want the GPU to perform any kind of precision control. Whatever I write into it, I want to read back exactly during the next pass. Please explain if I'm missing something.


[CODE]
vec2 pack16Int( int value )
{
    vec2 result;
    result.y = floor(float(value) / 256.0);
    result.x = float(value) - 256.0 * result.y;
    return result;
}

void main()
{
    vec2 packedInteger = pack16Int(SOME_NUMBER);
    gl_FragColor.x = packedInteger.x;
    gl_FragColor.y = packedInteger.y;
}
[/CODE]

[quote name='rubenhak' timestamp='1336435776' post='4938229']
I'm just a little bit confused. What will happen if I write the .x and .y components of the packed integer to gl_FragColor.x and gl_FragColor.y?
[/quote]
You need to normalize them first. When you know that you want to write a 16-bit int into two 8-bit color channels, adjust your packing like this:
[CODE]
vec2 pack16Int( int value )
{
    vec2 result;
    result.y = floor(float(value) / 256.0);
    result.x = float(value) - 256.0 * result.y;
    // normalize each byte into [0,1]
    return result * (1.0 / 255.0); //<< new
}

int unpack16Int( vec2 bytes )
{
    return int(bytes.x * 255.0 + bytes.y * 255.0 * 256.0);
}

void main()
{
    vec2 packedInteger = pack16Int(SOME_NUMBER);
    gl_FragColor.x = packedInteger.x;
    gl_FragColor.y = packedInteger.y;
}
[/CODE]
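Whether this normalized version survives storage in an 8-bit channel can also be modeled on the CPU: an RGBA8 target effectively quantizes each written value to round(v * 255) / 255. A Python sketch of that round trip (quantize8 is my stand-in for the render target, not part of the thread's code):

```python
def quantize8(v):
    """Model what an 8-bit normalized channel keeps: round to the nearest 1/255 step."""
    return round(v * 255.0) / 255.0

def pack16_norm(value):
    """Low and high byte of a 16-bit int, each normalized into [0, 1]."""
    hi = value // 256
    lo = value - 256 * hi
    return lo / 255.0, hi / 255.0

def unpack16_norm(x, y):
    """Reassemble the int; round() absorbs the tiny float error."""
    return int(round(x * 255.0 + y * 255.0 * 256.0))

for v in range(0, 65536, 101):
    x, y = (quantize8(c) for c in pack16_norm(v))
    assert unpack16_norm(x, y) == v
```

The key point this illustrates is that each packed component is an exact multiple of 1/255, so the 8-bit quantization changes nothing and the round trip is exact.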

One thought:
Consider doing all this with at least 16-bit floats instead of 8-bit ints; it makes life a lot easier. It is a widely supported format: for example, StarCraft II is built upon 16-bit float render targets, and StarCraft II runs on a lot of PCs.

[quote name='Ashaman73' timestamp='1336454053' post='4938289']
You need to normalize them first. When you know that you want to write a 16-bit int into two 8-bit color channels, adjust your packing like this:
[/quote]
It's very clear how to pack real numbers into components. I guess I'd be able to use regular bitwise operations to store some flags as well.

There is one more thing I'd need to do. What if I want to store a decimal value in 2 or 3 bytes? I can't think of an easy way of packing it without using logarithmic functions.


[quote name='Ashaman73' timestamp='1336454053' post='4938289']
One thought:
Consider doing all this with at least 16-bit floats instead of 8-bit ints; it makes life a lot easier. It is a widely supported format: for example, StarCraft II is built upon 16-bit float render targets, and StarCraft II runs on a lot of PCs.
[/quote]
Do you mean they use 4-component 16-bit float textures or 2-component ones? I'm trying to do this on a mobile platform and I'm not sure whether a 16-bit float texture format will be available.

[quote name='rubenhak' timestamp='1336461390' post='4938312']
Do you mean they use 4-component 16-bit float textures or 2-component ones? I'm trying to do this on a mobile platform and I'm not sure whether a 16-bit float texture format will be available.
[/quote]
They use 4x 16-bit float, but on a mobile platform... I'm not familiar with mobile platforms, but I fear that you should stick to 8-bit values.

One more question. I declared the texture to store a one-byte integer per component, but in the shader a floating-point vector is used: vec4 gl_FragColor. Is this safe regarding precision? If, for example, I want to write the value 57 and I normalize it: gl_FragColor.r = 57.0 / 255.0. When I read the value back during the next pass and denormalize it by doing .r * 255.0, would I get exactly 57 or something close to it?
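As a quick sanity check on the arithmetic here, the 57 example can be modeled in Python: an 8-bit normalized channel stores round(written * 255) and returns stored / 255 on read, so with a round() on the way back the value survives exactly. (This models ideal UNORM behavior and assumes no blending or filtering touches the value; whether a given GPU behaves exactly this way is a separate question.)

```python
written = 57.0 / 255.0
stored = round(written * 255.0)   # the byte the RGBA8 target keeps
read_back = stored / 255.0        # what the sampler returns
assert round(read_back * 255.0) == 57

# the same holds for every byte value, not just 57
assert all(round(round(v / 255.0 * 255.0) / 255.0 * 255.0) == v for v in range(256))
```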

[quote name='rubenhak' timestamp='1336502462' post='4938452']
When i read the value back during next pass and denormalize it by doing ".r * 255.0" would i get exactly 57 or something close to it?
[/quote]
To be honest, I don't know. I would not trust it, and I would be careful with bitwise encoding/decoding. You can often encode single bits in the sign of a value, or in certain value ranges (0.0 - 0.2499 = first bit, etc.). But I have often encountered problems when some kind of blending or linear texture filtering is needed.
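The "certain value ranges" trick can be sketched concretely. Here is a Python model that packs one boolean flag plus a value into a single channel by splitting [0, 1) into two halves (the helper names are mine, not from the thread); as noted above, this breaks as soon as blending or filtering averages neighboring texels:

```python
def encode_flag(flag, v):
    """flag=False maps v into [0.0, 0.5); flag=True maps it into [0.5, 1.0). v must be in [0, 1)."""
    return v * 0.5 + (0.5 if flag else 0.0)

def decode_flag(c):
    """The half of the range the value landed in carries the flag."""
    return c >= 0.5

def decode_value(c):
    """Undo the range shift and rescale back to [0, 1)."""
    return (c - 0.5 if c >= 0.5 else c) * 2.0

for flag in (False, True):
    for v in (0.0, 0.25, 0.999):
        c = encode_flag(flag, v)
        assert decode_flag(c) == flag
        assert abs(decode_value(c) - v) < 1e-6
```

The cost is one bit of precision on the stored value, which is why this only really works when the channel has precision to spare.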

Is there any technique to debug shaders? How do I know whether the value I write to the render target is correct? How can I check whether the value I read is correct?

