How to customize the output to an RGBA8 texture from a GLSL shader.

Hi everybody,

I know the title looks weird, but this is the best way I could describe it in one sentence.

So, I'm trying to implement a deferred shading algorithm. There are a few restrictions: I want to use standard RGBA8 textures during my passes. However, I want full control of the output data in those RGBA pixels. For example, in one case I would like to use the .R and .B components to store a 2-byte integer; in another case .R, .G and .B as a 3-byte float; and the .A component to store some bit flags, etc.

Please help me get full control of the content of the output pixel. How do I do that?

Any help is appreciated!

Thank you.
Ruben


Typically you pack/unpack your data into your output variables; pseudo-code example:


vec2 pack16Int( int value )   // "input" is a reserved word in GLSL, so call the parameter "value"
{
    float f = float(value);   // GLSL has no implicit int -> float conversion
    vec2 result;
    result.y = floor(f / 256.0);      // high byte
    result.x = f - 256.0 * result.y;  // low byte
    return result;
}

int unpack16Int( vec2 packedValue )   // "packed" is reserved as well
{
    return int(packedValue.x + packedValue.y * 256.0);
}
Ashaman73, thanks for the response!

I'm just a little bit confused. What will happen if I write the .x and .y components of the packed integer to gl_FragColor.x and gl_FragColor.y? My understanding of gl_FragColor is that it is a four-component floating-point vector, meant to store "color" information, holding values from 0.0 to 1.0.
But I want its components to store just one byte each, and I do not want the GPU to perform any kind of precision control. Whatever I write into it, I want to read exactly that back during the next pass. Please explain if I'm missing something.




vec2 pack16Int( int value )
{
    float f = float(value);
    vec2 result;
    result.y = floor(f / 256.0);
    result.x = f - 256.0 * result.y;
    return result;
}

void main()
{
    vec2 packedInteger = pack16Int(SOME_NUMBER);
    gl_FragColor.x = packedInteger.x;
    gl_FragColor.y = packedInteger.y;
}

I'm just a little bit confused. What will happen if I write the .x and .y components of the packed integer to gl_FragColor.x and gl_FragColor.y?

You need to normalize them first. When you know that you want to write a 16-bit int into two 8-bit color channels, adjust your packing like this:

vec2 pack16Int( int value )
{
    float f = float(value);
    vec2 result;
    result.y = floor(f / 256.0);      // high byte, 0..255
    result.x = f - 256.0 * result.y;  // low byte, 0..255
    // normalize to [0.0, 1.0] so the bytes survive the RGBA8 render target
    return result * (1.0 / 255.0); //<< new
}

int unpack16Int( vec2 packedValue )
{
    // round instead of truncating, so tiny precision errors cannot push the result down by one
    return int(floor(packedValue.x * 255.0 + packedValue.y * 256.0 * 255.0 + 0.5));
}

void main()
{
    vec2 packedInteger = pack16Int(SOME_NUMBER);
    gl_FragColor.x = packedInteger.x;
    gl_FragColor.y = packedInteger.y;
}
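
For completeness, reading the value back in the next pass might look like the sketch below; the sampler name gBuffer and the varying vTexCoord are placeholders I made up, not names from this thread:

precision highp float;       // GLSL ES: byte-exact math usually needs more than mediump

uniform sampler2D gBuffer;   // the RGBA8 texture the previous pass rendered into
varying vec2 vTexCoord;

int unpack16Int( vec2 packedValue )
{
    // round instead of truncating, so the 8-bit quantization cannot lose a unit
    return int(floor(packedValue.x * 255.0 + packedValue.y * 256.0 * 255.0 + 0.5));
}

void main()
{
    vec4 texel = texture2D(gBuffer, vTexCoord);
    int restored = unpack16Int(texel.xy);                       // the 16-bit value written in the first pass
    gl_FragColor = vec4(vec3(float(restored) / 65535.0), 1.0);  // e.g. visualize it as a grey value
}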


One thought:
Think about doing all of this with at least 16-bit floats instead of 8-bit ints; that makes life a lot easier. It is a widely supported format: for example, StarCraft II is built on 16-bit float render targets, and it runs on a lot of PCs.

You need to normalize them first. When you know that you want to write a 16-bit int into two 8-bit color channels, adjust your packing like this:

It's now very clear how to pack integer values into the components. I guess I'd be able to use regular bitwise operations to store some flags as well.

There is one more thing I need to do: what if I want to store a decimal value in 2 or 3 bytes? I can't think of an easy way of packing it without using logarithmic functions.
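
For reference, one widely used trick for exactly this (a sketch on my part, not something spelled out in the replies here) spreads a value in the [0, 1) range across three 8-bit channels by shifting with powers of 255; the value has to be pre-scaled into [0, 1) first, e.g. by dividing by its known maximum:

vec3 packFloatToRGB( float value )   // value must be in [0.0, 1.0)
{
    vec3 enc = fract(value * vec3(1.0, 255.0, 65025.0)); // 65025 = 255 * 255
    // carve off the part that the next, finer channel already carries,
    // so it is not counted twice when decoding
    enc -= enc.yzz * vec3(1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

float unpackFloatFromRGB( vec3 enc )
{
    return dot(enc, vec3(1.0, 1.0 / 255.0, 1.0 / 65025.0));
}

The same idea works with two channels (drop the .z component and use vec2(1.0, 255.0)), with correspondingly less precision.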



One thought:
Think about doing all of this with at least 16-bit floats instead of 8-bit ints; that makes life a lot easier. It is a widely supported format: for example, StarCraft II is built on 16-bit float render targets, and it runs on a lot of PCs.

Do you mean they use 4-component 16-bit float textures or 2-component ones? I'm trying to do this on a mobile platform and am not sure whether 16-bit float texture formats will be available.

Do you mean they use 4-component 16-bit float textures or 2-component ones? I'm trying to do this on a mobile platform and am not sure whether 16-bit float texture formats will be available.

They use 4x 16-bit float, but on a mobile platform... I'm not familiar with mobile platforms, but I fear you should stick to 8-bit values.
Yeah, I guess so...
I'd really appreciate it if you could help me pack float values as well.
One more question: I declared the texture to store a one-byte integer per component, but in the shader a floating-point vector is used: vec4 gl_FragColor. Is it safe regarding precision? If, for example, I want to write the value 57 and I normalize it as gl_FragColor.r = 57.0 / 255.0, then read it back during the next pass and denormalize it with .r * 255.0, will I get exactly 57 or only something close to it?

One more question: I declared the texture to store a one-byte integer per component, but in the shader a floating-point vector is used: vec4 gl_FragColor. Is it safe regarding precision? If, for example, I want to write the value 57 and I normalize it as gl_FragColor.r = 57.0 / 255.0, then read it back during the next pass and denormalize it with .r * 255.0, will I get exactly 57 or only something close to it?

To be honest, I don't know, but I would not trust it. I would be careful with bitwise encoding/decoding. You can often encode single bits in the sign of a value, or in certain value ranges (0.0 - 0.2499 = first bit, etc.). But I have often encountered problems when some kind of blending or linear texture filtering is needed.
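
As a sketch of that value-range idea (my own illustration, not code from this thread), two boolean flags can share a single normalized 8-bit channel by giving each of the four combinations its own slot and rounding on decode:

float packFlags( bool flagA, bool flagB )
{
    float code = (flagA ? 1.0 : 0.0) + (flagB ? 2.0 : 0.0); // 0, 1, 2 or 3
    return code / 3.0;                                      // 0.0, 1/3, 2/3 or 1.0
}

void unpackFlags( float encoded, out bool flagA, out bool flagB )
{
    // round to the nearest integer instead of truncating, so the small
    // quantization error of the RGBA8 target cannot flip a flag
    float code = floor(encoded * 3.0 + 0.5);
    flagA = mod(code, 2.0) >= 1.0;
    flagB = code >= 2.0;
}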
Is there any technique to debug shaders? How do I know whether the value I write to the render target is correct? How can I check whether the value I read back is correct?
