
How to customize the output to an RGBA8 texture from a GLSL shader



#1 rubenhak   Members   -  Reputation: 154

Posted 04 May 2012 - 04:01 PM

Hi everybody,

I know the title looks weird, but this is the best way I could describe it in one sentence.

So, I'm trying to implement a deferred shading algorithm. There are a few restrictions: I want to use standard RGBA8 textures during my passes, but I also want full control of the output data in those RGBA pixels. For example, in one case I would like to use the .R and .B components to store a 2-byte integer; in another case, to use .R, .G and .B as a 3-byte float and the .A component to store some bit flags, and so on.

Please help me get full control of the content of the output pixels. How can I do that?

Any help is appreciated!

Thank you.
Ruben

#2 Ashaman73   Crossbones+   -  Reputation: 7789

Posted 06 May 2012 - 11:55 PM

Typically you pack/unpack your data into your output variables; pseudo-code example:

vec2 pack16Int( int value )
{
  float v = float(value);
  vec2 result;
  result.y = floor(v / 256.0);      // high byte
  result.x = v - 256.0 * result.y;  // low byte
  return result;
}
int unpack16Int( vec2 bytes )
{
  return int(bytes.x + bytes.y * 256.0);
}


#3 rubenhak   Members   -  Reputation: 154

Posted 07 May 2012 - 06:09 PM

Ashaman73, thanks for the response!

I'm just a little bit confused. What will happen if I write the .x and .y components of the packed integer to gl_FragColor.x and gl_FragColor.y? My understanding of gl_FragColor is that it is a four-component floating-point vector meant to store "color" information, holding values from 0.0 to 1.0.
But I want its components to store just one byte each, and I do not want the GPU to perform any kind of precision conversion. Whatever I write into it, I want to read back exactly during the next pass. Please explain if I'm missing something.



vec2 pack16Int( int value )
{
  float v = float(value);
  vec2 result;
  result.y = floor(v / 256.0);      // high byte
  result.x = v - 256.0 * result.y;  // low byte
  return result;
}
void main()
{
     vec2 packedInteger = pack16Int(SOME_NUMBER);
     gl_FragColor.x = packedInteger.x;
     gl_FragColor.y = packedInteger.y;
}


#4 Ashaman73   Crossbones+   -  Reputation: 7789

Posted 07 May 2012 - 11:14 PM

I'm just a little bit confused. What will happen if I write the .x and .y components of the packed integer to gl_FragColor.x and gl_FragColor.y?

You need to normalize them first. When you know that you want to write a 16-bit int into two 8-bit color channels, adjust your packing like this:
vec2 pack16Int( int value )
{
  float v = float(value);
  vec2 result;
  result.y = floor(v / 256.0);      // high byte
  result.x = v - 256.0 * result.y;  // low byte
  // normalize each byte into the 0.0 - 1.0 range of an 8-bit channel
  return result * vec2(1.0 / 255.0); //<< new
}
int unpack16Int( vec2 bytes )
{
  // denormalize each byte back to 0..255 and recombine; the +0.5 rounds before the int truncation
  return int(bytes.x * 255.0 + bytes.y * 255.0 * 256.0 + 0.5);
}

void main()
{
     vec2 packedInteger = pack16Int(SOME_NUMBER);
     gl_FragColor.x = packedInteger.x;
     gl_FragColor.y = packedInteger.y;
}

One thought:
Think about doing all of this with at least 16-bit floats instead of 8-bit ints; that makes life a lot easier. It is a widely supported format: for example, StarCraft II is built on 16-bit float render targets, and it runs on a lot of PCs.

#5 rubenhak   Members   -  Reputation: 154

Posted 08 May 2012 - 01:16 AM

You need to normalize them first. When you know that you want to write a 16-bit int into two 8-bit color channels, adjust your packing like this:

It's very clear now how to pack integer values into the components. I guess I'd be able to use regular bitwise operations to store some flags as well.

There is one more thing I'd need to do. What if I want to store a decimal (fractional) value in 2 or 3 bytes? I can't think of an easy way of packing it without using logarithmic functions.
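
The only thing I can come up with, assuming the value is already normalized to the 0.0 - 1.0 range, is to peel off a second byte with fract(). Is a sketch like this on the right track?

vec2 packFloatTo2Bytes( float v )         // v assumed to be in 0.0 - 1.0
{
  vec2 enc = vec2(v, fract(v * 255.0));   // coarse value, finer remainder
  enc.x -= enc.y * (1.0 / 255.0);         // keep the part carried by .y out of .x
  return enc;                             // both components are already 0.0 - 1.0
}

float unpackFloatFrom2Bytes( vec2 enc )
{
  return enc.x + enc.y * (1.0 / 255.0);
}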


One thought:
Think about doing all of this with at least 16-bit floats instead of 8-bit ints; that makes life a lot easier. It is a widely supported format: for example, StarCraft II is built on 16-bit float render targets, and it runs on a lot of PCs.

Do you mean they use 4-component 16-bit float textures or 2-component ones? I'm trying to do this on a mobile platform and am not sure whether a 16-bit float texture format will be available.

#6 Ashaman73   Crossbones+   -  Reputation: 7789

Posted 08 May 2012 - 02:31 AM

Do you mean they use 4-component 16-bit float textures or 2-component ones? I'm trying to do this on a mobile platform and am not sure whether a 16-bit float texture format will be available.

They use 4x 16-bit float, but on a mobile platform... I'm not familiar with mobile platforms, but I fear that you will have to stick to 8-bit values.

#7 rubenhak   Members   -  Reputation: 154

Posted 08 May 2012 - 11:03 AM

Yeah, I guess so...
I'd really appreciate it if you could help me pack float values as well.

#8 rubenhak   Members   -  Reputation: 154

Posted 08 May 2012 - 12:41 PM

One more question. I declared the texture to store a one-byte integer per component, but in the shader a floating-point vector is used: vec4 gl_FragColor. Is that safe regarding precision? If, for example, I want to write the value 57 and I normalize it as gl_FragColor.r = 57.0 / 255.0, then read the value back during the next pass and denormalize it with ".r * 255.0", would I get exactly 57 or only something close to it?

#9 Ashaman73   Crossbones+   -  Reputation: 7789

Posted 08 May 2012 - 11:12 PM

One more question. I declared the texture to store a one-byte integer per component, but in the shader a floating-point vector is used: vec4 gl_FragColor. Is that safe regarding precision? If, for example, I want to write the value 57 and I normalize it as gl_FragColor.r = 57.0 / 255.0, then read the value back during the next pass and denormalize it with ".r * 255.0", would I get exactly 57 or only something close to it?

To be honest, I don't know, but I would not trust it. I would also be careful with bitwise encoding/decoding. You can often encode single bits in the sign of a value, or in certain value ranges (0.0 - 0.2499 = first bit, etc.), but I have often encountered problems when some kind of blending or linear texture filtering is needed.
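
As a rough illustration of the value-range idea (just a sketch with arbitrary thresholds, two flags in one 8-bit channel):

float encodeFlags( bool a, bool b )
{
  // four ranges: 0.00-0.24 = (0,0), 0.25-0.49 = (1,0), 0.50-0.74 = (0,1), 0.75-1.00 = (1,1)
  return (float(a) + 2.0 * float(b)) * 0.25 + 0.125;  // centre of each range, robust to 8-bit rounding
}

void decodeFlags( float v, out bool a, out bool b )
{
  float n = floor(v * 4.0);   // 0, 1, 2 or 3
  a = mod(n, 2.0) >= 1.0;
  b = n >= 2.0;
}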

#10 rubenhak   Members   -  Reputation: 154

Posted 09 May 2012 - 01:16 AM

Is there any technique for debugging shaders? How do I know if the value I write to the render target is correct? How can I check whether the value I read back is correct?
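
So far the only idea I have is to do the whole pack / quantize / unpack round trip inside one shader and output a pass/fail colour. A rough sketch, assuming the pack16Int / unpack16Int functions from above are available:

void main()
{
  int original = 513;                            // value to test
  vec2 stored = pack16Int(original);             // what the first pass would write
  stored = floor(stored * 255.0 + 0.5) / 255.0;  // simulate 8-bit quantization of the target
  int restored = unpack16Int(stored);
  // green = the round trip returned the original value, red = it did not
  gl_FragColor = (restored == original) ? vec4(0.0, 1.0, 0.0, 1.0)
                                        : vec4(1.0, 0.0, 0.0, 1.0);
}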



