[GLSL] Store int ID in 16-bit float? [solved]



#1 theagentd   Members   -  Reputation: 539


Posted 16 December 2013 - 07:01 AM

Hello.

I have a problem I haven't quite managed to come up with a solution for. I'd like to store a primitive ID value in my G-buffer for implementing SRAA. I want to have as many unique IDs available as possible to avoid "collisions" where two touching triangles end up with the same primitive ID due to overflow, so I really want to utilize all 65536 values of the 16-bit floating-point texture channel. If I have an integer value in 0-65535, is there any way to map each value to a unique 16-bit floating-point value? The stored IDs will then be used for equality comparisons, and those comparisons can be done in the packed float format, so there's no need to convert the value back to an integer when reading the ID.

 

Although I found the intBitsToFloat() function in GLSL, I'm not sure it'll work as I expect when storing the result in a 16-bit floating-point render target. Additionally, this function is only available in GLSL 3.30 which is unavailable on Mac.

 

I don't mind an approximate solution as long as the ID retains as much precision as possible. It's especially important that consecutive IDs do not map to the same 16-bit floating-point value (for example, 1 and 2 both mapping to X), since triangles with similar IDs will most likely be close to each other on screen, and I need to be able to differentiate between them.
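
To make the comparison part concrete, the SRAA resolve would only ever do something like this (just a sketch; the sampler and variable names are made up):

#version 150

uniform sampler2D gBuffer;   // RGBA16F, with the packed primitive ID in .a

out vec4 result;

void main()
{
    ivec2 here  = ivec2(gl_FragCoord.xy);
    ivec2 right = here + ivec2(1, 0);

    // Equality on the raw stored half-float values is enough; the original
    // integer ID is never reconstructed.
    bool samePrimitive =
        texelFetch(gBuffer, here, 0).a == texelFetch(gBuffer, right, 0).a;

    result = vec4(samePrimitive ? 1.0 : 0.0);
}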




#2 Chris_F   Members   -  Reputation: 2000


Posted 16 December 2013 - 07:36 AM


Additionally, this function is only available in GLSL 3.30 which is unavailable on Mac.

 

Not so. Anyone running OS X 10.6 or newer can upgrade to Mavericks for free. OS X 10.9 supports OpenGL 4.1 and 3.3.



#3 pcmaster   Members   -  Reputation: 647


Posted 16 December 2013 - 10:04 AM

And why exactly can't you use a 16-bit UINT target, that is GL_R16UI + GL_RED_INTEGER, instead of a float one, GL_R16F?
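
Roughly like this (untested sketch, names made up):

#version 150

out uint outPrimitiveID;   // bound to the GL_R16UI / GL_RED_INTEGER color attachment

void main()
{
    // Write the primitive ID directly instead of packing it into a float;
    // wrap it into 16 bits since the target is a 16-bit integer format.
    outPrimitiveID = uint(gl_PrimitiveID) & 0xFFFFu;
}

In the resolve pass you'd read it back through a usampler2D with texelFetch and get the exact integer.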



#4 theagentd   Members   -  Reputation: 539


Posted 16 December 2013 - 11:21 AM

 


Additionally, this function is only available in GLSL 3.30 which is unavailable on Mac.

 

Not so. Anyone running OS X 10.6 or newer can upgrade to Mavericks for free. OS X 10.9 supports OpenGL 4.1 and 3.3.

 

Ah, I didn't know that. Even so, it seems like there are quite a few folks still "stuck" on OS X 10.6, in my experience at least... I guess I'll just have to test it then.

 

EDIT: I tried it out, but it didn't work at all. After looking up how half floats use their bits, I ended up with something that at least works:

pow(2.0, float(gl_PrimitiveID%40000) / 1200.0 - 15.0)

This is far from optimal of course; it doesn't use the sign bit for example, but the result is good enough. Plus, it doesn't require GLSL 3.30.
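
For reference, this is how I use it, with comments on where the constants come from (assuming the standard 1 sign + 5 exponent + 10 mantissa bit half-float layout):

// Sketch: spreads the wrapped ID logarithmically across the half-float range.
// The - 15.0 starts the exponent near the bottom of the representable range,
// and the / 1200.0 advances it by one power of two per 1200 consecutive IDs.
float packPrimitiveID(int id)
{
    return pow(2.0, float(id % 40000) / 1200.0 - 15.0);
}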

 

 

 

And why exactly can't you use a 16-bit UINT target, that is GL_R16UI + GL_RED_INTEGER, instead of a float one, GL_R16F?

Because I'm trying to pack the ID into the alpha channel of a GL_RGBA16F texture. The user will be able to choose between FXAA, in which case the alpha channel will store luminance, and SRAA, in which case it will store a primitive ID.
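
So the G-buffer shader ends up picking what goes into alpha, roughly like this (sketch only; the uniform name is made up):

#version 150

uniform bool useSRAA;        // user-selected AA mode

out vec4 gBufferOut;         // the RGBA16F G-buffer attachment

void main()
{
    vec3 color = vec3(1.0);  // stand-in for whatever the G-buffer pass actually shades

    // Alpha holds either the packed primitive ID (SRAA) or a luma value (FXAA).
    float packedID = pow(2.0, float(gl_PrimitiveID % 40000) / 1200.0 - 15.0);
    float luma     = dot(color, vec3(0.299, 0.587, 0.114));

    gBufferOut = vec4(color, useSRAA ? packedID : luma);
}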


Edited by theagentd, 16 December 2013 - 01:53 PM.




