GLSL integer support - also #version

What exactly determines whether integer variables in my GLSL shader programs are executed as integers by the GPU hardware? Apparently some versions of GLSL let vertex/fragment programs contain integer variables, but GLSL keeps the values in floating point internally (which sometimes leads to strange behavior). How do I know whether my GLSL integer variables are being processed as integers in the GPU hardware?

My engine passes flag bits in a vertex variable, and the vertex shader passes this value on to the fragment shader. Sometimes I get strange results that I traced back to wackiness in the interpolation process. For example, all the vertices in the VBO being rendered have a value of 12.0000 in the flags variable, and my code tests the flags like this (until I definitely have real integers to bit-test properly!):
if (flag >= 8.0) {
  flag = flag - 8.0;
  // processing specified by bit #3 set
}
if (flag >= 4.0) {
  flag = flag - 4.0;
  // processing specified by bit #2 set
}
if (flag >= 2.0) {
  flag = flag - 2.0;
  // processing specified by bit #1 set
}
if (flag >= 1.0) {
  flag = flag - 1.0;
  // processing specified by bit #0 set
}
However, the behavior of the shader changes randomly at various pixels. That problem vanished when I changed the 8.0 in the first test to 7.9999! Apparently, even when all three vertex shader invocations output 8.0 to the fragment shader, the fragment shader sometimes receives values slightly less than 8.0. Yikes! I can't wait to have real integers in my GPU shaders! I assume integers can be passed from vertex shaders to fragment shaders, right? Presumably integers will not exhibit strange (and erroneous) interpolation artifacts like those illustrated in the code above.

Separately, while trying to figure out the above, I noticed that GLSL generates a compile-time error when I put a #version directive in both my vertex and fragment shaders, even when the version number is the same in both (#version 110, #version 120, #version 130). When the directive is in only one shader, it works okay. Isn't the #version directive supposed to be supported in both shaders?
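For reference, here is roughly what I'm hoping to be able to write once real integers work end to end. This is only a sketch assuming #version 130 level support; the names I use here (mvp, position, vertexFlags, flags) are placeholders of mine, and the integer attribute would have to be fed with glVertexAttribIPointer rather than glVertexAttribPointer:

// ---- vertex shader (sketch, assumes #version 130) ----
#version 130
uniform mat4 mvp;          // placeholder modelview-projection matrix
in vec3 position;
in int  vertexFlags;       // true integer attribute (glVertexAttribIPointer)
flat out int flags;        // "flat" = no interpolation between vertices

void main()
{
    flags = vertexFlags;
    gl_Position = mvp * vec4(position, 1.0);
}

// ---- fragment shader (sketch) ----
#version 130
flat in int flags;
out vec4 fragColor;

void main()
{
    vec4 color = vec4(0.0);
    if ((flags & 8) != 0) {    // a real bitwise test on bit #3
        color.r = 1.0;
    }
    fragColor = color;
}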

I believe that interpolation is done with floats.
Integers are, however, supported within the shader itself, for example if you want to have a for loop with a counter (int i).
SM 3 GPUs definitely support integer variables. I am not sure about SM 2 GPUs, but you can still have a for loop on an SM 2 GPU and the compiler will deal with it properly.

What you can do is snap the value back to an integer:
float value = floor(flag + 0.4);
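You can then read individual bits off the snapped value, for example (untested sketch, the mod() tests are just one way to do it):

bool bit3 = mod(value, 16.0) >= 8.0;    // is bit #3 set?
bool bit2 = mod(value,  8.0) >= 4.0;    // is bit #2 set?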

I use #version 110 in all my shaders and it works fine. What's the error message?

FWIW, the way I use bits/bytes for flags in GLSL is to upload the data as the unsigned-byte colour component of interleaved vertex data. That way the data transfer is nice and fast.

I can then align 0x00 with 0, 0x40 with 0.25, 0x80 with 0.5, and so on...

I generally test against values slightly below or above those thresholds, doing greater-than or less-than checks on the colour values in the shader to drive the logic...

It seems to work and is not too slow...
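In the shader it ends up looking something like this (just a sketch; it assumes the flag byte rides in the alpha channel of the per-vertex colour, and the thresholds sit deliberately a little below the exact values):

float f = gl_Color.a;      // 0x00..0xFF arrives as 0.0..1.0

if (f > 0.45) {            // "0x80 bit set" -- threshold just below 0.5
    f -= 0.5;
    // ... 0x80 logic ...
}
if (f > 0.2) {             // "0x40 bit set" -- threshold just below 0.25
    // ... 0x40 logic ...
}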

