Is my integer getting truncated?



This is a code snippet from a GLSL fragment shader (Unity 3.3; GLSL 2.0, I guess).

I'm passing 4 bytes encoded into RGBA and emulating bit shifting to get them back (32-bit depth; on the C# side it is an Int32 value):

int depth2_i = ((color_full.x * 255) * 16777216) + ((color_full.y * 255) * 65536) + ((color_full.z * 255) * 256) + 0;
float depth2 = depth2_i/2147483647f;

Inside the C# code, depth2 comes out as 0.49... after the division, as it should.
Inside the GLSL shader, I get a black pixel (0).

I assume that my int depth2_i gets truncated; the same happens when depth2_i is a float.

Is that a GLSL limitation?

According to the spec, truncation should occur:

"When constructors are used to convert a float to an int, the fractional part of the floating-point value is dropped."

However, there's no reason to use an int here. Try using:

float depth2_i = floor( ... )

Also, all of your constants should probably be floats (e.g. 255.0 instead of 255), and 2147483647f is not a valid constant -- if you're going to use the optional 'f' suffix, it has to come after a '.' (or an 'e').

Integers are pretty unreliable according to the GLSL spec, so you should just stick to using floats unless you have a good reason otherwise:

"Integers are mainly supported as a programming aid. ... there is no requirement that integers in the language map to an integer type in hardware. ... Because of their intended (limited) purpose, integers are limited to 16 bits of precision, plus a sign representation, in both the vertex and fragment languages. An OpenGL Shading Language implementation may convert integers to floats to operate on them. An implementation is allowed to use more than 16 bits of precision to manipulate integers. Hence, there is no portable wrapping behavior. Shaders that overflow the 16 bits of precision may not be portable."

In short, your code is allowed to behave differently on different GPUs, because you're requiring more than 16 bits of precision from your integer value. So, as mentioned above, stick to float and use floor to perform the truncation.
