
Is my integer getting truncated?



This is a snippet from a GLSL fragment shader (Unity 3.3; GLSL 2.0, I guess).

I'm packing 4 bytes into RGBA and emulating bit shifting to get them back (it's a 32-bit depth value; on the C# side it's an int32):

[CODE]
int depth2_i = ((color_full.x * 255) * 16777216)
             + ((color_full.y * 255) * 65536)
             + ((color_full.z * 255) * 256)
             + 0;
float depth2 = depth2_i / 2147483647f;
[/CODE]



In the C# code, depth2 comes out as 0.49... after the division, as it should.
In the GLSL shader, I get a black pixel (0).

I assume that my int depth2_i gets truncated; the same happens when I declare depth2_i as a float.

Is that a GLSL limitation?

According to [url="http://www.google.com.au/search?q=glsl+spec"]the spec[/url], truncation should occur:
[quote]When constructors are used to convert a float to an int, the fractional part of the floating-point value is dropped.[/quote]
[b]However[/b], there's no reason to use an [font="Courier New"]int[/font] here. Try using:
[code]float depth2_i = floor( ... )[/code]
Also, all of your constants should probably be floats ([i]e.g. [font="Courier New"]255.0[/font] instead of [font="Courier New"]255[/font][/i]). Note that [font="Courier New"]2147483647f[/font] is not a valid constant -- if you're going to use the optional '[font="Courier New"]f[/font]' suffix, it has to come after a '[font="Courier New"].[/font]' ([i]or an '[font="Courier New"]e[/font]'[/i]).
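
Putting those fixes together, the corrected snippet would look something like this ([i]just a sketch, assuming [font="Courier New"]color_full[/font] is the sampled [font="Courier New"]vec4[/font] with each channel in the [0.0, 1.0] range[/i]):
[code]// All constants written as floats, and floor() in place of the int cast:
float depth2_i = floor( (color_full.x * 255.0) * 16777216.0
                      + (color_full.y * 255.0) * 65536.0
                      + (color_full.z * 255.0) * 256.0 );
float depth2 = depth2_i / 2147483647.0;[/code]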

Integers are pretty unreliable according to the GLSL spec, so you should stick to [font="Courier New"]float[/font]s unless you have a good reason otherwise:
[quote][b]Integers are mainly supported as a programming aid[/b]. ... there is no requirement that integers in the language map to an integer type in hardware. ... Because of their intended (limited) purpose, integers are limited to 16 bits of precision, plus a sign representation, in both the vertex and fragment languages. An OpenGL Shading Language implementation may convert integers to floats to operate on them. An implementation is allowed to use more than 16 bits of precision to manipulate integers. Hence, there is no portable wrapping behavior. [b]Shaders that overflow the 16 bits of precision may not be portable[/b].[/quote]
The bold parts show that your code is allowed to behave differently on different GPUs, because you're requiring more than 16 bits of precision from your integer value. So, as mentioned above, stick to [font="Courier New"]float[/font] and use [font="Courier New"]floor[/font] to perform the truncation.
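
If you prefer a single expression, the same float-only decode can also be written with [font="Courier New"]dot[/font] ([i]again only a sketch, with the same assumptions about [font="Courier New"]color_full[/font] as above[/i]):
[code]// dot() performs the three multiplies and the sum in one step; every operand stays a float.
float depth2 = floor( dot(color_full.xyz * 255.0, vec3(16777216.0, 65536.0, 256.0)) ) / 2147483647.0;[/code]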


