Well, I'm using the code I showed you, slightly modified because it had bugs. Since I can define a 'vector' that represents values from minimum to maximum (0..1), my function works fine.

If you're still interested in using floats as texture data directly, i.e. lying to GL that your block of memory containing floats is RGBA texture data and reconstructing the floats in the fragment shader, I've corrected the function I posted above so it will convert a float read as texture data into the original float. I've restricted it to use only language constructs that are available in the version of GLSL associated with ES 2.0 and wrapped it in a little program so you can play around with it.

```cpp
#include <stdio.h>
#include <stdint.h>
#include <math.h>
#include <glm/glm.hpp>

static const float TwoExp23 = exp2(23.0f);

float tex2float(glm::vec4 tex)
{
    tex *= 255.0f; // Scale the normalized texture values back up so we can more easily deal with bits.

    // The exponent is the 7 LSB of a followed by the 1 MSB of b.
    int exp = int((tex.a > 127.0f ? tex.a - 128.0f : tex.a) * 2.0f + (tex.b > 127.0f ? 1.0f : 0.0f));
    if (exp == 0)
        // The exponent is zero for zero and for very small numbers (< 1 * 2^-126), so treat all of these as zero.
        return 0.0f;
    if (exp == 255)
        // Exponent == 255 means + or - infinity or NaN. We deal with +/- infinity here, but you can probably ignore it.
        return (tex.a > 127.0f ? -1.0f : 1.0f) / 0.0f;

    float sig = tex.b > 127.0f ? tex.b - 128.0f : tex.b; // The significand is the 7 LSB of b...
    sig = sig * 65536.0f + tex.g * 256.0f + tex.r;       // ...followed by g and r.
    sig = 1.0f + sig / TwoExp23;                         // Scale to < 1 and add the implied 1 at the beginning.
    if (tex.a > 127.0f) // If the sign bit is set, negate the significand.
        sig = -sig;
    return sig * exp2(exp - 127.0f); // Multiply by 2^exponent (with the exponent bias removed).
}

int main(int argc, char **argv)
{
    float f1 = -1234.5678f;
    uint8_t *bytes = reinterpret_cast<uint8_t*>(&f1);
    glm::vec4 tex = glm::vec4(bytes[0], bytes[1], bytes[2], bytes[3]);
    tex /= 255.0f; // Texture values are normalized to 0..1, so scale the bytes down to fit.

    float f2 = tex2float(tex);
    printf("f1 = %f\n", f1);
    printf("f2 = %f\n", f2);
    return 0;
}
```

Having said that, you probably ought to check the precision of the texture data (using glGetIntegerv with GL_x_BITS, where x is RED, GREEN, BLUE or ALPHA) to make sure all the channels are 8 bits. If they aren't, you won't be able to use floats directly and will have to do some sort of conversion after all.

Additionally, floats in fragment shaders might not have the full precision that they do on desktop hardware - they are allowed to have as few as 10 binary digits (which translates to just over 3 decimal digits). So don't be surprised if the reconstructed floats don't match the input ones exactly.

You can find the OpenGL ES documentation describing all this at the Khronos site.