I have a question about the "type" parameter of glTexImage2D. My original texture data is "short" and contains negative values, so I used GL_SHORT first. But the negative values seem to be clamped to zero in the shader. My workaround is: first map the "short" data to "unsigned short", then map the values back inside the shader. This works.
My question is:
If GL_SHORT can only represent values in [0, 32767] once the negative values are clamped, wouldn't GL_UNSIGNED_SHORT with its [0, 65535] range be the better choice in every case? The same applies not just to GL_SHORT but also to GL_INT.
Is there some way I can pass negative values into the shader without mapping them to positive ones?
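For reference, the bias workaround can be sketched in C like this (a minimal sketch; the function names are just illustrative, and the shader side would apply the same subtraction after reading the value):

```c
#include <assert.h>

/* CPU side: bias signed shorts into the unsigned range before upload. */
unsigned short encode_short(short v) {
    return (unsigned short)((int)v + 32768);  /* [-32768, 32767] -> [0, 65535] */
}

/* Shader-side inverse (shown in C here): undo the bias after sampling. */
short decode_short(unsigned short u) {
    return (short)((int)u - 32768);
}
```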
There are some ambiguities in your post that should be cleared up. First, the GL_SHORT type can hold negative values; it is not restricted to positive ones. However, when an integer color value is converted to a clamped color of type GLclampf (the normal [0, 1] range for low dynamic range color), integer 0 maps to 0.0 and the maximum possible integer value maps to 1.0. So a GL_SHORT can only represent low dynamic range color values in the range [0, 32767]. It can store negative values, but they are all mapped to color value 0.
So don't confuse the clamping of color values with GL_SHORT's ability to store negative values.
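To make the conversion concrete, here is a small C sketch of the mapping described above (illustrative only, not the literal spec wording): the maximum short value normalizes to 1.0, and anything negative falls below 0.0 and is then clamped.

```c
#include <assert.h>

/* Sketch of how a GL_SHORT value becomes a clamped color component:
   normalize so 32767 -> 1.0, then clamp the result to [0, 1]. */
float short_to_clamped_color(short v) {
    float f = (float)v / 32767.0f;  /* negatives land below 0.0 */
    if (f < 0.0f) f = 0.0f;         /* all negative inputs clamp to 0.0 */
    if (f > 1.0f) f = 1.0f;
    return f;
}
```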
The key here is to store color values in a format that is not clamped. I am not sure at the moment, but I think it is enough to change the internal format to either an integer format or a floating point format. For example, the internal format GL_RGBA32F should give you a non-clamped RGBA floating point texture, and GL_RGBA16I a 16-bit signed integer texture.
For GL_SHORT, although it can store negative values, they become meaningless if they are clamped, right? I just wonder whether there is a reason for GL_SHORT's existence, or whether it is simply left there for future support.
GL_RGBA32F may cost too much memory for me. I tried GL_RGB16I, but it outputs nothing. Is there something wrong with my usage? I use it like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16I, _dim, _dim, 0, GL_RGB, GL_SHORT, _vel);
The values are clamped if they are converted to a clamped color format, so you have to go a route that doesn't clamp the color values. GL_SHORT exists simply so that you can pass 16-bit signed values to various functions.
If you use an integer format or a floating point format, you have to treat the values as such as well. For example, a floating point color value written as the output from the fragment shader will be clamped to [0, 1], and an integer format may have to be normalized to the [0, 1] range yourself (divide by 2^15), since integer textures are read as raw integers in a shader, not as clamped low dynamic range color values.
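One thing worth checking in your glTexImage2D call: an integer internal format like GL_RGB16I must be paired with an *_INTEGER pixel transfer format, i.e. GL_RGB_INTEGER rather than GL_RGB, and sampled through an isampler2D in the shader. A minimal fragment shader sketch under those assumptions (the uniform and variable names here are illustrative, not from your code):

```glsl
#version 330 core

// Integer textures must be sampled with an integer sampler type.
uniform isampler2D velTex;

in vec2 uv;
out vec4 fragColor;

void main() {
    ivec3 v = texture(velTex, uv).rgb;       // raw signed integers, no clamping
    vec3 vel = vec3(v) / 32768.0;            // normalize by 2^15 to roughly [-1, 1]
    fragColor = vec4(vel * 0.5 + 0.5, 1.0);  // remap to [0, 1] for display
}
```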