Encoding X,Y Into Pixel

Started by Daishim. 3 comments, last by Daishim.
I'm using Cg and OpenGL to encode a texel's X,Y position into the color value, so that X goes in the red component and Y in the green component. I'm having some issues at the edges of the texture, and I'm assuming I don't quite understand how OpenGL divides up pixels. If I understand correctly, each pixel's x and y are divided by the texture width and height respectively to determine the internal OpenGL position, so that an X of 0 is 0.0 and an X of 1 is 1 / Texture_Width. I'm using the following Cg code to encode the values:

// Scale the interpolated 0..1 texcoord back up to a texel index, then
// map that index into the 0..1 color range (255 = max byte value).
float4 main(sampler2D inTex : TEXTURE0, in float2 InTexCoord : TEXCOORD0, uniform float Tex_Size) : COLOR
{
    float4 Encoded;

    Encoded = float4((InTexCoord.x * Tex_Size) / 255,
                     (InTexCoord.y * Tex_Size) / 255,
                     1.0, 0.0);

    return Encoded;
}
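
For context, here's a minimal C++ sketch (not from the thread) that replays the same arithmetic on the CPU, under the assumption that the interpolated texcoord runs from 0.0 at the leftmost texel to exactly 1.0 at the rightmost one, and that the result is stored as an 8-bit channel:

#include <cstdio>
#include <cmath>

int main()
{
    // Replay the shader's expression for the first and last texels of a
    // 128-wide row, assuming texcoords span 0.0..1.0 inclusive.
    const float Tex_Size = 128.0f;
    const int   width    = 128;

    for (int x = 0; x < width; x += width - 1)
    {
        float texcoord = (float)x / (width - 1);         // assumed 0.0..1.0 mapping
        float encoded  = (texcoord * Tex_Size) / 255.0f; // the shader's expression
        long  stored   = std::lround(encoded * 255.0f);  // value landing in the byte
        std::printf("texel %3d -> texcoord %.4f -> stored byte %ld\n",
                    x, texcoord, stored);
    }
    return 0;
}

Under that assumption the rightmost texel stores 128, not 127, which matches the behavior described next.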



In this particular case I'm using a 128x128 texture, so the rightmost pixel should get encoded as 127 (X positions 0..127 for every row), I would think... but I keep coming up with 128 encoded into the rightmost position. Does anyone see where I might be going wrong? Thanks.

I know only that which I know, but I do not know what I know.
Well, if your texture coordinates are mapped 0-1 and you multiply 1 * texwidth, what's that equal? It should be 128... and that value / 255 = 0.502, and that * 255 = 128.
Quote: Original post by MARS_999
Well, if your texture coordinates are mapped 0-1 and you multiply 1 * texwidth, what's that equal? It should be 128... and that value / 255 = 0.502, and that * 255 = 128.


Ok, that makes sense. I thought that 1.0 represented the rightmost boundary of the rightmost pixel. So, for example, in a 4x4 texture, the lower-left corner of the 0th pixel would be (0.0, 0.0), and the lower-left corner of the 3rd pixel (the rightmost pixel of the first row) would be (0.75, 0.0). I thought that was how texture coordinates were represented by default. Someone please correct me if I'm wrong.
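
To make the conventions concrete, here's a minimal C++ sketch (not from the thread) that prints, for a 4x4 texture, both the corner-based coordinate x / N and the texel-center coordinate (x + 0.5) / N; in OpenGL texture space, texel x spans [x/N, (x+1)/N] and is sampled at its center:

#include <cstdio>

int main()
{
    const int N = 4; // texture width

    // For each texel of the first row, print its left edge and its center.
    for (int x = 0; x < N; ++x)
    {
        float corner = (float)x / N;   // left edge of texel x
        float center = (x + 0.5f) / N; // center of texel x
        std::printf("texel %d: left edge = %.4f, center = %.4f\n",
                    x, corner, center);
    }
    return 0;
}

So 0.75 is indeed the left edge of the 3rd texel, but 1.0 is the right edge of the last texel rather than any texel's sample point; whether a fragment actually receives a texcoord of exactly 1.0 depends on how the quad's texture coordinates are set up.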

I know only that which I know, but I do not know what I know.
After closer examination, it appears that the rightmost pixel is coming back with the upper 24 bits of its 32-bit value all set, instead of being encoded as 127.

Here is a sample from the dump of the pixel data pulled from the framebuffer:
Index: 2165	X,Y: 16,117	Texture Y,X: 117,16
Index: 2225	X,Y: 17,49	Texture Y,X: 49,17
Index: 2226	X,Y: 17,50	Texture Y,X: 50,17
Index: 2446	X,Y: 19,14	Texture Y,X: 14,19
Index: 2558	X,Y: 19,126	Texture Y,X: 126,19
Index: 2559	X,Y: 19,127	Texture Y,X: 4294967168,19
Index: 2666	X,Y: 20,106	Texture Y,X: 106,20
Index: 2667	X,Y: 20,107	Texture Y,X: 107,20
Index: 2668	X,Y: 20,108	Texture Y,X: 108,20
Index: 2741	X,Y: 21,53	Texture Y,X: 53,21


4294967168 is 0xFFFFFF80, which also just happens to be -128 in signed 32-bit form.

Anyone have any ideas?

[Edited by - Daishim on April 21, 2006 12:12:39 PM]

I know only that which I know, but I do not know what I know.
Ok, so I feel stupid. I was reading the bytes of the texture through a char * instead of an unsigned char *. However, this still gives me edge positions of 128 instead of 127, so I'm apparently still doing something wrong with the texture coordinates.
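
That would explain the huge number. Here's a minimal C++ sketch (not from the thread, with a hard-coded byte standing in for the framebuffer readback) of how the two pointer types diverge on the byte 0x80:

#include <cstdio>

int main()
{
    unsigned char pixel = 0x80; // stand-in for one framebuffer byte (decimal 128)

    // On platforms where plain char is signed, 0x80 reads as -128 and
    // sign-extends to 0xFFFFFF80 (4294967168) when widened to 32 bits.
    char          s = *(char *)&pixel;
    unsigned char u = *(unsigned char *)&pixel;

    std::printf("via char*:          %u\n", (unsigned int)s); // 4294967168
    std::printf("via unsigned char*: %u\n", (unsigned int)u); // 128
    return 0;
}

As for the lingering 128 at the edge, one common remedy (an assumption on my part, not something settled in this thread) is to clamp the recovered index in the shader, e.g. min(floor(InTexCoord.x * Tex_Size), Tex_Size - 1) / 255, so the encoded value stays in 0..127 no matter where the edge coordinate interpolates.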

I know only that which I know, but I do not know what I know.

