# How to get the range of a pixel in a fragment shader


## Recommended Posts

Hi,

I would like to know how I can get the range (depth) of a fragment. I want to do so to generate a range image. I am using GLSL shaders through Java3D.

I have tried different strategies, such as declaring a varying position in the vertex shader, assigning it gl_ModelViewMatrix * gl_Vertex, and then computing length(position) in the fragment shader. I then encode this distance as 18 bits (6 bits per RGB channel) in the image. This part works correctly (I tested it with a constant value and it works OK).

Here is my vertex shader code:

```glsl
varying vec4 position;

void main(void)
{
    position = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = ftransform();
}
```

Here is my fragment shader code:

```glsl
varying vec4 position;

vec4 convertRangeToColor(float range)
{
    float resolution = 0.01;
    int distanceCounts = int(floor(range / resolution)); // floor() returns a float, so convert explicitly

    // pack 18 bits (3 x 6) into the top 6 bits of each 8-bit channel
    float r = float(((distanceCounts >> 12) & 0x3F) << 2) / 255.0;
    float g = float(((distanceCounts >> 6) & 0x3F) << 2) / 255.0;
    float b = float((distanceCounts & 0x3F) << 2) / 255.0;

    return vec4(r, g, b, 1.0);
}

void main(void)
{
    gl_FragColor = convertRangeToColor(length(position));
}
```
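As a sanity check, the 6-bit-per-channel packing can be mirrored on the CPU (a Python sketch, not part of the shader, assuming an exact 8-bit readback):

```python
def encode_range(range_m, resolution=0.01):
    """Pack floor(range / resolution) into three 6-bit fields, each shifted into the top of an 8-bit channel."""
    counts = int(range_m / resolution)  # 18 usable bits: 0 .. 2^18 - 1
    r = ((counts >> 12) & 0x3F) << 2
    g = ((counts >> 6) & 0x3F) << 2
    b = (counts & 0x3F) << 2
    return r, g, b

def decode_range(r, g, b, resolution=0.01):
    """Invert the packing: rebuild the 18-bit count and scale back to a distance."""
    counts = ((r >> 2) << 12) | ((g >> 2) << 6) | (b >> 2)
    return counts * resolution

r, g, b = encode_range(123.45)
# decode_range(r, g, b) recovers the input to within one resolution step (0.01)
```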

I have created a 2D plane that I look at. I get the data, plot it in 3D, and I do not get the proper shape (see snapshot attached).

It seems that this approach (the varying position) does not work correctly. What am I doing wrong?

##### Share on other sites
It makes sense that the distance to the view-space corners would be greater than the distance to the centre of the plane - point your arm straight out in front, then point at an angle (you don't reach as far at an angle).

If you just want the 2D depth into the screen, use `position.z` instead of `length(position)`, or alternatively you can just use `gl_FragCoord.z`, which automatically contains the rasterized depth of this fragment.
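The arm analogy can be made concrete with a couple of view-space points (a small Python check using made-up coordinates; the camera sits at the origin looking down -z):

```python
import math

def planar_depth(p):
    """Depth 'into the screen': just the view-space z distance."""
    return abs(p[2])

def euclidean_range(p):
    """Straight-line distance from the eye, i.e. length(position)."""
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)

center = (0.0, 0.0, -5.0)   # middle of a plane 5 units in front of the camera
corner = (3.0, 2.0, -5.0)   # a corner of the same plane

# same planar depth, but the corner is farther away in straight-line range
print(planar_depth(center), planar_depth(corner))         # 5.0 5.0
print(euclidean_range(corner) > euclidean_range(center))  # True
```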

BTW, integer math and bitwise logic in shaders is only available on fairly new hardware. Fragment programs traditionally do best with floating-point data. A more traditional encoding method looks like:

```glsl
// assuming 'range' is a fractional number in the range of 0.0 to 1.0:
float r = range;                  // written out to an 8-bit fixed-point render target; stores the float at 1/256th resolution
float g = fract(range * 256.0);   // bring the next 8 bits of data up into the stored resolution; fract throws away the bits already stored in r
float b = fract(range * 65536.0); // ...and the next 8 bits
// now you've got a 24-bit fixed-point representation of the input fraction

// alternatively:
vec3 rgb = fract(range * vec3(1.0, 256.0, 65536.0));

// the decoding shader math looks like:
float decoded = dot(rgb, vec3(1.0, 1.0 / 256.0, 1.0 / 65536.0));
```
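To see what actually survives the 8-bit quantization, the same encode/decode can be simulated on the CPU (a Python sketch; the truncating store mimics writing to an 8-bit render target):

```python
def fract(x):
    return x - int(x)  # fractional part, matching GLSL fract() for non-negative x

def encode(value):
    """The fract-based packing above, then a truncating store into 8-bit channels."""
    rgb = [fract(value * s) for s in (1.0, 256.0, 65536.0)]
    return [int(c * 255.0) for c in rgb]  # 8-bit render-target write (truncates)

def decode(stored):
    """Read the channels back as 0..1 floats and combine them."""
    r, g, b = [c / 255.0 for c in stored]
    return r + g / 256.0 + b / 65536.0

stored = encode(0.3)
# decode(stored) lands close to 0.3, but only to roughly 8-bit accuracy,
# because of the 255-vs-256 mismatch and the truncating store
```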

##### Share on other sites
Hi Hodgman, thanks for your quick post!

I have implemented the range encoding function that you provided and I have a question.

Here is my updated fragment shader:

```glsl
varying vec4 position;

vec3 convertRangeToColor(float radius)
{
    float range = radius / 1000.0;
    vec3 rgb = fract(range * vec3(1.0, 256.0, 65536.0));

    return rgb;
}

void main(void)
{
    gl_FragColor = vec4(convertRangeToColor(length(position)), 1.0);
}
```

So I assume that my maximum range is less than 1000.0 (OK for my application).

So I would expect that if my range is 500.0, the range encoded as fixed point would be 2^24 / 2 (decimal 8388608)? I tested it (set the radius to 500.0) and got decimal 8323072. What am I doing wrong?

##### Share on other sites

[quote name='Pete01' timestamp='1304523155' post='4806439']
So I would expect that if my range is 500.0, I would get a range encoded as fixed point to be 2^24 / 2 (decimal 8388608)? I tested it (set the radius to 500.0) and got decimal 8323072. What am I doing wrong?
[/quote]
Hmm... you've just demonstrated some logic and/or rounding-based errors in my encoding function!

The value of 0.5 when written to your 8-bit render target will be converted to 127.5, and then truncated to 127. This results in RGB = 7F 00 00 = 8323072.
We actually want 0.5 to be converted to 127.5 and then rounded up to 128, resulting in RGB = 80 00 00 = 8388608...

My first thought is to overcome this truncation by adding `rgb += vec3(0.5 / 255.0)`... but I'll have to go back and re-think my logic in depth now...
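The truncation error is easy to reproduce on the CPU (a Python sketch; the 24-bit value is read back as (r << 16) | (g << 8) | b, and rounding at the store stands in for the effect of the proposed rgb += vec3(0.5 / 255.0) bias):

```python
def fract(x):
    return x - int(x)  # fractional part for non-negative x, like GLSL fract()

def pack(value, rounded=False):
    """Encode value (0..1) into three 8-bit channels and combine into a 24-bit int."""
    rgb = [fract(value * s) for s in (1.0, 256.0, 65536.0)]
    if rounded:
        channels = [int(c * 255.0 + 0.5) for c in rgb]  # round at the store
    else:
        channels = [int(c * 255.0) for c in rgb]        # truncate, as the render target does
    r, g, b = channels
    return (r << 16) | (g << 8) | b

print(pack(0.5))                # truncating store: 8323072 (0x7F0000), as observed
print(pack(0.5, rounded=True))  # rounding store:   8388608 (0x800000), as expected
```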

##### Share on other sites
The length of a vector is not equal to its depth, by the way; it only gives you the magnitude in view space.

This might be a useful post for you: http://mynameismjp.w...ion-from-depth/


##### Share on other sites

[quote name='NightCreature83' timestamp='1304585487' post='4806801']
The length of a vector is not equal to its depth, by the way; it only gives you the magnitude in view space.

This might be a useful post for you: http://mynameismjp.w...ion-from-depth/
[/quote]

What I am trying to achieve is to simulate a LIDAR (a distance-ranging instrument) using a shader. So my thought is to make use of the fragment shader to get the 3D distance of the pixel relative to the viewpoint.

What I am struggling with (I am new to GLSL and far from being an expert in computer graphics) is how to transform from the pixel coordinates back to a 3D position in the viewpoint reference frame. I do not need the actual x, y, z coordinates of the point, only the distance from the viewpoint to the point (this is what the sensor I am trying to simulate provides).

I have read through the post you provided. I am using a single-pass shader, so I am not sure if what is proposed would work for me.

I assume that the float3 VSPositionFromDepth(float2 vTexCoord) function returns the position of the pixel in the view coordinate system, and that if I take the length of that vector I will get the range I am looking for?
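The idea of recovering range from a depth-buffer sample can be sketched on the CPU (Python, not the code from the linked article; it assumes a standard OpenGL perspective projection, and the fov/near/far values are made up for illustration):

```python
import math

def linear_depth(z_buffer, near, far):
    """Invert the standard OpenGL depth mapping back to a positive view-space distance."""
    ndc_z = 2.0 * z_buffer - 1.0
    return 2.0 * near * far / (far + near - ndc_z * (far - near))

def range_from_depth(z_buffer, ndc_x, ndc_y, near, far, fov_y_deg, aspect):
    """Euclidean distance from the eye to the surface seen at this pixel."""
    t = math.tan(math.radians(fov_y_deg) / 2.0)
    # direction through the pixel in view space (camera looks down -z, so the z component is -1)
    rx, ry = ndc_x * t * aspect, ndc_y * t
    z = linear_depth(z_buffer, near, far)
    # scale the ray so its view-space depth is z, then take its length
    return z * math.sqrt(rx * rx + ry * ry + 1.0)

# a pixel at the screen centre: range equals the linearized depth
print(range_from_depth(0.0, 0.0, 0.0, 1.0, 100.0, 60.0, 1.0))  # 1.0 (surface at the near plane)
```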

##### Share on other sites

[quote name='Pete01']
I assume that the float3 VSPositionFromDepth(float2 vTexCoord) function returns the position of the pixel in the view coordinate system, and that if I take the length of that vector I will get the range I am looking for?
[/quote]

It seems that what you want is a visualisation of the depth buffer. The article I provided before shows you how to construct this in HLSL, which isn't too different from GLSL; all variables with a ":" semantic postfix in HLSL are varyings.
