

amtri

Member Since 24 Apr 2007
Offline Last Active Dec 05 2014 12:21 PM

Posts I've Made

In Topic: Depth peeling and z-clipping

26 November 2014 - 03:43 PM

Yet another update: I changed the texture type to GL_TEXTURE_RECTANGLE. This way I can directly use gl_FragCoord.xy when passing data to the textures. I'm using the defaults, so I assume gl_FragCoord.xy will have values like 0.5, 1.5, 2.5, etc.


I store all my display data in display lists. I draw my display list once and save the color and depth buffers by creating a framebuffer with GL_TEXTURE_RECTANGLE textures for both - one color, and one depth. The filters on both textures are set to GL_NEAREST.
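
For concreteness, the setup I'm describing is roughly this - a sketch rather than my exact code; fbo, colortex, depthtex, width, and height are placeholder names:

GLuint fbo, colortex, depthtex;
const int width = 1024, height = 768;   /* example viewport size */

/* color attachment: a rectangle texture with GL_NEAREST filters */
glGenTextures (1, &colortex);
glBindTexture (GL_TEXTURE_RECTANGLE, colortex);
glTexParameteri (GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D (GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, width, height, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* depth attachment: same size and filters, depth format */
glGenTextures (1, &depthtex);
glBindTexture (GL_TEXTURE_RECTANGLE, depthtex);
glTexParameteri (GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D (GL_TEXTURE_RECTANGLE, 0, GL_DEPTH_COMPONENT24, width, height, 0,
              GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

/* attach both to a framebuffer object */
glGenFramebuffers (1, &fbo);
glBindFramebuffer (GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_RECTANGLE, colortex, 0);
glFramebufferTexture2D (GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                        GL_TEXTURE_RECTANGLE, depthtex, 0);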


I query the depth texture using

uniform sampler2DRect depthtex;   // declared to match the GL_TEXTURE_RECTANGLE target
...
float depth = texture (depthtex, gl_FragCoord.xy).r;

My expectation is that this lookup should always find the exact same texel in my texture: the color and depth are written in one pass and read back with the exact same coordinates in the next pass.

But, unfortunately, this is not the case. I "solved" my problem by using a tolerance such as

if (depth >= gl_FragCoord.z - 0.00001) discard;

when I draw the second time. The image now looks perfect - pointing to the fact that this was indeed my problem. But:

1) Why do I not get the exact same values in the second pass as in the first, or... why does my comparison fail at times?

2) Can I get better results by using different texture parameters?


Thanks.


In Topic: Depth peeling and z-clipping

26 November 2014 - 01:32 PM

After some more reading, I suspect my problem has to do with the fact that I created a 2D texture to store the depth, and I'm comparing the depth value from the previous pass (stored in the texture) against gl_FragCoord.z. But in order to retrieve the previous value I need to pass texture coordinates between 0 and 1 to texture2D, and the round-off involved - which depends on the screen size - may simply be returning the wrong pixel. With depth values that are probably extremely close to each other, the comparison then fails, giving me the bad image I'm seeing.

This is all speculation, but if I'm right, I now have more concrete questions that I'm hoping somebody can help me with:

1) Is there a way to query a sampler2D in the fragment shader given the pixel center location - gl_FragCoord.xy? Of course, these values do not go from 0 to 1, but from 0 to width and 0 to height. Given that gl_FragCoord will have values such as (1.5,3.5), I can easily pick the exact pixel. But if I need to map gl_FragCoord to texture coordinates between 0 and 1, I'm bound to run into this round-off issue.
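
To illustrate the mapping I mean - a sketch, assuming the viewport size is passed in as uniforms (the names width, height, and depthtex are mine):

uniform sampler2D depthtex;
uniform float width, height;   // viewport size, supplied by the client

void main()
{
   // map the pixel center (e.g. (1.5,3.5)) to [0,1] texture coordinates;
   // this division is where round-off can land on the wrong texel
   vec2 uv = gl_FragCoord.xy / vec2 (width, height);
   gl_FragColor = vec4 (texture2D (depthtex, uv).r);   // the retrieved depth
}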


2) My depth peeling algorithm requires multiple passes through my display list. This means that on each pass I need to both retrieve the previous depth and save the new depth to another texture, AND I need to accurately compare the retrieved depth to gl_FragCoord.z. I am using only sampler2D. Maybe I should be using sampler2DShadow instead to store my depth value. But from what I understand, querying a sampler2D with texture coordinates gives me back the floating-point depth, while querying a sampler2DShadow with a vec3 returns only 0 or 1, depending on the result of the comparison.
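
In other words, my understanding of the two sampler types is this (a sketch; the names are mine):

uniform sampler2D       depthtex;    // plain depth texture
uniform sampler2DShadow shadowtex;   // same data, but queried with a built-in comparison

void main()
{
   vec2 uv = gl_TexCoord[0].st;

   // sampler2D: gives back the stored floating-point depth itself
   float stored = texture2D (depthtex, uv).r;

   // sampler2DShadow: gives back only the comparison result (0.0 or 1.0),
   // testing the third coordinate against the stored depth
   float result = shadow2D (shadowtex, vec3 (uv, gl_FragCoord.z)).r;

   gl_FragColor = vec4 (stored, result, 0., 1.);   // just to visualize both
}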


What exactly should I be using in my fragment shader so I can both compare and store the depth value accurately?


3) Depending on the answer to the question above, what parameters should I be setting in the depth texture on the client side?


Thanks.


In Topic: z-buffer discontinuity in shaders

05 February 2014 - 01:56 PM

One more piece of information: I am drawing the 1D texture first, then the 2D texture.


If I reverse the order, then my 2D texture works fine, but my 1D texture does not.


And I know for a fact that my uniform flag indicating which texture I'm using is being passed down correctly, and that the shader uses the sampler1D for the 1D texture and the sampler2D for the 2D texture. But, for some reason, after the first texture is drawn - either 1D or 2D - the second one on the same unit is not drawn.

It's not clear to me why this is the case...


In Topic: z-buffer discontinuity in shaders

05 February 2014 - 01:50 PM

I've made some more progress on my own, and ran into a more concrete issue: I have both a 1D and a 2D texture on unit 0 (GL_TEXTURE0). I use one or the other by using glEnable/glDisable.
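
Concretely, the client-side setup is along these lines (a sketch; tex1d and tex2d are placeholder names):

glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_1D, tex1d);   /* both textures live on unit 0 */
glBindTexture (GL_TEXTURE_2D, tex2d);

/* to draw with the 1D texture: */
glEnable (GL_TEXTURE_1D);
glDisable (GL_TEXTURE_2D);
/* ...and the reverse to draw with the 2D texture */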


In my shader I have a sampler2D and a sampler1D. Both are assigned to unit 0.


Whenever I switch between the 1D and 2D textures, I send a uniform flag indicating which texture is being used. If it's the 1D texture, I call texture1D with the sampler1D variable as the first argument and gl_TexCoord[0].s as the second; if it's the 2D texture, I call texture2D with the sampler2D variable as the first argument and gl_TexCoord[0].st as the second.
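
Roughly, the fragment shader looks like this (a sketch; the flag name use2D is mine):

uniform sampler1D tex1;   // both assigned to unit 0
uniform sampler2D tex2;
uniform int use2D;        // flag sent whenever I switch textures

void main()
{
   if (use2D == 1)
      gl_FragColor = texture2D (tex2, gl_TexCoord[0].st);
   else
      gl_FragColor = texture1D (tex1, gl_TexCoord[0].s);
}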


My problem is that the 2D texture does not show.


By setting the output color in the fragment shader to vec4(gl_TexCoord[0].s,gl_TexCoord[0].t,0.,1.) I can tell that the 2D texture coordinates are correct. So my problem is that switching to texture2D(sampler2D,...) after drawing with texture1D(sampler1D,...) somehow does not work.


If I turn off the shader program then the images are displayed as expected, so the client program is correct.


Any thoughts? Have I misunderstood how a single unit is to be used with multiple textures?


Thanks.


In Topic: z-buffer discontinuity in shaders

03 February 2014 - 05:53 PM

I could really use some help with what I believe is very basic. I created a framebuffer object, attached a depth texture and a color texture to it, and drew 2 triangles along with the text "Hello World" on top. I then took the 2 textures and extracted their data just to check that they were correct. They were: in one I see the RGB values, and in the other a single floating-point number per pixel with values as expected given my triangles.
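
(For the check, I read the textures back with something along these lines - a sketch; the buffer and texture names, and width/height, are mine:)

GLubyte *rgb   = malloc (width * height * 3);
GLfloat *depth = malloc (width * height * sizeof (GLfloat));

glBindTexture (GL_TEXTURE_2D, colortex);
glGetTexImage (GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb);       /* the RGB values */

glBindTexture (GL_TEXTURE_2D, depthtex);
glGetTexImage (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth); /* one float per pixel */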


Then I bind the "0" framebuffer and draw 2 triangles to the screen with the color texture; no problem. I see the image just as expected, so I'm confident that my textures are OK (at least the color one, since I'm drawing with it).


Now comes the part I don't understand: all I'm trying to do is to use a shader to display the same image: no manipulation at this point as I'm getting my feet wet with textures.


This is what my vertex shader looks like:

void main()
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
   gl_TexCoord[0] = gl_MultiTexCoord0;
}

And this is my fragment shader:

uniform sampler2D colortex;
void main()
{
   gl_FragColor = texture2D(colortex,gl_TexCoord[0].st);
}

The shader program compiles and links with no errors.


In the client code I have

GLuint colortexloc = glGetUniformLocation (prog,"colortex");
glUniform1i (colortexloc,0);   /* applies to the program currently bound with glUseProgram */

Everything else is the same. I expected the image to be displayed just as before. Can somebody point me towards something that's obviously wrong or missing? If I comment out the "glUseProgram" call then the image is properly displayed again. So it does look like a problem with the trivial shader code above.
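
For reference, the drawing sequence I have in mind is essentially:

glUseProgram (prog);                       /* enable the trivial shader */
glActiveTexture (GL_TEXTURE0);             /* unit 0, matching glUniform1i (colortexloc,0) */
glBindTexture (GL_TEXTURE_2D, colortex);
/* ... draw the 2 textured triangles ... */
glUseProgram (0);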


Any help is appreciated. Thanks!

