amtri

Member Since 24 Apr 2007
Offline Last Active Dec 05 2014 12:21 PM

Topics I've Started

Depth peeling and z-clipping

24 November 2014 - 01:57 PM

Hello,

 

I have a shader setup where I create several framebuffers and perform N passes through my display list. In the first pass I simply store the image and depth; in the second and all subsequent passes, the fragment shader compares the current depth with the previously saved one and discards the fragment if it is in front. Then I blend all the images in a final post-processing phase at swap time.
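
For reference, the test in the peeling passes looks roughly like this (a simplified sketch, not my full shader; uPrevDepth and uViewport are placeholder names for the previous pass's depth texture and the viewport size):

#version 130
uniform sampler2D uPrevDepth;  // depth texture saved by the previous pass
uniform vec2      uViewport;   // viewport size in pixels

void main()
{
    vec2  uv        = gl_FragCoord.xy / uViewport;
    float prevDepth = texture(uPrevDepth, uv).r;

    // Peel: keep only fragments strictly behind the last saved layer.
    if (gl_FragCoord.z <= prevDepth)
        discard;

    gl_FragColor = vec4(1.0);  // placeholder: real shading goes here
}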

 

This all works fine, except when there's z-clipping going on. I save both the color and depth textures associated with each framebuffer to .bmp files so I can see what's going on. I'm displaying just a few intersecting cones. In the area where z-clipping occurs, both the depth and the color textures seem to contain garbage.

 

I'm wondering whether there's something I'm not aware of regarding the incoming data. For example, I assume that gl_FragCoord.z is always between 0 and 1, and that anything clipped never even reaches the fragment shader. Is that the case? Or is z-clipping done after the fragment shader, in which case should I expect gl_FragCoord.z < 0 or > 1 to be possible?

 

I'm attaching the saved color texture. The solid colors are what I expect to see; the fuzzy area is exactly where the z-clipping plane passes through. Any thoughts?

 

Thanks.


clipping plane with shader

27 February 2014 - 01:05 PM

Hello,

 

I have clipping planes working in my shader using gl_ClipDistance. I pass the equation of every clipping plane to my vertex shader and compute its dot product with my untransformed coordinates. It works fine.
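
The essential part of the vertex shader is just this (a minimal sketch; uClipPlane is a placeholder name for one plane equation, given in the same untransformed space as gl_Vertex, and the host code enables GL_CLIP_DISTANCE0):

#version 130
uniform vec4 uClipPlane;  // plane equation (a,b,c,d) in object coordinates

void main()
{
    // Plane and vertex are in the same untransformed space,
    // so the clip distance is a plain dot product.
    gl_ClipDistance[0] = dot(uClipPlane, gl_Vertex);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}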

 

Now I ran into a machine where the call

 

glGetString (GL_SHADING_LANGUAGE_VERSION)

 

returns 130. Yet, gl_ClipDistance is an undefined variable. This brings up my first question:

 

1) Shouldn't gl_ClipDistance always be defined for GLSL version 130? I'm on a 64-bit Linux machine with GLEW 1.10.0.

 

But I want to get this working, so on that machine I decided to fall back to gl_ClipVertex instead. The problem is that I get no image at all, as if everything were being clipped out. Given that my glClipPlane call sets the plane equation in untransformed coordinates (i.e., the same space as gl_Vertex), I'm simply setting gl_ClipVertex = gl_Vertex.
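
Concretely, the fallback vertex shader is essentially this (a sketch of what I described; this is the version that produces no image):

#version 120
void main()
{
    // glClipPlane was given the plane in object coordinates,
    // so I hand the clipper the untransformed vertex directly.
    gl_ClipVertex = gl_Vertex;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}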

 

In my gl_ClipDistance implementation I take the dot product of gl_Vertex with the four coefficients of the plane equation passed to glClipPlane, and it works fine. So somehow I suspect my clipping-plane equations are not being passed down properly here. I even tried setting "#version 120" in the shader, but with no effect.

 

Can anybody shed some light onto this?

 

Thanks.


How to pass GL_BYTE to vertex shader?

17 January 2014 - 06:32 PM

Hello,

 

If I have an array of 3 bytes per vertex - which I'm padding to 4 bytes to keep each vertex's data in a 4-byte word - and pass it down to the vertex shader as GL_UNSIGNED_BYTE data, how does it appear in the shader? Does it get automatically converted into 4 floats? In other words, can I use the data in the shader as a vec4?

 

Also, will the floats be normalized between 0.0 and 1.0, or converted into numbers from 0.0 to 255.0?
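
For context, this is roughly how I'm declaring the attribute (a sketch with placeholder names; program, numVerts, and bytes are assumed from context, and the shader reads the attribute as a vec4):

/* Assumes a current GL context and a linked program. */
GLuint buf;
GLint  attribIndex = glGetAttribLocation(program, "aBytes");

glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, numVerts * 4, bytes, GL_STATIC_DRAW);

/* The 4th ("normalized") argument is what my second question hinges on:
 * GL_TRUE is supposed to map unsigned bytes to floats in [0,1],
 * GL_FALSE to convert them to plain floats 0.0 .. 255.0. */
glVertexAttribPointer(attribIndex, 4, GL_UNSIGNED_BYTE,
                      GL_TRUE, 0, (const void *)0);
glEnableVertexAttribArray(attribIndex);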

 

If anybody has a link to a reference on how these non-integer, non-float data are passed down to the vertex shader, I would appreciate it.

 

Thanks.


shader use of glUniform1fv very slow

13 January 2014 - 02:03 PM

Hello,

 

I need to draw hundreds of thousands of cubes, each centered at a different location and of different size.

 

I first drew the cubes as 12 triangles each (2 per cube face), using glDrawArrays. To draw each cube this way I need to pass 12*9 = 108 floating-point coordinates to the graphics card.

 

I then thought of speeding this up using the following algorithm:

 

1) Use glVertexPointer on a standard unit cube centered at the origin.

 

2) Then, for each cube, pass its center and a scaling parameter with glUniform1fv. A vertex shader consumes these 4 numbers, scaling and translating each coordinate component. The glDrawArrays command always draws the same unit cube, but the shader program takes care of positioning each point in its proper location (see the sketch below).
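
The draw loop boils down to this (a sketch with placeholder names; uParamsLoc is the location of a "uniform float uParams[4]" whose first three entries are the center and whose last is the scale):

/* centers, sizes, and numCubes are assumed from context. */
int i;
for (i = 0; i < numCubes; ++i) {
    GLfloat params[4];
    params[0] = centers[3*i + 0];
    params[1] = centers[3*i + 1];
    params[2] = centers[3*i + 2];
    params[3] = sizes[i];

    glUniform1fv(uParamsLoc, 4, params);  /* only 4 floats per cube...      */
    glDrawArrays(GL_TRIANGLES, 0, 36);    /* ...yet this loop is far slower */
}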

 

The algorithm works: I get my cubes at the right size and in the right location, and I'm passing only 4 floating-point numbers per cube rather than 108. Yet this process is roughly 100 times slower than before.

 

I narrowed the problem down to the call to glUniform1fv. If I call it just once, rather than once per cube, I get my performance back. Of course, the cubes are then neither in the right location nor of the right size, but at least I know the culprit.

 

Can anyone shed some light on why this function causes such a loss of performance? And, better yet, suggest how to make an algorithm like this fast enough to actually beat my original triangulation?

 

I'm puzzled by this loss of performance when I'm sending fewer points to the graphics card.

 

Thanks.


z-buffer discontinuity in shaders

23 December 2013 - 05:43 PM

Hello,

 

I would like to highlight edges based solely on z-buffer discontinuities. I imagine this as an image post-processing step after the frame buffer and z-buffer are complete.

 

I have two questions:

 

1) What is a reasonable filter to use to detect edge pixels?

 

2) Can this be implemented in a shader? My main question here is whether I can access neighboring pixels' information in the fragment shader (what I have in mind is sketched below).
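
To make question 2 concrete, this is roughly what I have in mind, assuming the z-buffer has been copied into a texture for a full-screen post-processing pass (a sketch; uDepth, uTexelSize, uThreshold, and vUV are placeholder names):

#version 130
uniform sampler2D uDepth;      // z-buffer copied into a texture
uniform vec2      uTexelSize;  // 1.0 / viewport size
uniform float     uThreshold;  // depth jump that counts as an edge

in vec2 vUV;  // texture coordinate from the full-screen quad

void main()
{
    float c = texture(uDepth, vUV).r;
    float l = texture(uDepth, vUV - vec2(uTexelSize.x, 0.0)).r;
    float r = texture(uDepth, vUV + vec2(uTexelSize.x, 0.0)).r;
    float b = texture(uDepth, vUV - vec2(0.0, uTexelSize.y)).r;
    float t = texture(uDepth, vUV + vec2(0.0, uTexelSize.y)).r;

    // Largest depth jump between this pixel and its 4 neighbors.
    float d = max(max(abs(c - l), abs(c - r)),
                  max(abs(c - b), abs(c - t)));

    gl_FragColor = (d > uThreshold) ? vec4(vec3(0.0), 1.0)  // edge pixel
                                    : vec4(1.0);            // background
}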

 

Thanks.

