z-buffer discontinuity in shaders

Started by amtri
15 comments, last by amtri 10 years, 2 months ago

Hello,

I would like to highlight edges based solely on z-buffer discontinuities. I can imagine this as an image postprocessing step after the frame and z-buffers are complete.

I have two questions:

1) What is a reasonable filter to use to detect edge pixels?

2) Can this be implemented in a shader? One of my questions here is whether I can access neighboring pixel information in the fragment shader.

Thanks.


You can access neighboring pixel information in a shader by using the texture function (texture2D is deprecated now) or other functions like texelFetch. You can look at the documentation and use the "See Also" section from texture to get you started.
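For example, here is a minimal fragment-shader sketch of fetching the current pixel's depth and a neighbor's depth with texelFetch. It assumes the depth buffer has been made available as a sampler2D uniform I'm calling uDepthTex (my name, nothing standard):

#version 130
// Minimal sketch: read the current pixel's depth and its right-hand
// neighbor's depth from a depth texture bound as uDepthTex (assumed name).
uniform sampler2D uDepthTex;
out vec4 fragColor;

void main()
{
    ivec2 p      = ivec2(gl_FragCoord.xy);
    float zHere  = texelFetch(uDepthTex, p,               0).r;
    float zRight = texelFetch(uDepthTex, p + ivec2(1, 0), 0).r;
    // A large difference between neighbors hints at a depth discontinuity.
    // The raw difference is usually tiny, so scale it up for visibility.
    fragColor = vec4(vec3(abs(zHere - zRight) * 100.0), 1.0);
}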

If you need examples of edge-detection shaders, just google "edge detection fragment shader". A quick look at the results suggests that search will give you plenty to work with.

Richard,

Apologies for the delay, but I had to go do my homework first...

I've looked into edge detection fragment shaders, and many questions came up. I was hoping somebody could help me make sense of it all.

1) To start with, I need to access the z-buffer values in the fragment shader; not only for the current pixel, but for adjacent pixels also. From what I understand, the only way to access z-buffer information for adjacent pixels in the fragment shader is to have the z-buffer written to a texture.

Could somebody confirm this? It sounds like a silly limitation, given that the z-buffer is already in graphics card memory, so I figured there should be a way to access the data - adjacent pixels included.

2) Assuming a texture is indeed needed, then there are two parts (or more) to this procedure: (a) creating the texture - i.e., the client code - and (b) using it in the shader. Can somebody show me the client steps needed to generate the texture? Also, as I rotate my model and redraw, I assume the texture I am creating gets automatically updated with whatever I do. Is this the case?

3) I think that once I get the client code done I can probably figure out what goes in the fragment shader, but if somebody has already done something similar and can give me a head start I would truly appreciate it.

Thanks.

If you want to do interesting things like edge detection or other postprocessing effects then you must render your scene into a framebuffer object you created; the default framebuffer will not do. Also, you need to render your scene into textures, not renderbuffers. Have a look at this page for an introduction. If you have any questions after reading that page, feel free to ask, but I'd say most of your current questions are either answered there or invalid.
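To give you an idea of the client side, here is a rough sketch (OpenGL 3.x, C) of an FBO with a color texture and a depth texture attached. width and height are placeholders, and error handling is omitted:

/* Rough sketch: an FBO with a color texture and a depth texture
   attached. width/height are placeholders; error handling omitted. */
GLuint colorTex, depthTex, fbo;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; /* handle the error */
glBindFramebuffer(GL_FRAMEBUFFER, 0); /* back to the default framebuffer */

As long as you render into this FBO every frame, the attached textures hold the current frame's color and depth automatically; there is no extra copy step, which also answers your question about rotating and redrawing.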

Any render to texture tutorial should work as a starting point I'd think. Or you can use the link BitMaster kindly provided.

I don't know any other way you'd want to do it, although I also don't know everything. And you definitely don't want code from a production code base even if anyone were allowed to provide that.

Richard and BitMaster,

Thanks for your responses. You have pointed me to enough material that I now need to go into a corner and do my homework. I'm sure there will be more questions later - hopefully intelligent ones :-)!

Thanks again.

1) What is a reasonable filter to use to detect edge pixels?

A Sobel filter.
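A rough sketch of Sobel applied to a depth texture, in case it helps (uDepthTex is an assumed uniform name):

#version 130
// Rough sketch: Sobel filter over a depth texture (uDepthTex is an
// assumed name). Outputs the gradient magnitude as a grayscale mask.
uniform sampler2D uDepthTex;
out vec4 fragColor;

float depthAt(ivec2 p)
{
    return texelFetch(uDepthTex, p, 0).r;
}

void main()
{
    ivec2 p  = ivec2(gl_FragCoord.xy);
    float tl = depthAt(p + ivec2(-1,  1));
    float t  = depthAt(p + ivec2( 0,  1));
    float tr = depthAt(p + ivec2( 1,  1));
    float l  = depthAt(p + ivec2(-1,  0));
    float r  = depthAt(p + ivec2( 1,  0));
    float bl = depthAt(p + ivec2(-1, -1));
    float b  = depthAt(p + ivec2( 0, -1));
    float br = depthAt(p + ivec2( 1, -1));
    // Horizontal and vertical Sobel responses.
    float gx = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
    float gy = (tl + 2.0 * t + tr) - (bl + 2.0 * b + br);
    fragColor = vec4(vec3(length(vec2(gx, gy))), 1.0);
}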


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Spiro,

Thanks for the suggestion. What I've settled on for now is a very simple algorithm: since I have the z-buffer information, for every pixel I compute a "gradient" g using the formula

g = z_(i,j) - 0.125*[ z_(i+1,j) + z_(i+1,j+1) + z_(i+1,j-1) + z_(i,j-1) + z_(i,j+1) + z_(i-1,j) + z_(i-1,j+1) + z_(i-1,j-1) ]

Then I subtract an equal amount from every color component based on the value of "g" above. I chose to only darken areas further away while leaving closer areas unaltered. It works very well for my needs.
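In shader form, what I described looks roughly like this (uColorTex, uDepthTex, and uScale are my own placeholder names):

#version 130
// Rough sketch of the scheme above: g = center depth minus the average
// of its eight neighbors; the color is darkened in proportion to g.
// uColorTex, uDepthTex, and uScale are placeholder names.
uniform sampler2D uColorTex;
uniform sampler2D uDepthTex;
uniform float uScale; // darkening strength
out vec4 fragColor;

void main()
{
    ivec2 p   = ivec2(gl_FragCoord.xy);
    float sum = 0.0;
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
            if (dx != 0 || dy != 0)
                sum += texelFetch(uDepthTex, p + ivec2(dx, dy), 0).r;
    float g = texelFetch(uDepthTex, p, 0).r - 0.125 * sum;
    // Only darken where the center is farther away than its surroundings.
    vec3 color = texelFetch(uColorTex, p, 0).rgb;
    fragColor  = vec4(color - uScale * max(g, 0.0), 1.0);
}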

Now what I'm grappling with is the fact that my full image contains not only true geometry, but also labels which I want to leave unaltered. My thought is to separate true geometry and everything else into two frame buffer objects (with z buffering on both), applying the image processing on the geometry buffer at swap time, then combining both buffers into the true frame buffer at the end.

I still don't know how to do this, but I'm pretty sure it's doable and not difficult; I just need to study FBOs, which are still new to me.

Thanks!

amtri wrote:
"Now what I'm grappling with is the fact that my full image contains not only true geometry, but also labels which I want to leave unaltered. My thought is to separate true geometry and everything else into two frame buffer objects (with z buffering on both), applying the image processing on the geometry buffer at swap time, then combining both buffers into the true frame buffer at the end."
You can simply draw the labels on top of the postprocessed true geometry; no need for another FBO.

Omae Wa Mou Shindeiru

Lorenzo,

You are right: I could simply draw the labels on top of the postprocessed true geometry.

The problem is that I do not control the order in which graphics data comes in. It could be geometry/geometry/label/label, or geometry/label/geometry/label, etc. So what I will do is direct the geometry to one FBO and the labels to another, and in this way not worry about the order in which they come in.

When everything is done I will post-process the true geometry, then add the FBO with the labels on top of the final post-processed image.

I thought about enforcing some ordering for when geometry and label data are sent, but I simply have no control over it in the entire application; there's just too much going on in other parts of the code, so I need to accommodate it.

At swap time I'll combine them.
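Roughly what I have in mind for the combine step (all identifiers here are my own placeholders, and drawFullscreenQuad() is a hypothetical helper that draws a screen-aligned quad):

/* 1) Post-process the geometry layer into the default framebuffer. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(edgeDarkenProgram);     /* the depth-gradient shader above */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, geomColorTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, geomDepthTex);
drawFullscreenQuad();

/* 2) Blend the label layer on top using its alpha channel. */
glUseProgram(blitProgram);           /* plain textured-quad shader */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, labelColorTex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad();
glDisable(GL_BLEND);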

Thanks!

