gatisk

Depth of field and semitransparent objects ??


Hello. I have a basic understanding of the real-time depth of field effect. I know how to render my geometry, output depth/blur information into the alpha channel, and finally blend between the sharp image and a downsampled version using those values. So far so good, but what do I do with semitransparent objects (alpha-blended objects) like smoke trails, particles, etc.? First, I'm already using the alpha channel for blending, so I can't store depth/blur values there. Second, how do I blur these objects? They are semitransparent, which means background objects are visible through them. Any ideas?

As usual, there are multiple ways to do it. But the most common approach would probably be:

1. Measure the "blur factor" per pixel (usually based on depth), and store this data in a texture (via an FBO, for example).
2. Measure where the player is focusing (for example, calculate the distance between the camera position and the depth of the pixel at the center of the screen, or take an average over a few center pixels).
3. Render the scene normally.
4. Take a snapshot of the "normal screen" and blur it.
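Steps 1 and 2 could be sketched like this (a minimal CPU-side sketch in Python; in practice this lives in a fragment shader, and names like `focal_depth` and `focal_range` are illustrative assumptions, not a fixed convention):

```python
def blur_factor(pixel_depth, focal_depth, focal_range):
    """Map a pixel's depth to a 0..1 blur amount (step 1).

    0 = in focus, 1 = fully blurred. `focal_depth` is the autofocus
    distance, `focal_range` controls how fast things go out of focus.
    """
    f = abs(pixel_depth - focal_depth) / focal_range
    return min(max(f, 0.0), 1.0)  # clamp to [0, 1]

def autofocus(center_depths):
    """Average a few depth samples around the screen center (step 2)."""
    return sum(center_depths) / len(center_depths)

# Example: camera focused at depth ~10, focal range 5.
focus = autofocus([9.5, 10.0, 10.5])
print(blur_factor(10.0, focus, 5.0))  # at the focus plane: 0.0
print(blur_factor(20.0, focus, 5.0))  # far behind it: 1.0
```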

These depth values are usually placed into a texture used specifically for the blurring, so it has nothing to do with alpha channels or blending. One way is to render your scene with a shader that outputs each pixel's distance to the camera. These are floating-point values, so it's a good idea to store them in an FBO with a floating-point format.

In this pass you can output more than just the distances. You could skip transparent objects if you like, or give them a lower "blur factor".

After that, you need to combine the normal scene with a blurred version. You could do this by storing the screen contents in another texture and then blending a blurred copy on top of it. This way it's easy to increase or decrease the blur, or disable it entirely. The blending is driven by that "depth texture" you made earlier in the special pass, so again it has nothing to do with alpha blending.
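That final combine step is just a per-pixel lerp between the sharp and blurred images, weighted by the blur factor. A simplified stand-in for the full-screen shader pass, using flat lists of grayscale values:

```python
def composite_dof(sharp, blurred, blur_factors):
    """Blend the sharp frame with its blurred copy per pixel.

    `blur_factors` comes from the depth/blur texture of the earlier
    pass: 0 keeps the sharp pixel, 1 takes the blurred one.
    """
    return [s * (1.0 - b) + bl * b
            for s, bl, b in zip(sharp, blurred, blur_factors)]

sharp   = [1.0, 0.5, 0.0]
blurred = [0.5, 0.5, 0.5]
factors = [0.0, 0.5, 1.0]   # in focus .. fully blurred
print(composite_dof(sharp, blurred, factors))  # [1.0, 0.5, 0.5]
```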

However, if a transparent object (let's say a window) was rendered in that pass, the camera might focus on the window when looking straight at it. That means the window itself won't be blurred, but the things behind it may get more blur, since they are further away from the focus point. If you don't render the window in that pass, the autofocus looks right through it, and the focus point will land somewhere on the stuff behind the window. In that case the stuff behind is rendered sharp, and so is the window, but the surface around the window (and the window pixels away from the center) will be blurred. You can always adjust the blur by providing extra information in the "depth measurement pass".

greetings,
Rick

Thanks for the answer. So as I understand it: we render only solid geometry as depth values into a separate buffer, then render everything normally, including transparent objects, and create a blurred texture from that. Finally we blend the sharp and blurred versions together using the depth information. Doesn't that mean semitransparent objects will be blurred exactly as much as the solid objects behind them (because each pixel holds a single depth)? I don't think that will look correct.

I think separate depth, sharp, and blurred textures are needed for solid and transparent objects. But there are two problems:
1) Several transparent meshes can overlap in the same screen-space region.
2) With autofocus, how do you choose: the transparent surface in front, or the solid/transparent objects behind it?
Personally, I either don't include transparent objects in the depth map or handle them manually; it depends on the location, the solid meshes around them, etc.

There isn't really a great general purpose solution to this that's very fast.

You could render the opaque stuff in your scene, and do the downsampling & DOF blur to your target buffer. Then, for each semi-transparent thing (in sorted z order), render it sharp over the opaque stuff, and re-do your downsampling & DOF (presumably clipped to just nearby the newly drawn thing). Not fast, but I think about as close to "correct" as you're going to get.
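That per-object loop could be sketched in 1D like this (a pure-Python stand-in: `box_blur` takes the place of the real DOF blur, and the scene/object representation is invented for illustration, not a real renderer API):

```python
def box_blur(img, radius=1):
    """Tiny stand-in for the real DOF blur: a 1D box filter."""
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - radius), min(len(img), i + radius + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def alpha_over(dst, src, alpha, start):
    """Composite a semi-transparent span over dst at offset `start`."""
    out = list(dst)
    for i, s in enumerate(src):
        out[start + i] = s * alpha + out[start + i] * (1.0 - alpha)
    return out

# 1. Opaque scene, blurred once.
frame = box_blur([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])

# 2. For each transparent object, back to front: draw it sharp,
#    then re-do the blur (here over the whole frame; in practice
#    you'd clip to the region around the object).
transparents = [([1.0, 1.0], 0.5, 2)]  # (pixels, alpha, start)
for pixels, alpha, start in transparents:
    frame = alpha_over(frame, pixels, alpha, start)
    frame = box_blur(frame)
```

Expensive, since the blur runs once per transparent object, but each object gets composited sharp and then correctly mixed into the blur, which is why it approaches the "correct" result.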

You could also try rendering all the opaque geometry first, then render your semi-transparent stuff using the depth value as a texture mip bias factor. It wouldn't handle edges correctly, but it'd be much faster.
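The mip-bias idea amounts to sampling an increasingly downsampled copy of the scene the blurrier a pixel should be. A toy CPU-side sketch (the function name and the nearest-level, nearest-texel lookup are simplifications, not a real texture-sampling API):

```python
def sample_with_bias(mip_chain, u, blur):
    """Sample a 1D mip chain, biased toward coarser levels by `blur`.

    mip_chain[0] is full resolution; each level is half the size.
    blur in [0, 1] maps linearly onto the available mip levels.
    """
    level = min(int(blur * (len(mip_chain) - 1) + 0.5),
                len(mip_chain) - 1)
    row = mip_chain[level]
    return row[min(int(u * len(row)), len(row) - 1)]

# Toy downsample chain: full res, half res, coarsest average.
mips = [[0.0, 1.0, 0.0, 1.0], [0.5, 0.5], [0.5]]
print(sample_with_bias(mips, 0.3, 0.0))  # sharp: full-res texel
print(sample_with_bias(mips, 0.3, 1.0))  # blurry: coarsest level
```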

Anyways, there are a number of other approaches that span the difference between those two in terms of complexity and speed. It all depends on what's most important for you.
