DOF, Refraction, Particles, Compromises.

There are lots of cool effects out there, and individually they are pretty straightforward. What usually isn't explained is how you fit two or more of them together. In reality you probably can't, so the question becomes: what can you get away with? For example:

- Depth of field: pretty straightforward if you are only considering opaque objects. But how do you handle a transparent window in a DOF-blurred building?
- Refraction and heat effects: pretty straightforward until they overlap, or are mixed with particles. With a single plane of water you could sort particles into in front and behind. Multiple heat-haze effects could be accumulated if there are no particles between them.
- Particles and transparency: pretty straightforward, apart from the above.

I have been banging my head against this for a while. Is there any general way of fitting these together? Otherwise, what constraints do you put up with, and how do they not become a problem?
I've been experiencing similar problems!

I have a wall with 5 windows, each with a different post-process effect (e.g. glass distortion, heat haze, etc.).

Just implemented depth of field, and it seems the only way you can make it look right is to do the following:

1) Render the main scene (without any post-process geometry, so no window effects here, just the regular scene as seen through them)

2) Apply depth of field, blurring the true scene image based on distance in the correct parts

3) THEN render the post-process geometry, e.g. frosted glass, into the windows. The actual glass geometry won't be blurred, giving crisp divisions at the glass surface, as should be the case, but the image seen through the glass will be blurred correctly.

... however, if the glass ITSELF is to be blurred too (since the actual glass pane lies at a blurred depth), then I guess it looks wrong... lol...
Then of course you could render the scene with the post-process geometry (no depth writes) and apply the DOF last, to ensure the actual window (e.g. the glass) is ALSO blurred... Although, thinking about it, the glass would be blurred according to the depth of the objects seen through the window, since it has no depth write itself... oh...

Perhaps render the scene without the glass, blur it according to depth, then do a separate depth blur in a separate texture for all post-process geometry based on ITS depth in the scene, and map it back on.
So you'd have the image THROUGH the glass blurred correctly by its depth (and hence the glass distortions distort a correctly blurred scene), but the details of the glass blurred according to THEIR depth... so you could have a sharp window with a blurred scene...

lol..

confusing..

definitely interested in hearing any suggestions!


At the moment I just apply depth of field last over everything (including post-process geometry, like glass windows or world-space-oriented heat-haze quads). If you're looking hard it doesn't look quite right, I guess, but since everything does it the same way your brain can actually convince you it looks normal for the most part.
That single glass layer case with DOF is harder than I first thought.

If the glass is at a blurred depth but does not contain any ripples then it should not add any blur to the refracted component.

The only way I can think to do it correctly would be to render the background, the window, and the window's x-y offsets to three separate textures and apply DOF to them separately. This is HIDEOUS!

- Render the opaque background and its depth to textures.
- Render the window colour to a new texture, clipped by the z-values of the opaque background. (Omit this texture if the window has no colour.)
- Render the window again to another texture, also clipped, this time writing out x-y refraction offsets. Store x^2 and y^2 in z and w so we can do variance-type maths later.
- Render the window depth to a texture.
- Apply DOF to the background, the window colour and the window offsets separately.
- Compose these three into a new texture, using the DOF-ed offsets and their variances to sample the DOF-ed background, then overlaying the DOF-ed window colour (roughly sketched below).
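
In shader terms the compose step might look roughly like this. It is only a sketch: the texture names are invented, it assumes the three DOF-ed textures above are already bound, and the window-depth texture is only needed by the earlier per-layer DOF passes, not here.

// Hypothetical composite pass for the approach above. All names are
// made up; it assumes the three inputs have each had DOF applied:
//   bgTex     - DOF-ed opaque background colour
//   glassTex  - DOF-ed window colour (rgb) and coverage (a)
//   offsetTex - DOF-ed refraction offsets: xy = mean offset,
//               zw = mean x^2 / y^2, kept for the variance maths
void composite_frag(float2 uv : TEXCOORD0,
                    uniform sampler2D bgTex     : TEXUNIT0,
                    uniform sampler2D glassTex  : TEXUNIT1,
                    uniform sampler2D offsetTex : TEXUNIT2,
                    out float4 color : COLOR0)
{
    float4 off = tex2D(offsetTex, uv);

    // Variance of the blurred offsets: E[x^2] - E[x]^2. A fuller
    // version would use this to soften the refracted sample; here
    // it is only computed to show where it comes from.
    float2 variance = off.zw - off.xy * off.xy;

    // Sample the DOF-ed background through the DOF-ed mean offset.
    float4 refracted = tex2D(bgTex, uv + off.xy);

    // Overlay the DOF-ed window colour by its coverage.
    float4 glass = tex2D(glassTex, uv);
    color = lerp(refracted, glass, glass.a);
}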

...I don't think it's worth it :)
DOF with glass is very simple. If your alpha channel is free, store eye-space distance per vertex (per pixel is more accurate) in the alpha channel, and do your DOF as a post-process using that depth value. This way DOF works uniformly on all objects, since you manually output each object's eye-space depth into a floating-point buffer. I'm assuming you're doing bloom with HDR; DOF then comes at very little cost alongside bloom, since you can just use the bloom-blurred image as the DOF image.

Particles are a whole new story: you'll have to render them after all other objects have been rendered, using already-discussed techniques you might find online. I'd suggest soft particles.

I have found no compromises needed for DOF, refraction and particles. I've had to compromise on other techniques, but not on the ones you have mentioned.

One last note, if your alpha channel isn't free, you can use another rendertarget/color attachment.

You don't need to compromise on any of the mentioned effects.
Quote:Original post by dmonte
..
Particles are a whole new story: you'll have to render them after all other objects have been rendered, using already-discussed techniques you might find online. I'd suggest soft particles.

I have found no compromises needed for DOF, refraction and particles. I've had to compromise on other techniques, but not on the ones you have mentioned.
..
You don't need to compromise on any of the mentioned effects.


Let me clarify: I am talking about compromises when combining these techniques.

Take soft particles with DOF. What depth do you compare against for the soft particle? You should really be comparing against some sort of fuzzy depth value (variance or PCF?), or you get incorrect hard clipping even with soft particles.

This hard edge would happen if a soft particle is poking out from behind an opaque object. Without considering DOF, this is correct behavior. The problem is that with DOF you still get this hard edge even though it is no longer correct. A blurred edge of an opaque object should fuzzily overlap anything (such as a particle or other transparent object) in the background.
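
To make that concrete, a minimal soft-particle fade looks something like the sketch below (all names invented). The point is that sceneDepthTex, normally the crisp scene depth, would need to be a blurred or variance-style copy of the depth for the fade to follow the DOF-blurred silhouette:

// Minimal soft-particle sketch; names are illustrative only.
void particle_frag(float2 uv        : TEXCOORD0,
                   float2 screenUV  : TEXCOORD1, // from the VS: clip xy/w remapped to [0,1]
                   float  particleZ : TEXCOORD2, // particle eye-space depth, from the VS
                   uniform sampler2D particleTex   : TEXUNIT0,
                   uniform sampler2D sceneDepthTex : TEXUNIT1,
                   uniform float fadeScale, // controls fade softness
                   out float4 color : COLOR0)
{
    // Linear eye-space depth of whatever is behind the particle.
    // Under DOF this should sample a *blurred* copy of the depth, so
    // the fade follows the blurred edge, not the crisp silhouette.
    float sceneZ = tex2D(sceneDepthTex, screenUV).x;

    // Fade the particle out as it nears (or passes behind) the scene depth.
    float fade = saturate((sceneZ - particleZ) * fadeScale);

    float4 c = tex2D(particleTex, uv);
    color = float4(c.rgb, c.a * fade);
}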
No, there still isn't an issue. As I mentioned, output eye-space distance manually and use that to calculate your DOF. You will find that the DOF can be applied uniformly over all these objects. Combining them poses no issue at all once you manually output depth in eye space.

Edit
Here are some screens demonstrating refraction + DOF as well:

This image shows DOF affecting a refractive surface over an opaque one; I'll move closer to the refractive surface in the next image.
[screenshot: DOF far]


This image shows the refractive surface up close over the opaque one.
[screenshot: DOF-lesser near]

You can easily support particles as well (soft particles too; I have them in my engine). Just use eye-space distance manually. (Ignore the artifact; I was in the middle of making a highlighting tool when I took these screens.)

[Edited by - dmonte on February 12, 2009 2:33:42 AM]
Frankly I don't understand what dmonte is saying; depth is depth, no matter how you output it or what channel you store it in. If a window is opaque it writes only its own depth, not that of what is behind it. If it is 100% transparent it doesn't write its own depth, only what is behind it. Either way, you can't mix depths or you get errors. Am I missing something?

I can see allowing some depth mixing for something like a particle; they are small and the errors might not be so noticeable. But for semi-transparent windows I think you need a special-case solution, like multiple depth textures and compositing; basically, just avoid this case if you can.
What I'm saying is, you should manually output eye space distance. Kindly look at the images in my previous post. I have edited it to show that it works. If you manually output depth, even transparent objects will have true depth.
What do you mean by "manually output eye space distance"? (I can't tell exactly what your screenshots represent, for some reason.)
I'll give you an example:

void vertex_program(float4 position : POSITION,
                    float2 texcoord : TEXCOORD0,
                    uniform float4x4 mvp,        // model-view-projection matrix
                    uniform float4x4 modelview,  // model-view matrix

                    out float4 oPos : POSITION,
                    out float3 oTexcoord : TEXCOORD0)
{
    oPos = mul(mvp, position);

    // Eye-space position; its length is the eye-space distance
    // we want the DOF pass to see for this vertex.
    float4 eyePos = mul(modelview, position);

    oTexcoord = float3(texcoord, length(eyePos.xyz));
}

void frag_program(float3 texcoord : TEXCOORD0,
                  uniform sampler2D samp : TEXUNIT0,

                  out float4 color : COLOR0)
{
    // Colour in rgb, manually chosen eye-space distance in alpha.
    color = float4(tex2D(samp, texcoord.xy).xyz, texcoord.z);
}

and then use the eye-space length in your post-process DOF shader.

Basically, you'll have to change the depth output based on the material, manually modifying the depth value where necessary. Painful, but it should work. This depth is not the actual depth, just what is fed to the DOF shader.
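
For example, the post-process side could look roughly like this. It is only a sketch: the sampler and uniform names are illustrative, and the blurred input can simply be the bloom image mentioned earlier:

// Hypothetical full-screen DOF pass consuming the eye-space
// distance written into alpha by the shaders above.
void dof_frag(float2 uv : TEXCOORD0,
              uniform sampler2D sceneTex   : TEXUNIT0, // rgb = colour, a = eye-space distance
              uniform sampler2D blurredTex : TEXUNIT1, // pre-blurred copy (e.g. the bloom image)
              uniform float focalDist,  // eye-space distance in perfect focus
              uniform float focalRange, // how quickly blur ramps up around it
              out float4 color : COLOR0)
{
    float4 scene   = tex2D(sceneTex, uv);
    float3 blurred = tex2D(blurredTex, uv).xyz;

    // Simple circle-of-confusion factor from the stored distance.
    float coc = saturate(abs(scene.a - focalDist) / focalRange);

    // Blend between the sharp and blurred images.
    color = float4(lerp(scene.xyz, blurred, coc), 1.0);
}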

Edit: The screenshots show a refractive transparent surface over an opaque surface, with DOF applied to both.

[Edited by - dmonte on February 12, 2009 1:28:14 PM]

This topic is closed to new replies.
