# DoF - near field bleeding

## Recommended Posts

I decided to implement the bokeh depth of field algorithm from here: http://www.crytek.com/download/Sousa_Graphics_Gems_CryENGINE3.pdf

Actually, I did the far field part and it works really nicely and fast. What I'm having big problems with, though, is the near field bleeding part (slide 43). To be honest, just reading these slides I don't see how the author could achieve this effect without some sort of blurring of the near field CoC. Or maybe he did that but didn't mention it.

Have a look here (yeah, I know; I'm an art master):

[attachment=30525:img.jpg]

Say to the left we have an out-of-focus region that is close to the camera and should bleed onto the background. Now, in slide 44 the author says that the near field CoC should be used for blending. However, when processing pixels belonging to the background, using this value will fill the entire background with the unblurred version of the color buffer, and any bleeding created in the DoF pass won't show up.

Any ideas on how to properly bleed the near field colors are very welcome.

Edited by maxest

##### Share on other sites

Regarding this:

"Scatter as gather" refers to the motion blur section of the paper, if it wasn't clear. Basically it means we reject samples using the min tile CoC like we do in motion blur. We only do this on the near field. The far field is tested against the full CoC.

1. You meant "using the MAX tile"? From what I understand, for each tile we find the max of the near CoC values.

2. Let's say we're processing a pixel that has near CoC = 1 and is near the edge with the background (a pixel on the left hand side of the black line on my picture, near the line). Are we processing all samples in the kernel in this case, averaging them and outputting the result?

3. Let's say we're now processing a background pixel with CoC = 0. What samples should I use for this pixel and what weights should they have? From what I understand I definitely need samples with CoC = 0.0 (the left side ones) but I also need the center sample, right?

##### Share on other sites

1. You meant "using the MAX tile"?

Sorry, yeah, I meant the max tile.
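For illustration, building the max tile could be sketched like this on the CPU (a hedged sketch with assumed names; the tile size and the flat row-major buffer layout are my assumptions, and a real implementation would do this in a downsampling pass on the GPU):

```c
#define TILE 4  /* hypothetical tile size; pick whatever matches your kernel radius */

/* Max of the near-field CoC over one tile (tx, ty). The per-pixel near CoC
   buffer is assumed to be a flat row-major array of floats, `width` pixels wide. */
float tile_max_near_coc(const float *nearCoC, int width, int tx, int ty)
{
    float m = 0.0f;
    for (int y = 0; y < TILE; ++y)
        for (int x = 0; x < TILE; ++x) {
            float c = nearCoC[(ty * TILE + y) * width + tx * TILE + x];
            if (c > m)
                m = c;
        }
    return m;
}
```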

2. Let's say we're processing a pixel that has near CoC = 1 and is near the edge with the background (a pixel on the left hand side of the black line on my picture, near the line). Are we processing all samples in the kernel in this case, averaging them and outputting the result?

Yes, you always sample your entire kernel and then average the samples. Weighting is done in two ways (see below).

3. Let's say we're now processing a background pixel with CoC = 0. What samples should I use for this pixel and what weights should they have? From what I understand I definitely need samples with CoC = 0.0 (the left side ones) but I also need the center sample, right?

Same as above.

Weighting is done in two ways. First, you pre-multiply the color by the CoC value, so that all of the colors you sample will already have been weighted against your CoC. Second, you apply the max tile comparison during sampling. It looks something like:

sampleNearCoC >= maxCoC ? 1 : saturate(maxCoC - sampleNearCoC)

That only applies to the near field; far is just:

sampleFarCoC >= farCoC ? 1 : saturate(sampleFarCoC)
Edited by Styves

##### Share on other sites

Okay, we're moving forward. But:

This:

sampleNearCoC >= maxCoC ? 1 : saturate(maxCoC - sampleNearCoC)


should be this:

sampleNearCoC >= maxCoC ? 1 : (1.0 - saturate(maxCoC - sampleNearCoC))
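For concreteness, here is a small C translation of the two weight expressions, using the corrected near term (a sketch only; `saturatef` stands in for HLSL's saturate, and the names mirror the snippets in this thread):

```c
/* clamp to [0,1], like HLSL saturate() */
static float saturatef(float x) { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); }

/* Corrected near-field sample weight: full weight when the sample's CoC
   reaches the tile max, fading out as it falls below it. */
float near_weight(float sampleNearCoC, float maxCoC)
{
    return sampleNearCoC >= maxCoC ? 1.0f
                                   : 1.0f - saturatef(maxCoC - sampleNearCoC);
}

/* Far-field sample weight, as given earlier in the thread. */
float far_weight(float sampleFarCoC, float farCoC)
{
    return sampleFarCoC >= farCoC ? 1.0f : saturatef(sampleFarCoC);
}
```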


Also, I don't think that multiplying the near field value by the near field CoC is correct, because that will erase any colors from points with near CoC == 0 that should have other pixels bleed onto them.

I think I have one last problem: how do I decide whether to write the near field or the far field into the buffer? It becomes very problematic for pixels that should be blurred by both fields.

##### Share on other sites

Okay, now I think I get the whole algorithm.

But to be honest, before you posted your first answer I had figured out a slightly different way to solve my problem. My algorithm differs in that instead of processing the near and far fields in one pass and outputting them to two separate buffers, I first run the far field pass and then apply the near field on top of that buffer. I also do the near CoC blurring in separate passes (my fields' buffers don't have alpha - R11G11B10F).

I have the whole demo here: http://maxest.gct-game.net/stuff/wsterna-dof.zip

It contains both the exe and the project files with the dependencies needed to compile the project from the ground up.

Nevertheless, thanks a lot for your comments, because I think I will go for Sousa's original solution - first to compare, and second because I think it will yield better results than my current implementation.

##### Share on other sites

Awesome. :D Sounds like a good experiment. Thanks for sharing the demo too, looks good. :D

##### Share on other sites

I forgot to mention two things:

1. Controls:

LSHIFT to speed up

INSERT / DELETE - focus plane distance

HOME / END - transition area size

PAGE UP/DOWN - dof strength

ESC - exit

2. When you exit with ESC the app will dump a profiler.txt file with measurements. Each measurement is an average over 1000 consecutive frames.

Edited by maxest

##### Share on other sites

Sorry to bother you again, but I decided to try Sousa's original approach and am having difficulties once more. Which is strange, because I'm pretty sure that when I was experimenting back when we were posting here, I thought I had this working.

Nevertheless, I simply have problems with applying the near field over my sharp image. Here is a picture of my near field render target:

[attachment=30742:img1.jpg]

Here's the sharp picture:

[attachment=30743:img2.jpg]

And finally the sharp picture multiplied by (1 - cocNear):

[attachment=30744:img3.jpg]

I suppose I should somehow blend the first picture onto the third picture, but no matter what I do I get very poor results.

##### Share on other sites

You don't need to premul the sharp image by 1-cocNear. :) You just need to premul far and near.

What you want is to blend from back to front, something like this:

float4 sharp = tex2D(sharpTex, coord);
float4 near = tex2D(nearTex, coord);
float4 far = tex2D(farTex, coord);

// undo premultiplication
far.rgb /= far.a > 0 ? far.a : 1.0; // catching division by 0
far.a = saturate(far.a);

near.rgb /= near.a > 0 ? near.a : 1.0; // catching division by 0
near.a = saturate(near.a); // you can multiply by some value to avoid the "soft focus" effect at low CoC values

// compute sharp far CoC at full res (in case farTex is half res)
float depth...
float farCoC...

// composite, starting from back to front
float4 final = lerp(sharp, far, farCoC); // SHARP far CoC, not blurred, can be stored in sharp or far alpha if those are full res
final = lerp(final, near, near.a); // near.a is BLURRED near CoC
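To sanity-check the math, the same back-to-front composite can be written as plain C (a sketch under my own assumptions: `float4` and `lerp4` are stand-ins for the HLSL types and intrinsics, the parameters stand in for the texture fetches, and farCoC is the sharp full-res far CoC):

```c
typedef struct { float r, g, b, a; } float4;

static float saturatef(float x) { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); }

/* componentwise lerp, like HLSL lerp() */
static float4 lerp4(float4 a, float4 b, float t)
{
    float4 o = { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
    return o;
}

/* Back-to-front composite from the post above, undoing the
   premultiplication of the far and near buffers first. */
float4 composite(float4 sharp, float4 far, float4 near, float farCoC)
{
    float invFar = far.a > 0.0f ? far.a : 1.0f;   /* avoid division by 0 */
    far.r /= invFar; far.g /= invFar; far.b /= invFar;
    far.a = saturatef(far.a);

    float invNear = near.a > 0.0f ? near.a : 1.0f;
    near.r /= invNear; near.g /= invNear; near.b /= invNear;
    near.a = saturatef(near.a);

    float4 final = lerp4(sharp, far, farCoC);  /* sharp -> far */
    return lerp4(final, near, near.a);         /* then near over the result */
}
```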

##### Share on other sites

Hm, I think the key here was:

near.a = saturate(near.a); // you can multiply by some value to avoid the "soft focus" effect at low CoC values


[attachment=30785:1.jpg]

After doing:

near.a = saturate(2.0 * near.a)


I got this:

[attachment=30786:2.jpg]
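Numerically, that ×2 remap just pushes the blurred near CoC toward full opacity sooner, so partial coverage no longer washes the whole region out (a tiny C check; the 2.0 factor is the value used above and is a tweakable, not a canonical constant):

```c
static float saturatef(float x) { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); }

/* Remap the blurred near CoC so that values >= 0.5 already blend
   at full opacity, avoiding the "soft focus" look at low CoC. */
float near_blend(float nearCoC) { return saturatef(2.0f * nearCoC); }
```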

But to be honest I still kind of prefer my old approach, even though it's less correct :P. I actually have some small ideas on how to improve my first demo, so I hope to make another demo with both solutions and be able to switch between them.

##### Share on other sites

Nice work.

I have been working on depth of field for a few days, and I studied the same paper as you. After reading your discussion, I understand it better now. Thanks.

Edited by ChenMo
