DoF - near field bleeding

Started by
12 comments, last by ChenMo 6 years, 7 months ago

I decided to implement the bokeh depth of field algorithm from here: http://www.crytek.com/download/Sousa_Graphics_Gems_CryENGINE3.pdf

Actually, I did the far field part and it works really nicely and fast. What I have big problems with, though, is the near field bleeding part (slide 43). To be honest, just reading these slides I don't see how the author could achieve this effect without any sort of blurring of the near field CoC. Or maybe he did that but didn't mention it.

Have a look here (yeah, I know; I'm an art master):

[attachment=30525:img.jpg]

Say to the left we have an out-of-focus region that is close to the camera and should bleed onto the background. Now, in slide 44 the author says that the near field CoC should be used for blending. However, when processing pixels belonging to the background, using this value will fill the entire background with the unblurred version of the color buffer, and any bleeding created in the DOF pass won't show up.

Any ideas on how to properly bleed the near field colors are very welcome.


It's your lucky day, since I happened to work on the exact shader you're talking about. ;)

What you said is exactly right, we blur the near field CoC. We store the CoC in the alpha channel which gets blurred along with the color. We blend the sharp color with the far (using sharp CoC) and then the near field over that result.

A few important things to take note of:

  • Clamp the near field alpha before the blurring process (saturate(nearField.a)). If you don't, your near field bleeding will turn into a big blob of solid color and will look awful. This isn't mentioned in the paper (it's something I fixed after the fact) but you've likely seen it in a lot of Sprite DOF implementations.
  • Make sure you're pre-multiplying your color with the CoC as mentioned in the paper (you can, and probably should, do this on the near field too). There are pics in the paper showing why this is important.
  • Don't forget to undo the above in the final blend pass just before blending so that your colors are restored to their original brightness (rgb /= alpha). Either check for 0 and don't divide, or make sure it can't be 0. Otherwise you'll probably get some darkening issues around focus boundaries in places.
  • "Scatter as gather" refers to the motion blur section of the paper, if it wasn't clear. Basically it means we reject samples using the min tile CoC like we do in motion blur. We only do this on the near field. The far field is tested against the full CoC.
  • If your blur amount/CoC is too weak, then you won't have enough "range" in your alpha to properly blend, causing a "soft focus" look. You can try to do some image balancing to bring the brightest point up to 1 to mitigate this and have a proper 0-1 blend range/mask, but we left this out because it's not really too important... and you need to figure out what the brightest pixel in the CoC map is in order to actually do this kind of compensation. Though, if you're really hellbent on it, you could probably merge it with an auto-exposure luminance downscaling shader if you have that.

Hope that helps. :)

Regarding this:


"Scatter as gather" refers to the motion blur section of the paper, if it wasn't clear. Basically it means we reject samples using the min tile CoC like we do in motion blur. We only do this on the near field. The far field is tested against the full CoC.

1. You meant "using the MAX tile"? From what I understand, for each tile we find max of near CoC values.

2. Let's say we're processing a pixel that has near CoC = 1 and is near the edge with the background (a pixel on the left hand side of the black line on my picture, near the line). Are we processing all samples in the kernel in this case, average them and output the result?

3. Let's say we're now processing a background pixel with CoC = 0. What samples should I use for this pixel and what weights should they have? From what I understand I definitely need samples with CoC = 0.0 (the left side ones) but I also need the center sample, right?

1. You meant "using the MAX tile"?

Sorry, yeah, I meant the max tile. :)

2. Let's say we're processing a pixel that has near CoC = 1 and is near the edge with the background (a pixel on the left hand side of the black line on my picture, near the line). Are we processing all samples in the kernel in this case, average them and output the result?

Yes, you always sample your entire kernel and then average the samples. Weighting is done in two ways (see below).

3. Let's say we're now processing a background pixel with CoC = 0. What samples should I use for this pixel and what weights should they have? From what I understand I definitely need samples with CoC = 0.0 (the left side ones) but I also need the center sample, right?

Same as above.

Weighting is done in two ways. First, you pre-multiply the color by the CoC value, so that all of the colors you sample will have been weighted against your CoC already. Second, you apply the max tile comparison during sampling. It looks something like:


sampleNearCoC >= maxCoC ? 1 : saturate(maxCoC - sampleNearCoC)

That only applies to the near field; far is just:


sampleFarCoC >= farCoC ? 1 : saturate(sampleFarCoC)

Okay, we're moving forward. But:

This:

sampleNearCoC >= maxCoC ? 1 : saturate(maxCoC - sampleNearCoC)

should be this:


sampleNearCoC >= maxCoC ? 1 : (1.0 - saturate(maxCoC - sampleNearCoC))

Also, I don't think that multiplying the near field value by the near field CoC is correct, because that will erase any colors from points with near CoC == 0 that should have other pixels bleed onto them.

I think I have one last problem: how do I decide whether to write near field or far field into the buffer? It becomes very problematic for pixels that should be blurred by both fields.


Okay, we're moving forward. But:
This:
sampleNearCoC >= maxCoC ? 1 : saturate(maxCoC - sampleNearCoC)
should be this:
sampleNearCoC >= maxCoC ? 1 : (1.0 - saturate(maxCoC - sampleNearCoC))
YMMV, whatever works for you is fine. :) The second method kills the near field for me as it rejects most samples.


Also, I don't think that multiplying near field value by near field coc is correct because that will erase any colors from points with near coc == 0 that should have other pixels bleed onto them.
Nah, you're pre-multiplying the near field with the near CoC, not applying it after blurring. The amount a sample contributes to a pixel depends on how strong that sample's CoC is, so things closer to the camera bleed onto other pixels more. A pixel with CoC == 0 will still sample pixels that have a higher CoC and thus will still have pixels bleed onto it.


I think I have one last problem: how do I decide whether to write near field or far field into the buffer? It becomes very problematic for pixels that should be blurred by both fields.
For near field/far field, we output both to separate textures using MRT and blur them separately. We do this in a single shader, first blurring the near field (if the max tile has CoC) and then the far field. Then we composite both textures with the sharp image, starting with far and then near.

If you're not using multiple passes to increase the sample count as mentioned in the paper, then an alternative idea is to try the CoD approach which doesn't separate the near/far into different textures but performs a scatter-as-gather just like the McGuire motion blur approach. You can read about that here: http://advances.realtimerendering.com/s2014/sledgehammer/Next-Generation-Post-Processing-in-Call-of-Duty-Advanced-Warfare-v17.pptx

Okay, now I get the whole algorithm, I think.

But to be honest, before you posted your first answer I had figured out a slightly different way to solve my problem. My algorithm differs in that instead of processing the near and far fields in one pass and outputting to two separate buffers, I first run the far field pass and then, on that buffer, I apply the near field. I also do the near CoC blurring in separate passes (my fields' buffers don't have alpha - R11G11B10F).

I have the whole demo here: http://maxest.gct-game.net/stuff/wsterna-dof.zip

It contains both the exe as well as the project files with the dependencies needed to compile the project from the ground up.

Nevertheless, thanks a lot for your comments, because I think I will go for Sousa's original solution - first to compare, and second because I think it will yield better results than my current implementation.

Awesome. :D Sounds like a good experiment. Thanks for sharing the demo too, looks good. :D

Glad you like it. :)

I forgot to mention two things:

1. Controls:
WSAD + mouse

LSHIFT to speed up

INSERT / DELETE - focus plane distance

HOME / END - transition area size

PAGE UP/DOWN - dof strength

ESC - exit

F4 - recompile shaders

2. When you exit with ESC the app will dump a file, profiler.txt, with measurements. Each measurement is taken as an average over 1000 consecutive frames.

Sorry for bothering you again, but I decided to try Sousa's original approach and I'm having difficulties once more. Which is strange, because I'm pretty sure that back when we were posting here and I was experimenting, I thought I had this working.

Nevertheless, I simply have problems with applying the near field over my sharp image. Here is a picture of my near field render target:

[attachment=30742:img1.jpg]

Here's the sharp picture:

[attachment=30743:img2.jpg]

And finally the sharp picture multiplied by (1 - cocNear):

[attachment=30744:img3.jpg]

I suppose I should somehow blend the first picture onto the third picture, but no matter what I do I get very poor results.

