DOF and particles etc

Last time I did DOF, I didn't apply DOF effects to my particles, since they don't have depth info. This looked better than the alternative of using whatever depth value is in the buffer behind the particle. What methods are other games using? Does anyone know of a game doing proper DOF with particles, i.e. is there a reasonably cheap method to do DOF with them (cheap as in not using depth peeling etc.)?
One trick is to compute the depth-of-field blur factor and output it into the alpha channel. This unfortunately means rendering out your particles in two passes in order to have them affect the depth of field, but it does work.
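As a sketch of what such a blur factor might look like (a simple linear falloff model; blurFactor, focusDist and focusRange are made-up names, and real implementations often use a physically based circle-of-confusion formula instead):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical blur-factor function for a simple linear DOF model:
// 0 at the focal plane, rising to 1 as the point leaves the in-focus
// range. The result would be written to the alpha channel for the
// DOF pass to read later.
float blurFactor(float viewZ, float focusDist, float focusRange)
{
    float f = std::abs(viewZ - focusDist) / focusRange;
    return std::min(f, 1.0f); // clamp to [0, 1]
}
```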
ta the Exorcist
I actually store the blur factor in the alpha channel already.
But I don't understand what you mean, do you mean rendering the DOF of the particles separately?
Yes, I suppose that would work, but that means having to do an expensive blur filter twice.
I think what Exorcist means is, if you're rendering your particles with z writes turned off, you will need to render them a second time with z writes turned on to write into the depth buffer used for your DOF effect.

That makes sense to me, but I've never implemented DOF, let alone DOF for particles ;) so I guess we'll have to see.
[size="1"]
That's not gonna work, since the particle's z values will overwrite the z values of the scenery behind it (only one z value per pixel), thus DOF on the scenery behind will look wrong, as it will be given the particle's DOF value.
Many games already render particles to a separate lower-res buffer... the simplest solution that I know of is to apply DOF to that separate buffer before re-applying it over the main render.

The benefits are that you get DOF on the pixels behind particles as well as on the particles themselves, AND the particle fill-rate cost is reduced to 1/4, balanced off by the cost of a full-screen pass to re-apply them.

Note - my experience is 100% console (360/PS3).
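A rough sketch of that pass structure (every function and render-target name here is a hypothetical placeholder for whatever your engine provides, declared only so the sketch compiles):

```cpp
// All names below are hypothetical engine hooks; substitute your
// renderer's real calls.
struct RenderTarget {};
extern RenderTarget sceneRT;     // full resolution, color + depth
extern RenderTarget particleRT;  // half width x half height (1/4 fill rate)
extern RenderTarget backBuffer;
void bindRenderTarget(RenderTarget&);
void drawOpaqueScene();
void drawParticles();                 // depth-tested against downsampled scene z
void applyDepthOfField(RenderTarget&);
void compositeOver(RenderTarget& src, RenderTarget& dst); // full-screen pass

void renderFrame()
{
    bindRenderTarget(sceneRT);
    drawOpaqueScene();
    applyDepthOfField(sceneRT);      // DOF on the opaque scene

    bindRenderTarget(particleRT);
    drawParticles();                 // particles only, at reduced resolution
    applyDepthOfField(particleRT);   // DOF on the particle layer too

    compositeOver(sceneRT, backBuffer);
    compositeOver(particleRT, backBuffer); // alpha-blend particles back on top
}
```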
I see how that can work for one layer of particles in front of an opaque background, but what about many layers of particles? If you render out particle z-values, you'd only get the correct blur factor for the first layer of particles. If there's an almost transparent particle in front of an almost opaque smoke cloud, the smoke cloud would get the wrong blur factor, which will probably look very bad.

One idea that jumps to my mind: could you write the z-values weighted by the opacity of the particles? This would solve the above issue, since the z-buffer would contain (small value) * (z of close almost transparent particle) + (1 - small value) * (z of smoke cloud). Of course, there might be situations where this looks bad, but maybe it looks good enough in practice?
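A minimal CPU-side illustration of that proposed blend (on the GPU this would just be alpha blending into a depth-like render target; blendDepth is a made-up name):

```cpp
// Opacity-weighted depth blend: each particle layer pulls the stored
// "depth" toward its own z, proportionally to how opaque it is.
// With opacity near 0 the background z survives almost unchanged;
// with opacity near 1 the particle's z dominates.
float blendDepth(float storedZ, float particleZ, float opacity)
{
    return opacity * particleZ + (1.0f - opacity) * storedZ;
}
```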
the only "real" solution to your problem is depth peeling.
there are various faked methods but they all have pretty huge edge cases.... most games opt for NOT applying DOF to particles.
Sorry zedz, communication breakdown, and a lot of people are completely missing the point.

It has *nothing* to do with writing to the depth buffer, or writing the Z distance to the alpha buffer. It's about storing the blur kernel-size/strength in the alpha channel. Instead of working out the amount of blur in the DOF shader, work it out in the forward render pass, and keep the blur strength in the alpha channel.

If you are already writing the blur strength out to alpha, I'm assuming you have set your write mask so that the particles *do not* write out to that channel?

So what you do in that case is a second pass of the particles that *do* write to the alpha channel, but *not* to the colour channels. Use this to modify the blur mask by writing out the blur-factor computed in the pixel shader.

Because it has nothing to do with depth itself, and the blur strength *is* blendable, for all intents and purposes it does not require depth peeling. It's far from 100% accurate (which can't be done short of perfect sorting order, etc.), but it does help substantially.


You will need to be tricky with your blend states, and possibly use a duplicate of that channel copied into its own texture for read/write reasons, depending on what hardware architecture you are aiming for.
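As a concrete (hypothetical) way to set that up in OpenGL, assuming the duplicate-channel route from the last paragraph: render the particles a second time into a dedicated blur-strength target, output the blur factor in the red channel and the particle opacity in alpha, and let ordinary alpha blending do the weighted accumulation. This is one possible reading of the approach, not necessarily how it was originally implemented:

```cpp
#include <GL/gl.h>

// One possible setup for the extra particle pass: a single-channel
// blur-strength render target is assumed to be bound before this is
// called. The (hypothetical) fragment shader is assumed to output
//   color.r = blur factor for this particle fragment
//   color.a = particle opacity
// so fixed-function alpha blending accumulates
//   dst.r = opacity * blurFactor + (1 - opacity) * dst.r
void setupBlurStrengthPass()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE); // particles still don't write depth
    // ...then draw the particles again with the blur-strength shader bound
}
```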
Interesting. If the blur strength is linear in z, i.e. blurStrength = blurStrengthFactor * z, then Exorcist's method is equivalent to writing the blended z to a depth buffer and computing the blur strength from the blended z (see my former post):

blur strength from blended z (with opacity_2 = 1 - opacity_1)
= blurStrengthFactor * (opacity_1 * z_1 + opacity_2 * z_2)
= blurStrengthFactor * opacity_1 * z_1 + blurStrengthFactor * opacity_2 * z_2
= opacity_1 * (blurStrengthFactor * z_1) + opacity_2 * (blurStrengthFactor * z_2)
= opacity_1 * blurFactor_1 + opacity_2 * blurFactor_2
= Exorcist's method

Now the blur strength is hardly linear in z in practice, so Exorcist's method is definitely the better one. However, the linearity might hold over a large z-range, so it might be worthwhile as an approximation.
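A quick numeric sanity check of that equivalence, with made-up values:

```cpp
#include <cstdio>

int main()
{
    const float k  = 0.01f;                 // blurStrengthFactor (assumed)
    const float z1 = 5.0f,  a1 = 0.2f;      // near, almost transparent particle
    const float z2 = 50.0f, a2 = 1.0f - a1; // smoke cloud behind it

    // Blur computed from the opacity-blended z:
    float fromBlendedZ = k * (a1 * z1 + a2 * z2);
    // Exorcist's method: blend the per-particle blur factors instead:
    float blendedBlur  = a1 * (k * z1) + a2 * (k * z2);

    // Identical whenever blur is linear in z (distributivity):
    std::printf("%f vs %f\n", fromBlendedZ, blendedBlur); // 0.41 vs 0.41
    return 0;
}
```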

