Need Help: blood particles in an unlit environment

7 comments, last by David Neubelt 12 years, 5 months ago
Well, I've got an issue with lighting particles in a deferred renderer. With lots of lights and alpha-blended particles, (fake) lighting these particles in a proper way is difficult.

For light-emitting particles (fire) or particles that are always close to light sources (smoke) it is manageable, but once I have particles in dark or lit areas (blood, fog) I have a problem.

In dark areas the particles are just too bright, in lit areas they are too dark. Without alpha blending I could utilize the deferred lighting pipeline, but I want to use at least soft particles, which need alpha blending for proper integration. Any tips or tricks? It doesn't have to be a perfect, realistic solution; a little fake would help too.
Traditional forward lighting techniques?

e.g. find the most influential light sources in the area surrounding the particle system (CPU side), and consolidate them into a single area light source for use in the particle shader.
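(Not from the thread, just a rough CPU-side sketch of that consolidation idea; the types, names and the linear falloff are my own assumptions.)

[code]
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical types for illustration only.
struct PointLight        { float pos[3]; float color[3]; float radius; };
struct ConsolidatedLight { float pos[3]; float color[3]; };

// Merge the lights that most influence a particle system's position into a
// single representative light, weighted by a simple linear attenuation.
ConsolidatedLight ConsolidateLights(const std::vector<PointLight>& lights,
                                    const float systemPos[3])
{
    ConsolidatedLight out = {};
    float totalWeight = 0.0f;

    for (const PointLight& l : lights)
    {
        const float dx = l.pos[0] - systemPos[0];
        const float dy = l.pos[1] - systemPos[1];
        const float dz = l.pos[2] - systemPos[2];
        const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

        // Linear falloff; lights beyond their radius are ignored.
        const float w = std::max(0.0f, 1.0f - dist / l.radius);
        if (w <= 0.0f)
            continue;

        for (int i = 0; i < 3; ++i)
        {
            out.pos[i]   += l.pos[i]   * w;  // weighted position sum
            out.color[i] += l.color[i] * w;  // accumulated, attenuated color
        }
        totalWeight += w;
    }

    if (totalWeight > 0.0f)
        for (int i = 0; i < 3; ++i)
            out.pos[i] /= totalWeight;       // weighted average position

    return out; // upload as shader constants for the particle draw call
}
[/code]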
Thx for the tip, Hodgman. I've implemented several particle techniques:
- forward-rendered unlit additive/alpha-blended particles
- forward-rendered fake-lit (based on ambient and active light sources)/alpha-blended particles
- standard deferred lighting pipeline

In my context (a dark environment with several hundred point light sources and no global light source), I don't get a very homogeneous light setting, so none of the forward-rendered approaches works very well under changing lighting conditions. The particle system takes a fire-and-forget approach and the camera is the center of the particle and lighting system. E.g. I might have 10 light sources on the right side and none on the left; the average fake light value would then be too bright for particles on the left side.

The only really working way is the standard deferred lighting pipeline, which makes alpha blending/soft particles almost impossible :(
You already use the deferred-transparency via stippling trick for your water, right? That (or something closer to inferred rendering) might be of use here too. If you write the particle's alpha out to the G-buffer, you could still do soft particles, and a 2x2 stipple pattern could give you 1 opaque layer + 3 layers of transparency. It might get a bit hairy with lots of overlapping particles though, unless you did the inferred 2-pass rendering technique.
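(For what it's worth, the 2x2 stipple assignment mentioned above could look roughly like this; plain C++ rather than shader code, and the slot numbering is just my assumption.)

[code]
// Map a screen pixel to one of four slots inside its 2x2 block.
// Slot 0 could hold the opaque scene, slots 1-3 one transparent layer each,
// so every layer only touches the G-buffer pixels belonging to its slot.
int StippleSlot(int pixelX, int pixelY)
{
    return (pixelY & 1) * 2 + (pixelX & 1); // 0..3 inside each 2x2 block
}

// A particle fragment assigned to transparency layer 'layer' (1..3) would be
// discarded unless it lands on its own slot; the lighting/composite pass then
// gathers the four slots per block and blends them with the stored alphas.
bool PassesStipple(int pixelX, int pixelY, int layer)
{
    return StippleSlot(pixelX, pixelY) == layer;
}
[/code]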
You have a particle system that renders batched particles. You draw them in one call I guess, and you need to light every single one of them separately, based on their distance from the average light sum in the current scene?
Something like this (excuse my photoshop skills)
[attachment=6188:lit.jpg]
As opposed to this.
[attachment=6189:not_lit.jpg]

Can you store (on the CPU side) an attribute in every particle vertex describing how close it is to the average sum of scene lights, so you can adjust the brightness of each individual particle in the pixel shader?
Maybe you can skip the CPU side, pass the average light position (and color, why not) to the particle vertex shader, and then for each processed particle vertex pass its distance from the average lighting to the pixel shader.
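(As a rough illustration of that per-particle brightness factor; the names and the falloff model are my own assumptions, not something from the post.)

[code]
#include <algorithm>
#include <cmath>

// Attenuate the averaged scene light by a particle's distance to it; the
// result could be stored per vertex or computed in the vertex stage and
// passed on to the pixel shader as a brightness factor.
float ParticleBrightness(const float particlePos[3],
                         const float avgLightPos[3],
                         float avgLightRange)
{
    const float dx = avgLightPos[0] - particlePos[0];
    const float dy = avgLightPos[1] - particlePos[1];
    const float dz = avgLightPos[2] - particlePos[2];
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return std::max(0.0f, 1.0f - dist / avgLightRange); // 1 near, 0 far away
}
[/code]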

Quote: You already use the deferred-transparency via stippling trick for your water, right? That (or something closer to inferred rendering) might be of use here too. [...]

Sure, you're right. My current approach is only 1 layer of transparency, but I've already thought about extending it to 3 layers. After getting rid of high-frequency normal-mapped surfaces, inferred rendering could be interesting again, though I'm a little afraid that the splitting/gathering could be a bit too expensive.
Several options I need to analyse further; thx for pushing me in the right direction :D



Quote: Can you store (on the CPU side) an attribute in every particle vertex describing how close it is to the average sum of scene lights, so you can adjust the brightness of each individual particle in the pixel shader? [...]

In my particle system the CPU is only involved at creation and destruction time of a particle. All the movement, color/size changes etc. are done on the GPU. The average light sum is a really bad approximation in my context, and I want to avoid touching each particle every x frames. Nevertheless, the idea might work in another context :D
what about this:

* overlaying a grid of diffuse light probes (using spherical harmonics or an approximation) in the area where your particles / fog will be
* then each frame, compute the diffuse light that reaches each light probe from all of the point lights
* when drawing your particles / fog, just sample from the nearest light probe (or interpolate between the nearest 8)

This would be fast because the number of light probes would be much smaller than the number of particles.
So the number of lighting calculations will be O(number of light probes * number of lights).
Sampling the lighting for each particle is an O(1) look-up.

You could probably even compute the light probes on the CPU...
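A small CPU-side sketch of how such a probe grid update could look (the grid layout, names and linear attenuation here are assumptions for illustration, not code from the thread):

[code]
#include <algorithm>
#include <cmath>
#include <vector>

struct PointLight { float pos[3]; float color[3]; float radius; };

// A regular grid of diffuse light probes, one RGB value per probe
// (i.e. just an ambient/DC term, no directional information).
struct ProbeGrid
{
    float origin[3];        // world-space position of probe (0,0,0)
    float spacing;          // distance between neighbouring probes
    int   nx, ny, nz;       // probe counts per axis
    std::vector<float> rgb; // nx*ny*nz probes, 3 floats each
};

// Per frame: accumulate the light reaching every probe from all point lights.
// Cost is O(numProbes * numLights), independent of the particle count.
void UpdateProbes(ProbeGrid& g, const std::vector<PointLight>& lights)
{
    g.rgb.assign(static_cast<size_t>(g.nx) * g.ny * g.nz * 3, 0.0f);

    for (int z = 0; z < g.nz; ++z)
    for (int y = 0; y < g.ny; ++y)
    for (int x = 0; x < g.nx; ++x)
    {
        const float p[3] = { g.origin[0] + x * g.spacing,
                             g.origin[1] + y * g.spacing,
                             g.origin[2] + z * g.spacing };
        const size_t idx = (static_cast<size_t>(z) * g.ny * g.nx
                          + static_cast<size_t>(y) * g.nx + x) * 3;

        for (const PointLight& l : lights)
        {
            const float dx = l.pos[0] - p[0];
            const float dy = l.pos[1] - p[1];
            const float dz = l.pos[2] - p[2];
            const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            const float att  = std::max(0.0f, 1.0f - dist / l.radius);
            for (int c = 0; c < 3; ++c)
                g.rgb[idx + c] += l.color[c] * att;
        }
    }
}
[/code]

The particle shader would then only need a single lookup per particle: either a filtered fetch if the grid is uploaded as a 3D texture, or a manual interpolation between the surrounding probes.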



Quote: what about this: overlaying a grid of diffuse light probes (using spherical harmonics or an approximation) in the area where your particles / fog will be [...] You could probably even compute the light probes on the CPU...

It sounds like light propagation volumes (Crytek). It needs to be a 3D grid with a relatively high resolution, so the number of light probes climbs up quite fast. Another problem would be 8 texture accesses for each particle, which is quite a lot. Nevertheless, I could imagine getting very high quality from it, but it is just too expensive for my current goal. :)
I also recommend light probes. Really, you don't need SH and you could just use the DC term of SH (which is ambient).

Diffuse lighting is low frequency, so you don't need a high probe count to get good-looking results. You can store them in a 3D texture and turn on filtering to get one texture access, or per particle batch you can submit 8 probes as constant uniforms and do the interpolation yourself in the shader. It's not going to be very expensive.

-= Dave
Graphics Programmer - Ready At Dawn Studios
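To make the "do the interpolation yourself in the shader" option a bit more concrete, here is a small sketch of a trilinear blend of the 8 surrounding probe colours (written as C++ with placeholder names; a shader version would look much the same):

[code]
// 'c' holds the 8 corner probe colours of the particle's grid cell, indexed
// as i = x + 2*y + 4*z with x, y, z in {0, 1}; (fx, fy, fz) is the particle's
// fractional position inside the cell, each in [0, 1].
void BlendProbes(const float c[8][3], float fx, float fy, float fz, float out[3])
{
    for (int i = 0; i < 3; ++i) // blend each colour channel
    {
        const float x00 = c[0][i] + (c[1][i] - c[0][i]) * fx; // y=0, z=0 edge
        const float x10 = c[2][i] + (c[3][i] - c[2][i]) * fx; // y=1, z=0 edge
        const float x01 = c[4][i] + (c[5][i] - c[4][i]) * fx; // y=0, z=1 edge
        const float x11 = c[6][i] + (c[7][i] - c[6][i]) * fx; // y=1, z=1 edge
        const float y0  = x00 + (x10 - x00) * fy;             // z=0 face
        const float y1  = x01 + (x11 - x01) * fy;             // z=1 face
        out[i] = y0 + (y1 - y0) * fz;
    }
}
[/code]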

