Sure, you're right. My current approach is only 1 layer of transparency, but I've already thought about extending it to 3 layers. After getting rid of high-frequency normal-mapped surfaces, inferred rendering could be interesting again, though I'm a little afraid that the splitting/gathering could be too expensive.
You already use the deferred-transparency-via-stippling trick for your water, right? That (or something closer to inferred rendering) might be of use here too. If you write the particle's alpha out to the G-buffer, you could still do soft particles, and a 2x2 stipple pattern could give you 1 opaque layer + 3 layers of transparency. Might get a bit hairy with lots of overlapping particles though, unless you did the inferred 2-pass rendering technique.
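For reference, a minimal sketch of how the stipple selection could look in the transparent G-buffer pass (gStippleLayer and the layer assignment are names I made up for illustration, not your actual setup):
[code]
// Sketch: a 2x2 stipple gives 4 interleaved sub-images sharing one G-buffer.
// Layer 0 = opaque (rendered normally), layers 1..3 = transparency layers.
// Each transparent draw keeps only the pixels of the 2x2 block assigned to
// its layer, so one deferred lighting pass lights all four layers at once.

int gStippleLayer; // set per draw: 1, 2 or 3 (hypothetical constant)

float4 PSStippledGBuffer(float4 screenPos : SV_Position) : SV_Target
{
    // which cell of the 2x2 block this pixel falls into (0..3)
    int2 cell = int2(screenPos.xy) & 1;
    int cellIndex = cell.y * 2 + cell.x;

    // discard pixels that don't belong to this draw's transparency layer
    clip(cellIndex == gStippleLayer ? 1.0 : -1.0);

    // ... write normal/depth/alpha to the G-buffer as usual ...
    return float4(0, 0, 0, 1); // placeholder output
}
[/code]
A final full-screen pass would then gather the 2x2 neighbourhood of each pixel and composite the layers back together.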
In my particle system the CPU is only involved at creation and destruction time of a particle. All the movement, color/size changes etc. are done on the GPU. The average light sum is, in my context, a really bad approximation, and I want to avoid touching each particle every x frames. Nevertheless, the idea might work in another context.
You have a particle system that renders batched particles. You draw them in one call, I guess, and you need to light every single one of them separately, based on its distance from the average light sum in the current scene?
Can you store (on the CPU side) an attribute in every particle vertex saying how far it is from the average position of the scene lights, so you can adjust the brightness of this individual particle in the pixel shader?
Maybe you can skip the CPU side entirely and pass the average light position (and color, why not) to the particle vertex shader, then for each processed particle vertex pass its distance from the average light to the pixel shader.
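Something like this, roughly (the g* constants and the linear falloff are placeholders I made up, not anything from the actual engine):
[code]
float4x4 gViewProj;       // usual camera matrix
float3   gAvgLightPos;    // average position of the scene lights, set per frame
float3   gAvgLightColor;  // average light color, optional
float    gFalloffRadius;  // distance at which the averaged light fades out
Texture2D    gParticleTex;
SamplerState gLinearSampler;

struct VSOut
{
    float4 pos      : SV_Position;
    float2 uv       : TEXCOORD0;
    float  lightAtt : TEXCOORD1; // 0..1 brightness from distance to avg light
};

VSOut VSParticle(float3 worldPos : POSITION, float2 uv : TEXCOORD0)
{
    VSOut o;
    o.pos = mul(float4(worldPos, 1), gViewProj);
    o.uv  = uv;
    // per-vertex distance to the average light, turned into an attenuation
    o.lightAtt = saturate(1 - distance(worldPos, gAvgLightPos) / gFalloffRadius);
    return o;
}

float4 PSParticle(VSOut i) : SV_Target
{
    float4 tex = gParticleTex.Sample(gLinearSampler, i.uv);
    return float4(tex.rgb * gAvgLightColor * i.lightAtt, tex.a);
}
[/code]
That way the CPU only has to average the lights once per frame and set a handful of shader constants.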
It sounds like light propagation volumes (Crytek). It needs to be a 3D grid with relatively high resolution, so the number of light probes climbs quite fast. Another problem would be the 8 texture accesses per particle, which is quite high. Nevertheless, I could imagine getting very high quality from it, but it is just too expensive for my current goal.
what about this:
* overlaying a grid of diffuse light probes (using spherical harmonics or an approximation) in the area where your particles / fog will be
* then each frame, compute the diffuse light that reaches each light probe from all of the point lights
* when drawing your particles / fog, just sample from the nearest light probe (or interpolate between the nearest 8)
This would be fast because the number of light probes would be much smaller than the number of particles.
So the number of lighting calculations will be O(number of light probes * number of lights).
Sampling the lighting for each particle is an O(1) lookup.
You could probably even compute the light probes on the CPU...
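If the probes are stored in a small volume texture (my assumption, not something mentioned so far), the particle shader side becomes a single fetch, and hardware trilinear filtering does the "interpolate between the nearest 8" part for free:
[code]
Texture3D    gProbeGrid;  // RGB = summed diffuse light per probe, refilled
                          // on the CPU each frame as suggested above
SamplerState gTrilinear;  // trilinear filtering, clamp addressing
float3 gGridMin;          // world-space minimum corner of the probe grid
float3 gGridSize;         // world-space extent covered by the grid

float3 SampleProbeLight(float3 worldPos)
{
    // map the world position into the grid's 0..1 texture space
    float3 uvw = saturate((worldPos - gGridMin) / gGridSize);
    return gProbeGrid.Sample(gTrilinear, uvw).rgb;
}
[/code]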