# Realistic dusty particle clouds


## Recommended Posts

Ok, so how to make particles like these:
http://imageshack.us/f/44/metro203320100502141054.jpg/
http://www.geforce.com/Active/en_US/shared/images/xml_graphics/metro-last-light/screenshot-2.jpg

That means filling rooms with dusty clouds that can be lit by (multiple) light sources, including shadow mapping. In my particular case, I need to fill the rooms with snowdust. I already tried a couple of things myself, but the main problems are low performance and blending difficulties. All in all, it just doesn't look that good. Anyway, what I tried:
- Render a whole bunch of billboards (a particle cloud)
- Animate the particles a bit (slight sinus movement)
- For each billboard, apply lighting as on any other normal surface; that can include using normal maps to give them some more volume
- Blend the particles either additively or with normal transparency into the scene
Additive blending can give quite cool results, but quickly becomes overdone when too many particles overlap, or when a bright wall is behind them; dust and smoke shouldn't really be additive. Normal transparency gives better results, but again it's difficult to dose, as the amount of overlapping particles is unpredictable (they move a bit / the camera moves around).
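To show the difference in plain numbers, here is a toy single-channel sketch (illustrative values only, not engine code): additive blending clamps to white after a layer or two over a bright background, while "over" (normal alpha) blending converges toward the particle colour no matter how many layers pile up.

```python
def blend_additive(dst, src, n):
    """Add src onto dst n times, clamped to 1.0 (single channel)."""
    for _ in range(n):
        dst = min(1.0, dst + src)
    return dst

def blend_over(dst, src, alpha, n):
    """Standard src-over compositing, applied n times."""
    for _ in range(n):
        dst = src * alpha + dst * (1.0 - alpha)
    return dst

wall = 0.8   # bright wall behind the cloud
dust = 0.2   # per-particle brightness

print(blend_additive(wall, dust, 5))              # 1.0 -- blown out after one layer
print(round(blend_over(wall, dust, 0.25, 5), 3))  # 0.342 -- settles toward the dust colour
```

This is why the additive result "quickly becomes overdone": it has no fixed point below full white, whereas the over-blend result is bounded by the particle colour itself.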

On top of that, the performance is pretty low due to the overdraw. I could fix that somewhat by rendering the billboards into a smaller buffer and compositing that into the final result later on. But still, it's a pain in the ass. Crysis already did some cool volumetric fog stuff in 2008, so maybe I'm doing something wrong.

Another issue is that I should (GPU-)sort the billboards to blend them correctly. Plus, since the billboards typically face the camera, light sources coming from the side will have less effect, since the billboards aren't facing in their direction. Some ("half-angle") techniques show how to rotate the quads towards the light to improve the results, but in my case there can easily be multiple lights, and multipass rendering of the whole particle cloud for each light becomes even slower, I'm afraid.
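The sorting part is at least straightforward conceptually; a minimal CPU-side sketch (names are illustrative, not from any engine) that orders particles back-to-front along the view direction so that over-blending composites correctly:

```python
def sort_back_to_front(positions, cam_pos, cam_dir):
    """Return particle indices ordered farthest-first along the (unit) view direction."""
    def view_depth(p):
        # projection of (p - camera) onto the view direction
        return sum((pc - cc) * dc for pc, cc, dc in zip(p, cam_pos, cam_dir))
    return sorted(range(len(positions)),
                  key=lambda i: view_depth(positions[i]),
                  reverse=True)

particles = [(0, 0, 5), (0, 0, 2), (0, 0, 9)]
print(sort_back_to_front(particles, (0, 0, 0), (0, 0, 1)))  # [2, 0, 1]
```

On the GPU you would do the equivalent with a bitonic or radix sort over the depth keys, but the ordering criterion is the same.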

But maybe there are other methods. I saw this for example: "Deferred Particles".
Instead of lighting each particle, they are first accumulated into a buffer. Then the buffer is lit afterwards, as you do with deferred lighting. Looks promising, and gives quite a speed gain. But... how to know whether a certain pixel is shadowed or not by a shadow map? Blending and storing depth or (world) positions isn't really going to work; I doubt you can (somewhat) correctly shine a light beam through the middle of a cloud. Obviously, this paper is focused on smoke, which is much more dense than the dusty crap I'm looking for.
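For reference, the shadow-map test itself is trivial when you have exactly one position per pixel; a minimal sketch (the bias value is a hypothetical tuning constant):

```python
def in_shadow(depth_from_light, shadow_map_depth, bias=0.005):
    """True if the fragment is farther from the light than the stored occluder depth."""
    return depth_from_light - bias > shadow_map_depth

print(in_shadow(0.7, 0.5))  # True  -- occluder at 0.5 blocks a fragment at 0.7
print(in_shadow(0.4, 0.5))  # False -- fragment is in front of the occluder
```

The trouble described above is exactly that a blended particle buffer has no single `depth_from_light` per pixel to feed into this test, since many semi-transparent particles at different depths contribute to one pixel.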

- Oh, quick question: do you guys advise using a few large particles (= less accurate shadows, but faster) or a ton of smaller particles (= more accurate shadows, but you'll see the same sprite texture repeating everywhere, and it's slower, maybe)?

Cheers,
Rick

##### Share on other sites
Basic way:
get a group of smallish billboards with some basic physics (if they hit something, bounce).
Give them two textures, a normal map and a diffuse map; preferably have alpha in the diffuse.
Make a mini-GBuffer (it probably wouldn't matter if it was smaller than the screen, seeing as dust clouds are... cloudy anyway).
Render geometry representing the rays you see in pic 2.
In the pixel shader of this geometry, light it using the mini-GBuffer (which method of lighting you use is up to you there).
Use this creatively and you can have colours attached to the ray geometry, getting results like in pic 1.
Job done.

EDIT: if the mini-GBuffer is smaller than the screen, you may want a full-sized Z-buffer, to make sure the particles don't overdraw foreground objects.

##### Share on other sites
Let's see if I've got it straight. First you make buffer(s) that contain normals/albedo. Then you draw the lamp geometry (cones/spheres/cylinders) into another buffer; as input, it grabs the normals/albedo from the first step. That sounds like the method used in the paper from the first post, a deferred-rendering approach.

That should certainly boost the performance. But... in my case (see the attached pic), the lamp goes through the center of the cloud. So some billboards are in front of the "lamp geometry", some are behind it, and yet others are inside (lit). To make it more difficult, the lamp has a shadow map; notice that the billboards behind the whiteboard are darker. This is not really possible using the data from a buffer, because you can only hold one position/depth value per pixel. I'm not sure whether Metro 2033 or Crysis from the examples actually do shadows within the particle clouds, but it's certainly an effect I'd like to have.

The screenshot does lighting on each individual billboard, but obviously that hammers the GPU (although I tried doing it in a smaller buffer, and that gave a small speed win). Another problem is the facing of the billboards. This shot looks ok, but if you have, for example, a rotating fan above the cloud, you can clearly see the rectangular billboard shapes become visible. This is due to A: lack of sorting, B: the particles moving a bit, and C: the color result also depending on how many boards are placed behind each other. Using more boards (with lower alpha values) helps mask this problem, but it becomes even slower, of course.

I have the feeling I'm using the wrong technique, but I can't see how other tricks can achieve the same effects (at reasonable speed)...

##### Share on other sites
You got my concept a little off, but regardless, I've found a few flaws with what I proposed.
I've got an idea, but it's extremely overcomplicated, so it wouldn't be a speed win at all.

All I can say is you should consider forward rendering techniques, drawing the billboards one at a time; I'm just thinking about how to get the rays looking good.
I'll keep thinking; if I come up with something I'll let you know.
In the meantime, good luck.

##### Share on other sites
Have you considered not using billboards, but using geometry as the smoke/dust volume? You could have two volume objects, one inside the other, with animated alpha-blended textures. If you use the pixel shader to blend the texture to fully transparent where the normal of each face becomes perpendicular to the view direction, it would give a nice blended effect at the edges, and shadows/lighting etc. would work just as for a normal object. Plus it's all done in one pass. You could also animate the geometry in the vertex shader with some nice random effects.

Never tried it but it might work... It would mimic a real dust cloud more accurately too.
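The edge fade described above boils down to scaling alpha by how face-on the surface is; a minimal sketch of that per-pixel math (names and the base alpha are hypothetical):

```python
def edge_fade_alpha(normal, view_dir, base_alpha=0.6):
    """Scale alpha by |N.V| so silhouette-on faces fade to transparent
    (both vectors assumed unit length)."""
    ndotv = abs(sum(n * v for n, v in zip(normal, view_dir)))
    return base_alpha * ndotv

print(edge_fade_alpha((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # 0.6 -- face-on, fully visible
print(edge_fade_alpha((1.0, 0.0, 0.0), (0.0, 0.0, -1.0)))  # 0.0 -- silhouette, faded out
```

In a shader this would be one `dot()` in the pixel stage; the soft silhouette comes for free as the cosine falls off toward the edges of the volume mesh.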

##### Share on other sites
For distant clouds that works, I think. But in my case you travel through the fog, dust, gas, farts, or whatever it is. So once you step inside the geometry, you suddenly lose a layer. With just a few meshes inside each other, that would cause some serious "popping" effects. Plus, light beams that travel through the core of a cloud will be ignored, because there is no geometry there...

Cheers

##### Share on other sites
When you transition inside the cloud can't you just add a mini-skybox around the camera using the same textures as the cloud?

What do you mean by the lightbeam thing?

##### Share on other sites
By light beam, he means rays of light, like god-rays.
Also, just a thought: can't we use the "particle buffer" as an alpha mask for the rays of light (if you draw them as geometry)?
All you'd need is a way of calculating where the rays would be, and what colour and intensity they have.

##### Share on other sites
Sorry, yes, I know he meant a ray of light, I just wasn't sure of the issue... It depends on what he wants the image to look like. If you draw the god-ray first and then draw the dust cloud over it, what would be the issue? Should the god-ray that comes out of the cloud be less intense?

##### Share on other sites
Hmmm, I could indeed render the lights as volumes to a buffer first. For beams (rays) that use shadow maps, such as the spotlight in the shot above, I could make the ray less dense, or stop it completely where it gets occluded. Then render the particles and add brightness to them using that buffer. I already have such a buffer, by the way.

However... it still has issues. If the particle cloud is completely in front of the beam volume, it shouldn't be lit by it. Neither if it's completely behind the beam. If the beam is traveling through the cloud, the end result should depend on how many particles are in front of the beam: the more there are, the less you'll see of it. The problem is that a 2D image with a pre-rendered shaft does not contain volumetric info. So you'll still have to test whether the particle pixels intersect the beam geometry...
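That intersection test is at least cheap if the beam is an analytic cone rather than a rendered image; a minimal sketch (all names hypothetical) testing whether a particle centre lies inside a spotlight cone:

```python
import math

def inside_cone(p, apex, axis, half_angle, length):
    """True if point p lies inside a cone starting at apex,
    pointing along the unit vector axis."""
    v = [pc - ac for pc, ac in zip(p, apex)]
    d = sum(vc * axc for vc, axc in zip(v, axis))        # distance along the axis
    if d < 0.0 or d > length:
        return False
    # radial distance from the axis; max() guards float error under the sqrt
    radial = math.sqrt(max(0.0, sum(vc * vc for vc in v) - d * d))
    return radial <= d * math.tan(half_angle)

print(inside_cone((0, 0, 3), (0, 0, 0), (0, 0, 1), math.radians(30), 10))  # True
print(inside_cone((5, 0, 3), (0, 0, 0), (0, 0, 1), math.radians(30), 10))  # False
```

Per-particle (rather than per-pixel) it's only a few ops, and it gives you the in-front/inside/behind classification the 2D shaft buffer can't; the shadow-map occlusion along the beam would still need a separate lookup.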

For small particle-clouds it doesn't matter that much. But think about a room filled with smoke or something, and a couple of flashlights are moving through the cloud...

But maybe I'm overcomplicating things, and games are using simpler methods for the sake of performance?