Particle Lighting 101


I've never used anything but self-lit particles... I'm going through this talk, and there are a lot of references to lighting the particles to fit the scene:

http://www.slideshare.net/guerrillagames/the-production-and-visual-fx-of-killzone-shadow-fall

Also, later on they mention baking out the normals from their Houdini smoke sim.

I'm curious if there is a simpler place to start with particle lighting. For example, what is the easiest possible way to make the lighting of particles fit a scene? Everything I find is either self-lit or way too advanced. This isn't for a particular project, I'm just trying to learn. Specifically, I think I am missing the intuition behind how a particle has a normal.

Nick


I would think a particle "faces" all directions at once, since it is supposed to be an extremely small... well, particle.

So if I were lighting a particle from external light sources, I would figure out the strength (color) of the light at the particle's position and then apply it to the whole particle, blending multiple lights if necessary.
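As a minimal sketch of that idea, here is a GLSL vertex shader that accumulates a few point lights at the particle's position and passes the result on to tint the whole sprite. The uniform names and the inverse-square falloff are my own assumptions, not anything from the talk:

#version 330 core

layout(location = 0) in vec3 particlePos;   // particle center in world space

uniform mat4 viewProj;
uniform float pointSize;                    // needs glEnable(GL_PROGRAM_POINT_SIZE)

const int NUM_LIGHTS = 4;
uniform vec3 lightPos[NUM_LIGHTS];          // world-space light positions
uniform vec3 lightColor[NUM_LIGHTS];        // color * intensity

out vec3 vLight;    // accumulated light, applied uniformly to the sprite

void main()
{
    vec3 total = vec3(0.0);
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        float d = distance(lightPos[i], particlePos);
        // No normal term: the particle is treated as facing every
        // direction at once, so only distance attenuates the light.
        total += lightColor[i] / max(d * d, 1e-4);
    }
    vLight = total;

    gl_Position = viewProj * vec4(particlePos, 1.0);
    gl_PointSize = pointSize;
}

The fragment shader then just multiplies the sprite texture by vLight.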

The method they use in that game is simpler than they let on. The normals they baked out are just for the sprites, to give them more volume for lighting.

All they did was take sample points from each particle (generated in the CPU logic) and upload them. These sample points are just imaginary vertices in a subdivided grid based on the particles.

In the lighting phase they would sample these points for each light and treat them like a normal g-buffer, though I am not sure what was used to build the particles' g-buffer, due to limitations.

The simplest method, though, is to just render each particle via forward lighting. But it's expensive.
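To make that concrete, a forward-lit particle fragment shader might look roughly like this, assuming camera-facing quads with UVs, a baked sprite normal map, and a small fixed light list (all the names here are illustrative):

#version 330 core

in vec2 vUV;            // sprite texture coordinates
in vec3 vWorldPos;      // world-space position of this fragment

uniform sampler2D spriteTex;     // particle color/alpha
uniform sampler2D normalTex;     // baked sprite normal map (e.g. from a sim)
uniform mat3 billboardBasis;     // rotates tangent-space normals to world space

const int NUM_LIGHTS = 4;
uniform vec3 lightPos[NUM_LIGHTS];
uniform vec3 lightColor[NUM_LIGHTS];

out vec4 fragColor;

void main()
{
    vec4 albedo = texture(spriteTex, vUV);
    vec3 n = normalize(billboardBasis * (texture(normalTex, vUV).xyz * 2.0 - 1.0));

    vec3 lit = vec3(0.0);
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        vec3 toLight = lightPos[i] - vWorldPos;
        float d2 = max(dot(toLight, toLight), 1e-4);
        float ndotl = max(dot(n, normalize(toLight)), 0.0);
        lit += lightColor[i] * ndotl / d2;   // Lambert + inverse-square falloff
    }
    fragColor = vec4(albedo.rgb * lit, albedo.a);
}

The cost comes from running this for every light on every overlapping translucent fragment, which is exactly why deferred-style tricks like the grid sampling above exist.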

OK, I have a follow-up question. Most of what I've found so far just makes up spherical-ish normal maps for the particles. That seems odd to me, but I'm giving it a shot. In this paper, the normal mapping of particles seems to depend on reprojecting onto the so-called HL2 basis:

Practical Particle Lighting - Roxlu

If I'm understanding basis projection correctly, he is only doing this as a way of summing up the scene's light. I could ignore this step and just do regular normal mapping by looping through my analytical lights or looping through my environment cubes, right? What is the point of the HL2 projection in this case?

I don't know what HL2 is; I can't find any information on the subject except for Half Life 2. Mind you, I am shit with abbreviations.

The most common topics a Google search gives me are along the lines of a sparse grid representation, so it's likely to do with indirect illumination.

It looks like HL2 does mean Half Life 2. They seem to use a particular orthonormal basis in their lighting computations. The slide "Resurrecting an old friend: HL2 basis" has the exact vectors. You can also find them here: http://www.valvesoftware.com/publications/2006/SIGGRAPH06_Course_ShadingInValvesSourceEngine.pdf
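For convenience, the three tangent-space vectors are (-1/√6, ±1/√2, 1/√3) and (2/√6, 0, 1/√3). Here is a sketch of how they get used, per my reading of the Valve course notes (the function and variable names are mine): each basis direction accumulates its own light color ahead of time, and at shading time the normal-mapped normal blends the three with squared-dot weights:

// The three HL2 (Source engine) basis vectors in tangent space.
const vec3 hl2Basis[3] = vec3[3](
    vec3(-0.40825, -0.70711, 0.57735),   // (-1/sqrt(6), -1/sqrt(2), 1/sqrt(3))
    vec3(-0.40825,  0.70711, 0.57735),   // (-1/sqrt(6),  1/sqrt(2), 1/sqrt(3))
    vec3( 0.81650,  0.0,     0.57735));  // ( 2/sqrt(6),  0,         1/sqrt(3))

// Blend three pre-accumulated light colors by how much the tangent-space
// normal agrees with each basis direction (squared-dot weights, normalized).
vec3 shadeHL2(vec3 tangentNormal, vec3 lightColors[3])
{
    vec3 w = vec3(max(dot(tangentNormal, hl2Basis[0]), 0.0),
                  max(dot(tangentNormal, hl2Basis[1]), 0.0),
                  max(dot(tangentNormal, hl2Basis[2]), 0.0));
    w *= w;
    w /= max(w.x + w.y + w.z, 1e-4);
    return w.x * lightColors[0] + w.y * lightColors[1] + w.z * lightColors[2];
}

And to the earlier question: as far as I can tell, the projection is just a way of folding arbitrarily many lights into three colors ahead of time. If you are willing to loop over your analytical lights or environment cubes per pixel, plain normal mapping works fine without it.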

Sorry, one more quick question. In the linked article they specify a "quick and dirty" normal map generation formula to simulate a spherical particle:


// billboard_normal == -view_direction (the facing of the sprite plane)
// corner - center: vector from the particle's center to this vertex's corner
// curvature_amount: 0 = flat billboard, 1 = fully spherical
half3 n = lerp(billboard_normal, normalize(corner - center), curvature_amount);

In this example, where would I get the center and corner of my particles? I am using OpenGL point sprites. I think they are suggesting to compute this lighting in the vertex shader, so I can't use my knowledge of the final rasterized size (via gl_PointSize) to figure this out. Any ideas?

Nick
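For what it's worth, one way around this with point sprites is to move the normal computation to the fragment shader, where gl_PointCoord plays the role of corner - center: it runs from 0 to 1 across the rasterized sprite, whatever gl_PointSize ended up being. A minimal, untested sketch (curvatureAmount is the same knob as in the article's formula):

#version 330 core

uniform float curvatureAmount;   // 0 = flat billboard, 1 = fully spherical

out vec4 fragColor;

void main()
{
    // Remap gl_PointCoord from 0..1 to -1..1, flipping y since the
    // point-sprite origin is the upper-left corner by default.
    vec2 p = vec2(gl_PointCoord.x, 1.0 - gl_PointCoord.y) * 2.0 - 1.0;

    float r2 = dot(p, p);
    if (r2 > 1.0) discard;                 // clip to the inscribed circle

    // View-space normal: xy from the sprite plane, z toward the viewer.
    vec3 sphereNormal    = vec3(p, sqrt(1.0 - r2));
    vec3 billboardNormal = vec3(0.0, 0.0, 1.0);   // == -view_direction here
    vec3 n = normalize(mix(billboardNormal, sphereNormal, curvatureAmount));

    // ... feed n into the lighting of your choice; visualized for debugging:
    fragColor = vec4(n * 0.5 + 0.5, 1.0);
}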

This topic is closed to new replies.
