Hypothesizing a new lighting method.


I'd also like to put it on record that shadow mapping with deferred shading is no harder than it is with forward shading; in fact, many games (Crysis 1, for example) use a very simple deferred approach for shadows rather than applying shadows in the forward pass.


This can only work at all if you sample "cones" rather than single reflection vectors, and only for lights at infinity. Surfaces are not perfect mirrors, so one point on the screen does not correspond to exactly one point in the "sky" (i.e. the cubemap); it corresponds to an area. Unless you come up with something very clever (something like a distance map with a direction vector to the closest "cardinal light" might actually be an idea...), you will need a lot of samples before it looks any good -- and at that point it's cheaper to just do normal forward rendering and evaluate all lights as usual.
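To make the "cone" idea concrete, here is a minimal sketch (C++, with made-up helper names, not code from any actual engine) of the usual shortcut: pre-filter the cube map so each higher mip stores progressively blurred radiance, then turn the cone width (driven here by a roughness value) into a mip level, so a single lookup stands in for many samples:

    #include <cmath>
    #include <cstdio>

    // Map a lookup direction to a cube-map face index and (u,v) in [0,1].
    // Standard face order: +X, -X, +Y, -Y, +Z, -Z.
    void dirToCubeFaceUV(float x, float y, float z, int& face, float& u, float& v)
    {
        float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
        float ma, sc, tc;
        if (ax >= ay && ax >= az) { face = x > 0 ? 0 : 1; ma = ax; sc = x > 0 ? -z :  z; tc = -y; }
        else if (ay >= az)        { face = y > 0 ? 2 : 3; ma = ay; sc =  x;              tc = y > 0 ? z : -z; }
        else                      { face = z > 0 ? 4 : 5; ma = az; sc = z > 0 ?  x : -x; tc = -y; }
        u = 0.5f * (sc / ma + 1.0f);
        v = 0.5f * (tc / ma + 1.0f);
    }

    // Pick the mip of a pre-filtered ("blurred per mip") cube map so that one
    // lookup approximates a cone of directions instead of a single mirror ray.
    // 'roughness' is in [0,1]; numMips is a property of the (hypothetical) cube map.
    float roughnessToMip(float roughness, int numMips)
    {
        return roughness * float(numMips - 1);  // simple linear mapping; real renderers tune this curve
    }

    int main()
    {
        int face; float u, v;
        dirToCubeFaceUV(0.3f, 0.8f, -0.5f, face, u, v);
        std::printf("face %d, uv (%.2f, %.2f), mip %.2f\n",
                    face, u, v, roughnessToMip(0.4f, 8));
    }

The roughness-to-mip mapping is a tuning choice; the point is only that the "area, not a point" problem is normally solved by pre-blurring rather than by taking many samples at runtime.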

Also, if a light is not "at infinity", let's say 5 meters above you, then an object at your position and an object 5 meters away will get light from the same angle and with the same intensity. Which is just wrong, and looks very bad.

Also, I'm afraid you are trying to solve the wrong problem. First of all, forward shading doesn't look better than deferred shading. It will usually look somewhat different, but not necessarily "better" as such. The one big advantage of forward shading is that transparency is a no-brainer (and antialiasing is easier to get working), but other than that there is no reason why correctly implemented deferred shading should look any worse. On the contrary, you can get much more consistent lighting and put a lot more shader work into every pixel.

On the other hand, deferred shading is not faster. It is more scalable with respect to geometry and lights (in particular shadow-casting lights), at the expense of higher memory and bandwidth demands. Deferred shading costs roughly the number of lights multiplied by [at most] the screen resolution, plus a more-or-less constant G-buffer setup, instead of the number of lights multiplied by the number of vertices. Also, deferred shading (unless you do something like clustered shading) only needs one shadow map at a time, for the light currently being shaded, whereas forward shading needs shadow maps for all lights that may affect any of the drawn geometry at once.
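As a rough back-of-the-envelope illustration of that scaling argument (all numbers made up purely for the example):

    #include <cstdio>

    int main()
    {
        // Made-up figures; only the shape of the two cost models matters.
        long long lights       = 32;
        long long resolution   = 1920LL * 1080LL;  // deferred lighting touches at most this many pixels per light
        long long setupCost    = resolution * 4;   // "more-or-less constant" G-buffer fill, say 4 units per pixel
        long long geometryCost = 2000000;          // vertices (or shaded fragments) the forward pass lights per light

        long long deferredCost = setupCost + lights * resolution;
        long long forwardCost  = lights * geometryCost;

        std::printf("deferred ~ %lld units, forward ~ %lld units\n", deferredCost, forwardCost);
        // Deferred grows with lights x resolution; forward grows with lights x geometry,
        // which is why deferred scales better as light and scene counts go up.
    }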


Most deferred renderers will sacrifice some precision on normals, position (depth reconstruction is not 100% accurate) and other parameters, which can cause some quality loss in the final image. Luckily it's mostly last-gen games where this is visible.

This technique would only work for the one point in the scene that you use as the center of the cube map. When you render the lights into the cube map, you're doing it relative to that one point, so that one point would be lit correctly. Everything else would be skewed.
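A tiny worked example of that skew, using the 5-metre figure from earlier in the thread (hypothetical setup: cube map captured at the origin, a point light 5 m above it, a surface point 5 m away horizontally):

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  normalize(Vec3 v)    { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

    int main()
    {
        Vec3 capturePos = { 0, 0, 0 };   // where the cube map was rendered from
        Vec3 lightPos   = { 0, 5, 0 };   // light 5 m above the capture point
        Vec3 surfacePos = { 5, 0, 0 };   // a surface point 5 m away

        Vec3 assumedDir = normalize(sub(lightPos, capturePos)); // what the single cube map encodes
        Vec3 correctDir = normalize(sub(lightPos, surfacePos)); // what the surface actually needs

        float angleDeg = std::acos(dot(assumedDir, correctDir)) * 180.0f / 3.14159265f;
        std::printf("direction error at that surface point: %.1f degrees\n", angleDeg);
        // ~45 degrees of error here, and the 1/d^2 attenuation is also off by roughly a
        // factor of two (25 m^2 vs ~50 m^2) -- which is exactly the skew described above.
    }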

I used a technique on the Wii that's very similar to the one mentioned in the OP, to get lots of dynamic lights on its crappy hardware... except that instead of global cube-maps for the scene, each object had its own cube-map, so that directionality and attenuation worked properly (at least at a per-mesh granularity -- not very good for large meshes). Also, instead of cube-maps we used sphere-maps for simplicity... and you can't just render a light into a single texel; you have to render a large, diffused blob of light.

The lighting is obviously much more approximate than doing it traditionally per-pixel -- the surface normal is evaluated per-pixel, but the attenuation and direction to light are evaluated per mesh. This means that for small meshes, it's pretty good, but for large meshes, all lights start to look like directional lights.

The other downside is that you can't use nice BRDFs like Blinn-Phong or anything...

In general, this is part of a family of techniques known as image based lighting, and yes, it's common to combine a bunch of non-primary lights into a cube-map, etc -- e.g. think of every pixel in your sky-dome as a small directional light. Using it as a replacement for primary lights, across an entire scene, is a bit less popular.
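As an illustration of "every pixel in your sky-dome as a small directional light", here is a minimal sketch (a toy lat-long environment map with made-up values, not the per-object sphere-map scheme above) that accumulates diffuse lighting by treating each texel as a directional light weighted by its solid angle and N.L:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Color { float r, g, b; };

    int main()
    {
        const int W = 16, H = 8;  // toy resolution, purely illustrative
        std::vector<Color> env(W * H, Color{ 0.1f, 0.1f, 0.15f }); // dim bluish sky everywhere
        env[2 * W + 8] = Color{ 50.0f, 45.0f, 40.0f };             // one bright texel acting as "the sun"

        float n[3] = { 0.0f, 1.0f, 0.0f };  // surface normal pointing up
        float result[3] = { 0, 0, 0 };

        for (int y = 0; y < H; ++y)
        {
            float theta = 3.14159265f * (y + 0.5f) / H;      // polar angle of this row
            float solidAngle = (2 * 3.14159265f / W) * (3.14159265f / H) * std::sin(theta);
            for (int x = 0; x < W; ++x)
            {
                float phi = 2 * 3.14159265f * (x + 0.5f) / W; // azimuth of this column
                float d[3] = { std::sin(theta) * std::cos(phi), std::cos(theta), std::sin(theta) * std::sin(phi) };
                float ndotl = n[0] * d[0] + n[1] * d[1] + n[2] * d[2];
                if (ndotl <= 0) continue;                     // texel is below the surface's horizon
                Color c = env[y * W + x];
                result[0] += c.r * ndotl * solidAngle;
                result[1] += c.g * ndotl * solidAngle;
                result[2] += c.b * ndotl * solidAngle;
            }
        }
        std::printf("diffuse irradiance ~ (%.2f, %.2f, %.2f)\n", result[0], result[1], result[2]);
    }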



I did a very similar trick, but instead of sphere maps I used spherical harmonics. I also calculated an approximated visibility function per object against all other objects. This was so fast that even particles could be light emitters.
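For anyone curious what the spherical-harmonics variant looks like, here is a minimal 2-band sketch (my own simplification, not the poster's actual code, and it ignores the per-object visibility term): each light, as seen from the object's centre, is projected into four SH coefficients, and diffuse irradiance is then evaluated from the surface normal at shading time.

    #include <cmath>
    #include <cstdio>

    struct SH4 { float c[4] = { 0, 0, 0, 0 }; };

    // Real SH basis for bands 0 and 1, evaluated at unit direction (x, y, z).
    void shBasis(const float d[3], float out[4])
    {
        out[0] = 0.282095f;           // Y0,0
        out[1] = 0.488603f * d[1];    // Y1,-1
        out[2] = 0.488603f * d[2];    // Y1,0
        out[3] = 0.488603f * d[0];    // Y1,1
    }

    // Add one point light (as seen from the object's centre) into the SH vector.
    void addLight(SH4& sh, const float objPos[3], const float lightPos[3], float intensity)
    {
        float d[3] = { lightPos[0] - objPos[0], lightPos[1] - objPos[1], lightPos[2] - objPos[2] };
        float dist = std::sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
        for (float& v : d) v /= dist;
        float atten = 1.0f / (dist * dist);   // simple inverse-square falloff
        float y[4]; shBasis(d, y);
        for (int i = 0; i < 4; ++i) sh.c[i] += intensity * atten * y[i];
    }

    // Evaluate diffuse irradiance for a normal; pi and 2*pi/3 are the
    // cosine-lobe convolution weights for bands 0 and 1.
    float irradiance(const SH4& sh, const float n[3])
    {
        float y[4]; shBasis(n, y);
        const float A[4] = { 3.141593f, 2.094395f, 2.094395f, 2.094395f };
        float e = 0;
        for (int i = 0; i < 4; ++i) e += A[i] * sh.c[i] * y[i];
        return e > 0 ? e : 0;
    }

    int main()
    {
        float objPos[3] = { 0, 0, 0 }, lightPos[3] = { 0, 3, 0 }, normal[3] = { 0, 1, 0 };
        SH4 sh;
        addLight(sh, objPos, lightPos, 20.0f);   // a particle or lamp acting as an emitter
        std::printf("irradiance ~ %.3f\n", irradiance(sh, normal));
    }

Because the per-object state is just a handful of floats, adding another emitter is only a few multiply-adds, which is why even particles can afford to contribute.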

While I can see how image-based lighting might work great on mobile devices, how does this translate to desktops, where the ALU/TEX ratio is much higher? I would guess that using the abundant ALU to compute a dozen lights visible from one fragment may very well be faster than doing yet another texture lookup?

Just look up forward+ rendering.

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal


In console/PC games, it's standard to use IBL for background/fill/bounce/ambient lights (GI), instead of a flat ambient colour, and then compute the direct lighting analytically.

For film-quality IBL, you don't pre-convolve the probes, and each pixel has to read thousands of importance-sampled values from the IBL probes and integrate them using the BRDF (which is both ALU and TEX heavy)...
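For a sense of what that costs, here is a Monte Carlo sketch of the idea (Lambert diffuse only, with a made-up analytic sky standing in for the probe texture reads): many sampled directions per shading point, each one a probe fetch plus BRDF math.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Stand-in for reading the IBL probe: a made-up sky, bright towards +y.
    float environmentRadiance(float x, float y, float z)
    {
        (void)x; (void)z;  // only elevation matters in this toy sky
        return 0.2f + 2.0f * std::fmax(0.0f, y);
    }

    int main()
    {
        std::mt19937 rng(42);
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);

        // Shade one point with normal (0,1,0). Cosine-weighted hemisphere sampling:
        // pdf = cos(theta)/pi, so with a Lambert BRDF (albedo/pi) the estimator
        // reduces to albedo * average(sampled radiance).
        const int   numSamples = 1024;  // the "thousands of reads per pixel" mentioned above
        const float albedo     = 0.8f;
        float sum = 0.0f;

        for (int i = 0; i < numSamples; ++i)
        {
            float u1 = uni(rng), u2 = uni(rng);
            float r   = std::sqrt(u1);
            float phi = 6.2831853f * u2;
            float x = r * std::cos(phi);
            float z = r * std::sin(phi);
            float y = std::sqrt(1.0f - u1);  // cosine-weighted direction around the normal
            sum += environmentRadiance(x, y, z);
        }
        std::printf("diffuse ~ %.3f\n", albedo * sum / numSamples);
    }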


Which is why most people pre-convolve the probes. I'm not sure whether UE4 does; then again, I'm not sure exactly what it is they're even doing. All I've gathered is "cube map array" and their ability to relight the probes in realtime.

